
Mobile Robot Navigation in a Corridor Using Visual Odometry

Bayramoglu, Enis; Andersen, Nils Axel; Poulsen, Niels Kjølstad; Andersen, Jens Christian; Ravn, Ole

Published in:

Proceedings of the 14th International Conference on Advanced Robotics

Publication date: 2009

Document Version

Publisher's PDF, also known as Version of record

Link to publication

Citation (APA):

Bayramoglu, E., Andersen, N. A., Poulsen, N. K., Andersen, J. C., & Ravn, O. (2009). Mobile Robot Navigation in a Corridor Using Visual Odometry. In Proceedings of the 14th International Conference on Advanced Robotics (pp. 58). IEEE.


Mobile Robot Navigation in a Corridor Using Visual Odometry

Enis Bayramoğlu∗, Nils Axel Andersen∗, Niels Kjølstad Poulsen†, Jens Christian Andersen∗ and Ole Ravn∗

∗Department of Electrical Engineering, Automation and Control Group, Technical University of Denmark, Lyngby, Denmark
Emails: {eba,naa,jca,or}@elektro.dtu.dk

†Department of Informatics and Mathematical Modelling, Technical University of Denmark, Lyngby, Denmark
Email: nkp@imm.dtu.dk

Abstract—Incorporation of computer vision into mobile robot localization is studied in this work. It includes the generation of localization information from raw images and its fusion with the odometric pose estimation. The technique is then implemented on a small mobile robot operating in a corridor environment. A new segmented Hough transform involving an improved way of discretization is used for image line extraction. The vanishing point concept is then incorporated to classify lines as well as to estimate the orientation. A method involving the iterative elimination of the outliers is employed to find both the vanishing point and the camera position. The fusion between the vision based pose estimation and the odometry is achieved with an extended Kalman filter. A distance driven error model is used for the odometry while a simple error model with constant noise is assumed for the vision. An extended Kalman filter as a parameter estimator is also applied to estimate odometry parameters. Experimental results are included. The robustness and the precision of the entire system is illustrated by performing simple navigation tasks.

I. INTRODUCTION

The field of robot vision is gaining prominence as its possibilities are explored. The importance of vision in humans compared to all other senses attests to this. Many tasks performed by humans today require vision, and their automation could become possible as the computer vision field develops. The most significant advantage of vision is its capability to acquire information even in very complex environments, without interfering with the surroundings. This makes it a very flexible sense.

On the other hand, mobile robotics, still in its infancy, is one area where flexibility towards the environment is most desired. The major challenge of mobile robotics, as the name suggests, is to navigate the robot to where it needs to be. In addition to the obvious requirement of primary motion capabilities, such as turning and moving backwards and forwards, the robot has to sense its possibly dynamic environment, determine its own location and generate a motion plan accordingly.

The project described in this article aims to contribute to both fields by applying computer vision to perform one of the most important tasks of mobile robotics: localization. The scope of this project includes the generation of this information from the images as well as its fusion with wheel encoders. A small mobile robot operating in a corridor environment is then equipped with this pose estimator to perform simple navigation tasks as an indicator of overall performance.

The fields of computer vision and mobile robot localization have been studied extensively to date. The work of Kleeman [8] is a good example of mobile robot localization with multiple sensors, namely odometry and advanced sonar. Camera pose estimation, independent of robotics, is also a very active research field. Yu et al. [16] used the trifocal tensor with point features to estimate the path of the camera from a sequence of images, and Makadia [11] investigated camera pose estimation restricted to a plane. On the subject of mobile robot localization with vision, Andersen et al. [2] employed monocular vision to assist the laser scanner, Lin [9] used stereo vision, and Munguia and Grau [12] studied monocular vision directly.

Previous work investigating problems similar to this project should also be noted. Tada [15] uses monocular vision in a corridor and incorporates the vanishing point to follow the center, while Shi et al. [14] study navigation in a corridor using lines, but they are mainly interested in a safe region to travel in instead of the pose and follow a very different strategy. Guerrero et al. [6] obtain incremental measurements from lines in environments unknown a priori, and they also carry out experiments in a corridor.

II. SOLUTION OUTLINE

In the assumed setup, the mobile robot has a single camera mounted on it, without any ability to turn or move w.r.t. the robot itself. The active wheels of the robot also have encoders available for dead reckoning. The robot localization is performed in a corridor, where the vision is used to estimate the robot orientation and its position across the width of the corridor. Dead reckoning, on the other hand, is used to keep track of both the orientation and the position across the width and the depth of the corridor. The corridor dimensions are assumed to be known a priori.

The choice of the corridor as the working environment has two important reasons. First, the corridor is a common part of most domestic environments, and being able to navigate in it has potential on its own. Second, it has a very regular and simple structure, making it easier to start the development of a more general vision based solution.

Dead reckoning is always applied to keep track of the pose estimation with a growing error. Therefore, for each raw image taken, a prior estimate is available to aid the visual estimation. Initially, the robot is either started from a known location or a high uncertainty in the pose is assumed.

Visual estimation is performed using image lines as features. Lines are extracted using a form of segmented Hough transform. These lines are then classified w.r.t. direction using invariant environmental information and the prior estimate. The vanishing point corresponding to the direction along the corridor is also calculated robustly. The lines are then matched to each of the four lines lying along the corners of the corridor. Finally, the vanishing point is used to estimate the orientation while the line matches are used to estimate the translation.

When the visual estimate is available, it is checked for consistency using the prior estimate and Bayesian hypothesis testing. If it passes the check, it is fused with the dead reckoning using an extended Kalman filter (EKF). The model for the dead reckoning is modified to allow for the estimation of its parameters along with the pose itself, resulting in an extended Kalman filter as a parameter estimator (EKFPE).

The processing of each image spans a few sampling periods of the wheel encoders. During this time, dead reckoning is continued and all the encoder output is recorded at the same time. When the visual estimate is available, it is used to refine the estimate at the time of the taking of the image, and the estimate for the current time is recalculated using the recorded wheel encoder output, as sketched below.
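
A minimal sketch of this delayed-measurement replay, assuming the pose at the image timestamp has already been refined by the visual update; the function and argument names are illustrative, and `odometry_update` stands for the dead reckoning step described in Section IV:

```python
def recompute_current_estimate(pose_at_image, encoder_log, odometry_update):
    """Rebuild the estimate for the current time: starting from the pose
    refined by the (delayed) visual estimate, replay the wheel encoder
    samples recorded while the image was being processed."""
    pose = pose_at_image
    for l_l, l_r in encoder_log:   # recorded (left, right) traveled distances
        pose = odometry_update(pose, l_l, l_r)
    return pose
```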

The next section describes the vision based part of the pose estimation, including the line extraction, line matching and pose estimation steps. Section IV is concerned with the fusion of the vision based estimation with the odometry. Section V presents the results of the work.

III. VISION

A. Line Extraction

An edge image is first created to be used in the line extraction. The Canny edge detection algorithm is used for this purpose. The algorithm was first proposed and explained in [4]. It is only slightly modified here by performing the non-maximal edge pixel suppression through comparison with the vertical and horizontal neighbors only, that is, excluding the diagonal neighbors. This modification results in thinner lines, which are faster and more precise for line extraction.
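
A minimal sketch of this modified suppression step, assuming the gradient magnitude and components are already available (e.g. from Sobel filters); quantizing the comparison direction to horizontal or vertical only is the modification described above:

```python
import numpy as np

def non_max_suppress_4neighbor(mag, gx, gy):
    """Keep a pixel only if it is a local maximum against its horizontal
    OR vertical neighbors, chosen by the dominant gradient component;
    diagonal neighbors are never consulted."""
    out = np.zeros_like(mag)
    for r in range(1, mag.shape[0] - 1):
        for c in range(1, mag.shape[1] - 1):
            if abs(gx[r, c]) >= abs(gy[r, c]):
                # gradient mostly horizontal -> compare left/right neighbors
                keep = mag[r, c] >= mag[r, c - 1] and mag[r, c] >= mag[r, c + 1]
            else:
                # gradient mostly vertical -> compare up/down neighbors
                keep = mag[r, c] >= mag[r - 1, c] and mag[r, c] >= mag[r + 1, c]
            if keep:
                out[r, c] = mag[r, c]
    return out
```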

The segmented Hough transform is then applied to the resulting edge image. The reader could refer to [1] for the Hough transform used for general feature extraction. The standard Hough transform (SHT) used for line extraction is explained in [5], and [7] gives an overview of the variations of SHT. The segmented Hough transform is preferred in this work due to its higher speed and robustness.

The exact procedure is to first segment the edge image into 10x10 subimages. The Hough transform is then applied to those images. This segmentation can be shown to speed up the overall Hough transform proportionally to the square root of the number of segments. The performance is further increased by the extensive use of table lookups made possible by the small size of each subimage. The lines obtained from each subimage are then traced across the subimages.
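
A minimal sketch of the tile-wise transform; whether "10x10" denotes the grid or the tile size in pixels is ambiguous in the extraction, so the sketch simply takes a tile size parameter, and all names are illustrative:

```python
import numpy as np

def hough_tile(edge_tile, n_theta=32):
    """Small Hough accumulator for one sub-image.  Peaks in `acc` give
    (rho, theta) line candidates, to be traced across neighboring tiles."""
    h, w = edge_tile.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edge_tile)
    for j, theta in enumerate(thetas):
        rhos = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int)
        np.add.at(acc, (rhos + diag, j), 1)   # small tables keep lookups cheap
    return acc, thetas

def segmented_hough(edge_img, tile=10):
    """Run the transform tile by tile; the small per-tile accumulators are
    what make extensive table lookups practical."""
    H, W = edge_img.shape
    return {(r, c): hough_tile(edge_img[r:r + tile, c:c + tile])
            for r in range(0, H, tile) for c in range(0, W, tile)}
```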

The algorithm with all the performance optimizations resulted in a 15-60 times speed increase compared to the OpenCV¹ implementation of SHT. The Hough transform is also modified to keep track of the supporting pixels for each line through the use of look-up tables. Two line segments are combined if their combined supporting pixels describe a line precisely enough with a threshold. As well as being a robust criterion for line combination, this provides the end points of lines robustly and without any need for further processing.

B. Vanishing Point Detection

If a number of lines are parallel in the 3D scene, their projections on the image all meet at a single point. This point is the so-called "vanishing point" specific to the direction of those lines. The vanishing point is a useful tool both for the detection of the 3D directions of image lines and for the calculation of the camera orientation w.r.t. that direction. The vanishing point is expected to sit on a point where many lines intersect with all others, if there are enough supporting lines. When all the intersection points between the image lines are calculated, a dense cluster is supposed to be formed around the vanishing point. Using the available prior estimate, it is also possible to calculate an estimate for the vanishing point along with an uncertainty.

Given a direction in the world coordinates described by the unit vector $\vec{v}$, this vector is first transformed to the image coordinates where it results in $\vec{v}^{\,i}$. Note that $\vec{v}^{\,i}$ will be a function of the camera orientation but not the camera translation. The coordinates of the vanishing point on the image will then be given by:

$$x_v = \frac{v^i_x}{v^i_z}, \qquad y_v = \frac{v^i_y}{v^i_z} \tag{1}$$

In this implementation, first the intersection points are calculated. They are then filtered using the vanishing point estimate obtained from (1). The actual vanishing point is calculated by iteratively removing the furthest point from the center of gravity of the remaining points. When the number of points is reduced to a certain threshold, the center of the remaining points is used as the vanishing point. Equation (1) is then used to calculate the camera orientation. This scheme has proven to be very robust. For an alternative method of vanishing point detection, the reader should refer to [3]. It is important to note that (1) provides two constraints whereas the complete camera orientation is described by three parameters. In the case of the mobile robot, the camera is constrained to turn around a single axis, therefore those two constraints suffice.
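
A minimal sketch of the iterative furthest-point removal, assuming lines in the Cartesian (a, b, c) form of equation (2) in the next subsection; the `keep` threshold is illustrative, and the prior-based pre-filtering is omitted:

```python
import numpy as np

def intersection(l1, l2):
    """Intersection of two image lines given as (a, b, c) with a*x + b*y = c."""
    A = np.array([[l1[0], l1[1]], [l2[0], l2[1]]], dtype=float)
    if abs(np.linalg.det(A)) < 1e-9:          # near-parallel pair: skip
        return None
    return np.linalg.solve(A, np.array([l1[2], l2[2]], dtype=float))

def vanishing_point(points, keep=10):
    """Iteratively drop the point furthest from the center of gravity of
    the remaining points; the center of the final `keep` points is the
    vanishing point estimate."""
    pts = np.asarray(points, dtype=float)
    while len(pts) > keep:
        center = pts.mean(axis=0)
        d2 = ((pts - center) ** 2).sum(axis=1)
        pts = np.delete(pts, int(np.argmax(d2)), axis=0)
    return pts.mean(axis=0)
```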

¹An open computer vision library for C/C++, originally developed by Intel.

C. Pose Estimation

As the vanishing point contains information about only the orientation of the camera, image lines are also matched to the known 3D lines to obtain constraints on the camera translation. Each line contributes one constraint for the position along the width of the corridor.

Assume that an image line is described by the Cartesian line equation given in (2),

$$a x + b y = c \tag{2}$$

where $a$, $b$ and $c$ are calculated during line extraction. Then if the point $\vec{p}$ is a point on the line in 3D and the vector $\vec{v}$ is the direction of the line, the transformation of those vectors to the image coordinates $(\vec{p}^{\,i}, \vec{v}^{\,i})$ satisfies the following constraints:

$$a v^i_x + b v^i_y = c\, v^i_z, \qquad a p^i_x + b p^i_y = c\, p^i_z \tag{3}$$

Ideally, the first of those is simply satisfied by calculating the orientation using the estimated vanishing point. The second constraint is then enough to solve for a candidate value for the position.

In this work, the lines along the corridor corners are used. The lines passing through the vanishing point are classified to be along the corridor. In order to match those lines to a particular corner, the image lines are investigated for their position with respect to the vanishing point. The robot is known to be inside the corridor, therefore those lines lying below the vanishing point are known to be on the floor, and those lines lying to the left of the vanishing point are known to be on the left wall and vice versa, assuming that the lines actually correspond to an actual corner.

The position constraint for each line after matching is solved to obtain a position candidate. The same iterative elimination of the furthest element is again used to arrive at a consistent set of position candidates, and the result is used as the visual estimate along with the previously calculated orientation. A sketch of solving one such constraint follows.
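
Because a matched 3D corner point, expressed in image coordinates, is affine in the robot's lateral offset, the second constraint of (3) is linear in that offset and can be solved in closed form. A minimal sketch under assumed conventions (camera rotation R from the vanishing point, intrinsics K, an across-corridor axis, and a known corner point; all names are illustrative):

```python
import numpy as np

def lateral_offset_candidate(line_abc, R, K, corner_w):
    """Solve the second constraint of (3) for the lateral offset s.

    With camera translation t = s * lateral_axis, a corner point in image
    coordinates is affine in s: p_i(s) = u + s * w.  Substituting into
    a*px + b*py = c*pz yields one linear equation in s."""
    a, b, c = line_abc
    lateral_axis = np.array([0.0, 1.0, 0.0])   # assumed across-corridor axis
    u = K @ (R @ corner_w)                     # image-frame point at s = 0
    w = K @ (R @ -lateral_axis)                # sensitivity of p_i to s
    num = -(a * u[0] + b * u[1] - c * u[2])
    den = a * w[0] + b * w[1] - c * w[2]
    return num / den if abs(den) > 1e-9 else None
```

One candidate per matched corner line is collected, and the same furthest-element elimination as above prunes the outliers among them.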

D. Consistency Check

Under rare circumstances the vanishing point is detected over a spurious candidate cluster, or by coincidence an incorrect set of lines along the corridor constitutes a consistent set of corners. In these cases, the error of the visual estimate is much higher than what is expected of a regular estimate. Such a high error causes a bias at the EKF output for a long time due to the low assumed uncertainty. Instead of simply increasing the uncertainty in the vision error model, a two-hypothesis error check is used to reject such faulty estimates.

The first of those two hypotheses is that the measurement is a regular, accurate one. This hypothesis has the same error distribution as the error model used in fusion. The second hypothesis is that the measurement is a faulty measurement. In this case the error is assumed to be much larger. The second hypothesis is given a lower prior probability compared to the first, since such measurements occur rarely.

The probability of the first hypothesis being correct, given the actual measurement and the prior estimate along with its uncertainty, is calculated. If this probability is above 95%, the estimate is accepted and used for fusion as described next.
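
A minimal sketch of this two-hypothesis test for a scalar measurement, assuming Gaussian likelihoods whose variances combine the prior uncertainty with each hypothesis' error model; the variances and the prior probability below are illustrative, not the paper's values:

```python
import numpy as np

def accept_visual_estimate(z, z_prior, var_prior,
                           var_regular=0.03**2, var_faulty=0.5**2,
                           p_regular=0.99):
    """H1: regular measurement (fusion error model); H2: faulty measurement
    (much larger variance, lower prior probability).  Accept when
    P(H1 | innovation) exceeds 0.95, per the text."""
    def gauss(x, var):
        return np.exp(-0.5 * x * x / var) / np.sqrt(2 * np.pi * var)

    innov = z - z_prior
    l1 = gauss(innov, var_prior + var_regular) * p_regular
    l2 = gauss(innov, var_prior + var_faulty) * (1.0 - p_regular)
    return l1 / (l1 + l2) > 0.95
```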

IV. SENSOR FUSION

The extended Kalman filter is chosen for the fusion of the two sources of information available for localization. This choice requires an error model to be defined for both sources.

A. Error Models

A simple, experimentally tuned and validated error model is defined for the visual information. According to this model, the orientation is measured with an independent additive Gaussian error having zero mean and 0.01 rad standard deviation. The translation measurement is also assumed to have an independent additive Gaussian error with zero mean and 3 cm standard deviation.

The error model chosen for the dead reckoning is described in [8]. This model could be summarized as follows: the dead reckoning equation is given in (4). In this equation $\theta$ is the orientation and $x$, $y$ are the position estimates. $l_l$ and $l_r$ are the left and right wheel traveled distances while $B$ is the effective separation between the wheels.

$$\begin{bmatrix} \theta(k+1) \\ x(k+1) \\ y(k+1) \end{bmatrix} = \begin{bmatrix} \theta(k) + \dfrac{l_r(k) - l_l(k)}{B} \\[1ex] x(k) + \dfrac{l_r(k) + l_l(k)}{2}\cos\!\left(\theta(k) + \dfrac{l_r(k) - l_l(k)}{2B}\right) \\[1ex] y(k) + \dfrac{l_r(k) + l_l(k)}{2}\sin\!\left(\theta(k) + \dfrac{l_r(k) - l_l(k)}{2B}\right) \end{bmatrix} \tag{4}$$

The sources of error in this estimate update equation are assumed to be due to uncertainty on $l_l$, $l_r$ and $B$. The uncertainties are further assumed to be independent Gaussian with zero mean and variances proportional to the traveled distance. This model ensures that the resulting uncertainty for a path is independent of the number of samples taken during it. A sketch of one update of (4) with this noise model follows.
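
A minimal sketch of one update of (4) together with the distance-proportional noise variances described above; `k_var` is an illustrative coefficient, not a value from the paper:

```python
import numpy as np

def dead_reckon_step(theta, x, y, l_l, l_r, B):
    """One update of equation (4)."""
    dtheta = (l_r - l_l) / B
    mid = theta + dtheta / 2.0          # heading at the midpoint of the step
    d = (l_r + l_l) / 2.0               # traveled distance of the robot center
    return theta + dtheta, x + d * np.cos(mid), y + d * np.sin(mid)

def wheel_noise_vars(l_l, l_r, k_var=1e-5):
    """Variances on l_l and l_r proportional to the traveled distance, so
    the accumulated pose uncertainty over a path does not depend on how
    many samples the path is split into."""
    return k_var * abs(l_l), k_var * abs(l_r)
```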

B. Fusion Using EKF

The Kalman filter is an optimal state estimator for a finite linear system with the initial state estimates and error sources jointly Gaussian in nature. The extended Kalman filter is an extension of this filter to non-linear systems, obtained by linearizing the state space description locally. An in-depth analysis of both could be found in [13].

The dead reckoning is used in the Kalman filter in place of the system state transition equation, although it is also a measurement. This approach is followed by many authors and it can be validated mathematically. The visual estimation enters the filter as a direct measurement of the orientation and the y coordinate.

It is important to note that the resulting filter is slightly different from a conventional EKF as the measurement is not applied at every state transition sample. Instead, it is applied whenever a visual estimate is available, as in the sketch below.
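
A minimal sketch of the measurement update under these assumptions: the state is [θ, x, y], the vision measures θ and y directly, and the measurement covariance follows the constant-noise error model of Section IV-A; the names are illustrative:

```python
import numpy as np

H = np.array([[1.0, 0.0, 0.0],      # vision measures the orientation theta
              [0.0, 0.0, 1.0]])     # ...and the across-corridor y coordinate

R_vis = np.diag([0.01**2, 0.03**2])  # (rad^2, m^2), per the error model above

def ekf_update(xhat, P, z):
    """Measurement update, applied only on the samples where a visual
    estimate is available and has passed the consistency check."""
    S = H @ P @ H.T + R_vis
    K = P @ H.T @ np.linalg.inv(S)
    xhat = xhat + K @ (z - H @ xhat)
    P = (np.eye(len(xhat)) - K @ H) @ P
    return xhat, P
```

Between visual estimates, only the time update runs, with (4) as the state transition and its linearization as the Jacobian.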

C. Modifying the System for EKFPE

An extended Kalman filter as a parameter estimator (EKFPE), as the name suggests, is a modification of the EKF so that it estimates some of the system parameters along with the states. This is achieved by augmenting the state vector with the parameters to be estimated. The state transition equation is also augmented with these new states that do not change over samples. An initial uncertainty is also assumed over these states to allow for estimation. The EKFPE is analyzed in detail in [10].

The odometry equation in (4) used as the state transition equation is modified as follows:

$$\begin{bmatrix} \theta(k+1) \\ x(k+1) \\ y(k+1) \\ k_r(k+1) \\ k_l(k+1) \\ B(k+1) \end{bmatrix} = \begin{bmatrix} \theta(k) + \dfrac{k_r(k)\,l_r(k) - k_l(k)\,l_l(k)}{B(k)} \\[1ex] x(k) + \dfrac{k_r(k)\,l_r(k) + k_l(k)\,l_l(k)}{2}\cos\!\left(\theta(k) + \dfrac{k_r(k)\,l_r(k) - k_l(k)\,l_l(k)}{2B(k)}\right) \\[1ex] y(k) + \dfrac{k_r(k)\,l_r(k) + k_l(k)\,l_l(k)}{2}\sin\!\left(\theta(k) + \dfrac{k_r(k)\,l_r(k) - k_l(k)\,l_l(k)}{2B(k)}\right) \\[1ex] k_r(k) \\ k_l(k) \\ B(k) \end{bmatrix} \tag{5}$$

Here, the new variables $k_r$ and $k_l$ for the left and right wheels are the traveled distances per encoder tick. A very small source of error for these states in the state transition could also be added to allow for slowly varying parameters in the long run. A sketch of the augmented transition follows.
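
A minimal sketch of the augmented transition (5); here the encoder outputs are raw tick counts and the three parameter states pass through unchanged:

```python
import numpy as np

def ekfpe_transition(state, ticks_l, ticks_r):
    """State transition (5) with the augmented state
    [theta, x, y, k_r, k_l, B]; ticks are raw encoder counts."""
    theta, x, y, k_r, k_l, B = state
    l_r = k_r * ticks_r          # traveled distances recovered from ticks
    l_l = k_l * ticks_l
    dtheta = (l_r - l_l) / B
    mid = theta + dtheta / 2.0
    d = (l_r + l_l) / 2.0
    return np.array([theta + dtheta,
                     x + d * np.cos(mid),
                     y + d * np.sin(mid),
                     k_r, k_l, B])   # parameters modeled as constant states
```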

V. RESULTS

A. Visual Estimation

The steps of visual pose estimation are shown in Fig. 1. Fig. 1a shows the raw image taken by the mounted camera. The edge map obtained using the Canny edge detector is displayed in Fig. 1b. The result of the line extraction is shown in Fig. 1c, where the extracted lines are drawn in red. Fig. 1d illustrates the calculation of the vanishing point. The tiny blue 'x' marks are where the extracted lines intersect. At the center of that image a dense cluster of those marks is visible. The larger green 'x' mark at the center of that cluster is the detected vanishing point.

Fig. 1e displays the end result of the processing. The cyan lines are the ones classified to be along the corridor.

The visual estimate for this case is found to be 20.8 cm to the right of the corridor center. The actual position is measured with a ruler and found to be 22 cm in the same direction.

The error in the orientation is much harder to measure, since it is known to be very small as long as a wrong cluster is not found. Fortunately, this rarely happens due to the preprocessing applied before vanishing point detection. When it happens, the estimate fails the consistency check with the prior estimate and is not used. In a regular image where the vanishing point is detected inside the right cluster, the deviation could be assumed to be below 2 pixels, which translates to roughly 0.3 degrees.

The robot platform consists of a CPU running with a 500 MHz to 1200 MHz clock speed, depending on the age of the robot.

[Fig. 1: Visualization of the steps taken for visual pose estimation. Panels: (a) Original image, (b) Edge image, (c) Line Extraction, (d) Vanishing Point Detection, (e) Completely Analyzed Image.]

The entire visual estimation takes from 50 ms to 100 ms to complete, depending on the particular robot and the image.

A few other example cases are shown in Fig. 2. They illustrate different cases possibly encountered during operation. The visual estimate works well in Figs. 2a, 2b and 2c, with position errors of 2.5 cm, 0.2 cm and 0.9 cm respectively, despite different challenges. Fig. 2d is a rare example where, although the image is well suited for estimation and the vanishing point is correctly found, the error is as high as 11.8 cm. These rare cases are actually the reason why the error model for the visual estimate assumes a 3 cm standard deviation for the position error although the error is usually lower.

B. Simple Navigation Tasks

The performance of the entire system is evaluated by performing simple navigation tasks using the localization described in this article. The first task is to drive back and forth 10 m for two complete turns. Fig. 3 contains the relevant information for this task. Fig. 3a plots the visual estimates (green dots), the overall estimate (red line) and the actual ruler measurements (blue line) together.

[Fig. 2: Various cases for visual pose estimation; these are intended to be interesting cases encountered during operation. Panels: (a) Partially occluded, (b) Too few lines, (c) Looking away, (d) A case with high position error. The estimation errors for images a, b and c are all below 3 cm despite various difficulties. On the other hand, the error for image d is 11.8 cm although there are no disrupting factors, a rare occurrence.]

Fig. 3b displays the dead reckoning alone for the same task. Note how the actual error stays within ±3 cm while dead reckoning alone quickly loses track of the pose.

Fig. 4 displays the results of two more tasks. The task given in Fig. 4a is to move on a zigzag shaped path. The next task, given in Fig. 4b, is to drive straight, but during the execution of this task the camera is blinded along the path segment between 6 m and 13 m. The black dots in the figure illustrate the samples where visual estimates are discarded automatically. Note how the robot diverges from its path while it is blinded and also how it quickly incorporates the images once the vision is back (watch the red line).

Finally, it is important to observe the evolution of the dead reckoning parameters. Fig. 5 is recorded during a long zig-zag type navigation task. The task spans 2500 visual estimates and the dead reckoning parameters are estimated at each sample. The figure shows the plot of the estimated B, the effective wheel separation, and the estimated R, which is the ratio of the traveled distances per encoder tick for the left and right wheels. In this experiment the initial values are deliberately set to wrong values (B to 0.22 m instead of 0.27 m, R to 1.1 instead of 0.99). Observe how R converges very quickly, while B takes longer.

VI. CONCLUSION

A visual odometry method for mobile robot localization is presented here. The method relies on straight lines along the corridor, whose width and height are assumed to be known a priori. Image lines are then robustly matched to these to obtain pose constraints. The pose information is then fused with dead reckoning using an extended Kalman filter.

The potential behind robot vision is generally acknowledged due to the low cost of commercially available cameras, the flexibility of using vision and the high information content in images.

[Fig. 3: Localization data for driving straight. Panels: (a) Position data from various sources (combined, visual, ruler) for the first experiment; (b) The path recorded by odometry only. Note how the odometry accumulates more than 1 m error for a path segment shorter than 10 m. The fused estimate maintains less than 3 cm error independent of distance travelled.]

This work involves the successful application of vision for robust and real-time estimation of pose on a moderate system. Furthermore, the measured processing time allows the use of slower systems running with clock speeds down to roughly 200 MHz.

The line extraction step is the backbone of the method followed in this work, because the lines are the primary sources of information. This step is also the most time consuming one. Therefore a new fast line extraction algorithm is developed, which is demonstrated to be robust in detecting even the shorter lines.

The vanishing point detection and line matching parts both employ the iterative furthest point removal. This ensures that they are highly tolerant to outliers. Furthermore, the iterative furthest point removal is implemented using a quadtree data structure so that the processing time scales well with the number of vanishing point candidates.

Visual estimation and dead reckoning both rely on the results of each other. However, for robustness, both of them are designed to be tolerant to more error than the other one generates. An overall demonstration of this is provided in the results section, where the robot is blinded for as long as a 6 m path segment. During this segment it manages to stay on its track within an acceptable error bound. When the blinding is removed it takes less than half a meter to relocate itself.

[Fig. 4: Additional navigation tasks. Panels: (a) Position data for a zig-zag shaped drive (combined, ruler, visual); (b) Position data for the partially blinded drive (combined, discarded, visual, ruler).]

On the other hand, vision is also demonstrated to successfully compensate for the errors in the dead reckoning parameters as well as to estimate them.

The accuracy of the system is also demonstrated through experiments. The measured position error is maintained below 3 cm and the orientation error is estimated to be below 1 degree.

VII. FUTURE WORK

The developed method presents many opportunities in conjunction with other systems. Other sensors could be smoothly integrated through the extended Kalman filter. The method itself could also be inserted into other vision systems where vanishing points are occasionally available. This would provide the host system with an independent source of measurement. Although it is developed for a straight corridor, it could be used in more complex corridor environments provided that the corners and junctions are handled with auxiliary algorithms.

Ongoing research aims to extend the proposed solution to incorporate a generic set of lines that are not restricted to be parallel. The use of multiple cameras in conjunction is also investigated. The motivation behind the extension is to enable localization based on this method in less regular environments with available lines.

[Fig. 5: The evolution of the odometry parameters. Estimated B (m) and R over 2500 samples; the final values at sample 2500 are B = 0.2709 m and R = 0.9905.]

REFERENCES

[1] A. S. Aguado, M. E. Montiel, and M. S. Nixon. Arbitrary shape Hough transform by invariant geometric features. IEEE International Conference on Systems, Man, and Cybernetics. Computational Cybernetics and Simulation, 3:2661–2665, 1997.
[2] Jens Christian Andersen, Nils Axel Andersen, and Ole Ravn. Vision assisted laser scanner navigation for autonomous robots. In Proceedings of the 10th International Symposium on Experimental Robotics 2006 (ISER '06), pages 111–120. Springer-Verlag Berlin, 2008.
[3] S. T. Barnard. Interpreting perspective images. Artificial Intelligence, 21:435–462, 1983.
[4] J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8:679–698, 1986.
[5] Richard O. Duda and Peter E. Hart. Use of the Hough transformation to detect lines and curves in pictures. Commun. ACM, 15(1):11–15, 1972.
[6] J. J. Guerrero and C. Sagues. Uncalibrated vision based on lines for robot navigation. Mechatronics, 11(6):759–777, Sep. 2001.
[7] J. Illingworth and J. Kittler. A survey of the Hough transform. Computer Vision, Graphics, and Image Processing, 44:87–116, 1988.
[8] L. Kleeman. Advanced sonar and odometry error modeling for simultaneous localisation and map building. IEEE/RSJ International Conference on Intelligent Robots and Systems, 1:699–704, 2003.
[9] Huei-Yung Lin and Jen-Hung Lin. A visual positioning system for vehicle or mobile robot navigation. IEICE Transactions on Information and Systems, E89-D:2109–2116, 2006.
[10] L. Ljung. Asymptotic behavior of the extended Kalman filter as a parameter estimator for linear systems. IEEE Transactions on Automatic Control, 24:35–50, 1979.
[11] Ameesh Makadia, Dinkar Gupta, and Kostas Daniilidis. Planar ego-motion without correspondences. Proceedings - IEEE Workshop on Motion and Video Computing, MOTION 2005, pages 160–165, 2007.
[12] R. Munguia and A. Grau. Monocular SLAM for visual odometry. IEEE International Symposium on Intelligent Signal Processing, pages 1–6, 2007.
[13] Maria Isabel Ribeiro. Kalman and extended Kalman filters, 2004. omni.isr.ist.utl.pt/~mir/pub/kalman.pdf
[14] Wenxia Shi and J. Samarabandu. Corridor line detection for vision based indoor robot navigation. Canadian Conference on Electrical and Computer Engineering, pages 1988–1991, 2006.
[15] N. Tada, T. Saitoh, and R. Konishi. Mobile robot navigation by center following using monocular vision. SICE Annual Conference, pages 331–335, 2007.
[16] Ying Kin Yu, Kin Hong Wong, M. M. Y. Chang, and Siu Hang Or. Recursive camera-motion estimation with the trifocal tensor. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 36, 2006.
