Visual odometry(2)
23 May 2011

Cyphy Lab.
Inkyu Sa
Contents

• Paper name, authors, publication
• Abstract
• Visual odometry
• Experiments and results
• My thoughts
• Current status
Paper name

• “A Visual Odometry Method Based on the SwissRanger SR4000”

• Cang Ye* and Michael Bruch†
  * University of Arkansas at Little Rock, 2801 S. University Ave, Little Rock, AR, USA 72204
  † Space and Naval Warfare Systems Center Pacific, 53560 Hull Street, San Diego, CA 92152

• Unmanned Systems Technology XII, Proc. of SPIE Vol. 7692, 76921I, 2010.
Abstract

“This paper presents a pose estimation method based on a 3D camera, the SwissRanger SR4000. The proposed method estimates the camera’s ego-motion by using intensity and range data produced by the camera. It detects the SIFT (Scale-Invariant Feature Transform) features in one intensity image and matches them to those in the next intensity image. The resulting 3D data point pairs are used to compute the least-square rotation and translation matrices, from which the attitude and position changes between the two image frames are determined. The method uses feature descriptors to perform feature matching. It works well with large image motion between two frames without the need of spatial correlation search. Due to the SR4000’s consistent accuracy in depth measurement, the proposed method may achieve a better pose estimation accuracy than a stereo vision-based approach. Another advantage of the proposed method is that the range data of the SR4000 is complete and therefore can be used for obstacle avoidance/negotiation. This makes it possible to navigate a mobile robot by using a single perception sensor. In this paper, we will validate the idea of the pose estimation method and characterize the method’s pose estimation performance.”

[The slide annotates the abstract, highlighting the input data, the output, three claimed advantages, and the results.]
Visual odometry(1)

SIFT feature detection and descriptor

[Pipeline diagram: SIFT feature detection → descriptor matching → remove outliers using RANSAC]

Unfortunately, the matching method is not presented in this paper. I guess that it could be either “L1” or “L2” matching of the descriptors.
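To make the pipeline concrete, here is a minimal OpenCV sketch, assuming brute-force L2 descriptor matching (the paper does not specify the matcher) and placeholder file names. RANSAC outlier removal over the matched 3D pairs is sketched on the “Finding R,T” slides below.

```python
import cv2

# Two consecutive intensity frames (placeholder file names).
img_prev = cv2.imread("frame_t-1.png", cv2.IMREAD_GRAYSCALE)
img_curr = cv2.imread("frame_t.png", cv2.IMREAD_GRAYSCALE)

# SIFT feature detection and descriptor extraction.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img_prev, None)
kp2, des2 = sift.detectAndCompute(img_curr, None)

# Descriptor matching: brute-force L2 with cross-checking (assumed choice).
bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)
```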

Monday,	 23	 May
Visual odometry(2)

Descriptors matching

[Diagram: feature 1 in frame t-1 is compared against candidate features a, b, c, d in frame t, 100 ms later; L2/L1 distance illustration sourced from Wikipedia.]

1. Calculate the Euclidean distance (L2) or Manhattan distance (L1) between feature 1 in frame t-1 and features a, b, c, d, ... in frame t.

2. Find the minimum value

   $e = \min \sqrt{|x_i - x_j|^2 + |y_i - y_j|^2}$

   where i and j denote features in the t-1 and t frames respectively. Complexity: $O(n^2)$.

This method often produces mismatched pairs.
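For illustration, a minimal NumPy sketch of this O(n²) nearest-neighbour search, applied to SIFT descriptor vectors (the (x, y) notation above is the slide’s shorthand for the compared feature vectors):

```python
import numpy as np

def nearest_descriptor_matches(des_prev, des_next):
    """Naive O(n^2) matching: for each descriptor in frame t-1,
    find the closest descriptor in frame t by L2 distance."""
    pairs = []
    for i, d in enumerate(des_prev):
        dists = np.linalg.norm(des_next - d, axis=1)  # L2 to every candidate
        j = int(np.argmin(dists))                     # e = min over frame t
        pairs.append((i, j, float(dists[j])))         # may still be a mismatch
    return pairs
```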

Visual odometry(3)

SIFT matching

[Diagram: features in frame t-1 matched against frame t, 100 ms apart.]

My idea (a sketch follows below)
1. Select a feature in the t-1 frame and calculate the L2 distance to all features in the t frame.
2. Our model is e, and ε is a threshold on e.
3. If e ≤ ε, put i, j and e into a vector V.
4. Repeat 1 to 3 a certain number of times, or until all features in the t-1 frame are exhausted.
5. Find the lowest value in V.
6. If that e value is lower than the threshold, the pair (i, j) is an inlier and the others are outliers.
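A minimal NumPy sketch of the idea above; the threshold value is illustrative, not tuned:

```python
import numpy as np

def threshold_match(des_prev, des_next, eps=0.25):
    """Steps 1-4: collect candidate pairs (i, j, e) with e <= eps.
    eps is a hypothetical threshold value."""
    V = []
    for i, d in enumerate(des_prev):
        dists = np.linalg.norm(des_next - d, axis=1)  # step 1: L2 to all features
        j = int(np.argmin(dists))
        e = float(dists[j])
        if e <= eps:                                  # step 3: threshold test
            V.append((i, j, e))
    return sorted(V, key=lambda t: t[2])              # step 5: lowest e first
```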
Visual odometry(4)

Finding R,T

Prerequisite
The 2D image plane and the 3D depth data correspond to each other, so we know both the intensity and the depth of any given pixel.

[Figure: 2D image plane alongside the corresponding 3D depth data.]
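Given that correspondence, each matched 2D feature can be lifted to a 3D point. Below is a generic pinhole back-projection sketch; note that the SR4000 can also deliver per-pixel Cartesian coordinates directly, and the intrinsics (fx, fy, cx, cy) here are assumed placeholder values, not the SR4000’s calibration.

```python
import numpy as np

def pixel_to_point(depth, u, v, fx=250.0, fy=250.0, cx=88.0, cy=72.0):
    """Back-project pixel (u, v) and its range value to a 3D point
    using a pinhole model; the intrinsics defaults are placeholders."""
    z = depth[v, u]               # depth and intensity images are registered
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```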




Visual odometry(5)

Finding R,T

3D data sets p_i and p'_i, i = 1, ..., N, where N is the number of matched SIFT features.

Our objective is to obtain R, T such that e takes its minimum value:

   $e = \sum_{i=1}^{N} \| p'_i - R p_i - T \|_2^2$        (1)

Algorithm (a sketch follows below)
1. Find p_i and p'_i using SIFT and the matcher.
2. Randomly select 4 associated points from the two data sets.
3. Find the least-square rotation and translation matrices R̂, T̂ for p_i and p'_i using SVD.
4. Find e using Equation (1).
5. If e ≤ ε, put R̂, T̂ into a vector E.
6. Repeat 2 to 5 a certain number of times, or until all combinations of point-set selections are exhausted.
7. Find the lowest value of e, with its R̂, T̂, in E.
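A minimal NumPy sketch of this RANSAC-style search, assuming illustrative values for the threshold, iteration count, and seed (none are given in the paper); the SVD fit inside the loop is spelled out on the next slides.

```python
import numpy as np

def estimate_rt_ransac(P, Q, eps=1e-4, max_iter=500, seed=0):
    """Steps 2-7 above. P and Q are N x 3 arrays of matched 3D points
    p_i (frame t-1) and p'_i (frame t); eps, max_iter, seed are assumed."""
    rng = np.random.default_rng(seed)
    best_e, best_rt = np.inf, None
    for _ in range(max_iter):
        idx = rng.choice(len(P), size=4, replace=False)          # step 2
        # Step 3: least-squares R, T from the 4 sampled pairs via SVD.
        p_bar, q_bar = P[idx].mean(axis=0), Q[idx].mean(axis=0)
        U, _, Vt = np.linalg.svd((P[idx] - p_bar).T @ (Q[idx] - q_bar))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                                 # reflection: reject sample
            continue
        T = q_bar - R @ p_bar
        e = np.sum(np.linalg.norm(Q - P @ R.T - T, axis=1) ** 2) # step 4: Eq. (1)
        if e < best_e:                                           # steps 5-7: keep best
            best_e, best_rt = e, (R, T)
        if best_e <= eps:
            break
    return best_rt, best_e
```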


Visual odometry(6)

Finding R,T

3. Find the least-square rotation and translation matrices R̂, T̂ for p_i and p'_i using SVD.

   3-1. Compute the centroids p̄ and p̄' of {p_m} and {p'_m}:

        q_m = p_m − p̄   and   q'_m = p'_m − p̄'

   3-2. Compute the 3×3 matrix

        $\Omega = \sum_{m=1}^{M} q_m {q'_m}^t$

   3-3. Find the SVD of Ω:

        $\Omega = U \Lambda V^t$

        where U and V are 3×3 orthogonal matrices and Λ = diag(λ₁, λ₂, λ₃) is a diagonal matrix with non-negative elements.

   3-4. Calculate det(U) (continued on the next slide).
Visual odometry(7)

Finding R,T

   3-4. Calculate det(U):
        If det(U) = 1,  R̂ = V Uᵗ
        If det(U) = −1, R̂ = V Uᵗ
        Otherwise, skip the current frame and use the next frame.
        This can happen when the sensor measurement contains large noise.

   3-5. Calculate T:
        T̂ = p̄' − R̂ p̄
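A compact sketch of steps 3-1 to 3-5. One caveat: since det(U) is ±1 for any orthogonal matrix, this sketch tests det(V Uᵗ) for the degenerate reflection case instead, and returns None where the slide says to skip the frame.

```python
import numpy as np

def fit_rt_svd(P, Q):
    """Steps 3-1 to 3-5: P holds p_m (frame t-1) and Q holds p'_m
    (frame t) as N x 3 arrays. Returns (R, T), or None for the
    degenerate case, mirroring the slide's "skip the frame"."""
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)   # 3-1: centroids
    Pc, Qc = P - p_bar, Q - q_bar                   #      q_m and q'_m
    Omega = Pc.T @ Qc                               # 3-2: Ω = Σ q_m q'_m^t
    U, lam, Vt = np.linalg.svd(Omega)               # 3-3: Ω = U Λ V^t
    R = Vt.T @ U.T                                  # 3-4: R̂ = V U^t
    if np.linalg.det(R) < 0:                        # reflection: skip frame
        return None
    T = q_bar - R @ p_bar                           # 3-5: T̂ = p̄' − R̂ p̄
    return R, T
```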

Determination of 6D pose change

   φ = atan2(r₁₃ cos ψ + r₂₃ sin ψ, r₁₁ cos ψ + r₂₁ sin ψ)
   ψ = atan2(r₂₂, −r₁₁)
   θ = atan2(r₃₂, −r₁₂ sin ψ + r₂₂ cos ψ)

   where φ, ψ, θ denote roll, yaw and pitch respectively, r_ij are the elements of R̂, and X, Y, Z are the x, y, z components of T̂.
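For reference, the same extraction in NumPy, indexing r_ij as R[i-1, j-1]; the formulas follow the slide’s convention, which may differ from other Euler-angle conventions.

```python
import numpy as np

def pose_change(R, T):
    """6D pose change using the slide's formulas, with r_ij = R[i-1, j-1]."""
    psi = np.arctan2(R[1, 1], -R[0, 0])                              # yaw ψ
    phi = np.arctan2(R[0, 2] * np.cos(psi) + R[1, 2] * np.sin(psi),
                     R[0, 0] * np.cos(psi) + R[1, 0] * np.sin(psi))  # roll φ
    theta = np.arctan2(R[2, 1],
                       -R[0, 1] * np.sin(psi) + R[1, 1] * np.cos(psi))  # pitch θ
    x, y, z = T
    return phi, theta, psi, x, y, z
```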


Experiment(1)




                The SR4000 and the Packbot robot with the sensor installed



Experiment(2)




                      Specification of SR4000



Experiment(3)




                     Characteristic of SR4000 and Bumblebee2

Experiment(4)

[Images: original image; Bumblebee2 depth image; SR4000 depth image; Kinect 11-bit depth image.]

Characteristic of SR4000 and Bumblebee2
Experiment(5)

The graph below was obtained by pointing the Kinect at a planar surface, fitting a plane (using RANSAC) through the point cloud, and checking the distance of the points in the point cloud to that plane.
(Source: ROS Kinect.)

[Graph: SR4000 VS Kinect.]
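A minimal NumPy sketch of that evaluation procedure, assuming illustrative tolerance and iteration values rather than the ones used for the published graph:

```python
import numpy as np

def plane_fit_residuals(points, tol=0.01, iters=200, seed=0):
    """Fit a plane to an N x 3 point cloud with RANSAC, then return
    each point's distance to the best plane (the plotted noise)."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = 0, None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)        # plane normal from 3 samples
        if np.linalg.norm(n) < 1e-9:
            continue                          # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = np.abs((points - p0) @ n)         # point-to-plane distances
        inliers = int((d < tol).sum())
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (n, p0)
    n, p0 = best_model
    return np.abs((points - p0) @ n)
```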
Experiment(6)

SR4000 VS Kinect

SR4000 price: $9095

Specification of the modified Kinect:
  Range                              50cm~5m
  Accuracy                           10mm~15mm
  Pixel array size (programmable)    640x480 (320x240)
  Field of view                      57(h) x 47(v)
  Frequency                          30Hz
  Weight                             200g
  Dimension                          35x25x128mm
  Price                              180AUD
Experiment(7)

Pose error distribution

1. Zero movement
   Measurement error: (φ, θ, ψ, x, y, z) = (0.1°, 0.2°, 0.2°, 7mm, 3mm, 6mm)
   The error comes from sensor measurement noise (white noise); the mean and standard deviation are very good.

2. Rotation and translation movement
   Movement: (θ, ψ, x, y) = (−5.9°, 5.0°, 80mm, 130mm)
   Mean error: (φ, θ, ψ, x, y, z) = (−0.1°, 0.2°, −0.3°, 8mm, −2mm, 11mm)
   Standard deviation of error: (φ, θ, ψ, x, y, z) = (0.5°, 0.4°, 0.4°, 13mm, 5mm, 11mm)
Experiment(8)

Rotation error distribution

[Plots: rotation error distributions.]

Movement: (θ, ψ, x, y) = (−5.9°, 5.0°, 80mm, 130mm)
Mean error: (φ, θ, ψ, x, y, z) = (−0.1°, 0.2°, −0.3°, 8mm, −2mm, 11mm)
Standard deviation of error: (φ, θ, ψ, x, y, z) = (0.5°, 0.4°, 0.4°, 13mm, 5mm, 11mm)
Experiment(9)

Translation error distribution

[Plots: translation error distributions.]

Movement: (θ, ψ, x, y) = (−5.9°, 5.0°, 80mm, 130mm)
Mean error: (φ, θ, ψ, x, y, z) = (−0.1°, 0.2°, −0.3°, 8mm, −2mm, 11mm)
Standard deviation of error: (φ, θ, ψ, x, y, z) = (0.5°, 0.4°, 0.4°, 13mm, 5mm, 11mm)
Experiment(10)

Rotation experiments and results

[Plots: increasing pitch angle; increasing yaw angle.]
Experiment(11)

Translation experiments and results

[Plots: x position; y position.]
My thoughts

1. Is this approach capable of real-time processing?

2. How accurate and robust is the proposed method in dynamic sensing environments, such as on a quadrotor or on unstructured roads?

3. The Kinect shows noisier depth measurements than the SR4000, which means det(U) can fail to be 1 or −1, so R, T cannot be calculated and the frame is dropped. How can we compensate for this frame-drop problem?
Current status

Done
1. Retrieve the RGB image and point cloud using the OpenNI Kinect driver in ROS.
2. Feature detection using gpusurf from the University of Toronto ASRL.
3. Feature matching using GPU L2 matching with cross-checking in OpenCV.
4. Obtain the depth of each pixel from the point cloud in ROS.

Future works
5. Calculate R, T using the algorithm proposed in this paper.
6. Compute the 6D pose of the camera every frame.

“The method proposed in this paper might not satisfy our requirements. If not, figure out the problems and solve them.”
Thank you.



