21st International Conference on Pattern Recognition (ICPR 2012)
November 11-15, 2012. Tsukuba, Japan




                       Camera Calibration from a Single Image based on
                       Coupled Line Cameras and Rectangle Constraint

                                                Joo-Haeng Lee
                               Robot and Cognitive Systems Research Dept., ETRI
                                            joohaeng@etri.re.kr


                          Abstract

   Given a single image of a scene rectangle of unknown aspect ratio and size, we present a method to reconstruct the projective structure and to find the camera parameters including focal length, position, and orientation. First, we solve the special case in which the center of the scene rectangle is projected to the image center. We formulate this problem with coupled line cameras and present an analytic solution for it. Then, by prefixing a simple preprocessing step, we solve the general case without the centering constraint. We also provide a determinant to tell whether an image quadrilateral is the projection of a rectangle. We demonstrate the performance of the proposed method with synthetic and real data.

1. Introduction

   Camera calibration is one of the most classical topics in computer vision research, and there is an extensive list of related works providing mature solutions. In this paper, we are interested in a special problem: calibrating a camera from a single image of an unknown scene rectangle. We do not assume any prior knowledge of correspondences between scene and image points. Due to the limited information, a simple camera model is used: the focal length is the only unknown internal parameter.
   The problem in this paper differs in nature from classical computer vision problems. In the PnP problem, we find the transformation matrix between the scene and camera frames with prior knowledge of the correspondences between scene and image points as well as the internal camera parameters [1]. In camera resection, we find the projection matrix from known correspondences between scene and image points without prior knowledge of camera parameters [2]. Camera self-calibration does not rely on a known Euclidean structure, but requires multiple images from camera motion [3].
   Several approaches are based on geometric properties of a rectangle or a parallelogram. Wu et al. proposed a calibration method based on rectangles of known aspect ratio [4]. Li et al. designed a rectangle landmark to localize a mobile robot with an approximate rectangle constraint, which does not give an analytic solution [5]. Kim and Kweon propose a method to estimate intrinsic camera parameters from two or more rectangles of unknown aspect ratio [6]. A parallelogram constraint can also be used for calibration, but this generally requires more than two scene parallelograms projected in multiple-view images, as in [7, 8, 9].
   Our contribution can be summarized as follows. Based on a geometric configuration of coupled line cameras, which models a simple camera of unknown focal length, we give an analytic solution to reconstruct a complete projective structure from a single image of an unknown rectangle in the scene: no prior knowledge of the aspect ratio or correspondences is required. The reconstruction result can then be utilized to find the internal and external parameters of the camera: focal length, rotation, and translation. The proposed solution also provides a simple determinant to tell whether an image quadrilateral is the projection of a scene rectangle.
   In a general framework for plane-based camera calibration, camera parameters can be found first using the image of the absolute conic (IAC) and its relation with projective features such as vanishing points [2, 10]. The scene geometry can then be reconstructed using non-linear optimization on geometric constraints such as orthogonality, which cannot be formulated in closed form in general.

2. Problem Formulation

2.1. Line Camera



Figure 1: A configuration of a line camera. (a) Line camera. (b) Trajectory of the center of projection.

Figure 2: A pin-hole camera and its decomposition into coupled line cameras. (a) Pin-hole camera. (b) Line camera C0. (c) Line camera C1.

   A line camera is a conceptual camera that follows the same projection model as a standard pin-hole camera. (See Fig 1a.) Let v0 v2 be a line segment in the scene, which will be projected as a line u0 u2 in the line camera C0. In particular, we are interested in the position pc and the orientation θ0 of C0 when the principal axis passes through the center vm of v0 v2 and the center um = (0, 0, 1) of the image line.
   To simplify the formulation, we assume a canonical configuration where |vm vi| = 1 and vm is placed at the origin of the world coordinate system: vm = (0, 0, 0). For the derivation, we define the following: d = |pc vm|, li = |um ui|, ψi = ∠vm pc vi, and si = |pc vi|.
   In this configuration, we can derive the following relation:

      l2 / l0 = (d − cos θ0) / (d + cos θ0) = d0 / d1                                  (1)

where d0 = d − cos θ0 = s0 cos ψ0 and d1 = d + cos θ0 = s2 cos ψ2. We can derive the relation between θ0 and d from (1):

      cos θ0 = d (l0 − l2) / (l0 + l2) = d α0                                          (2)

where αi = (li − li+2) / (li + li+2), which is solely derived from an image line ui ui+2. Note that θ0 and d are sufficient parameters to determine the exact position of pc in 2D. When α0 is fixed, pc is defined on a certain sphere as in Fig 1b. Once θi and d are known, the additional parameters can also be determined: tan ψi = sin θi / d and si = sin θi / sin ψi.

2.2. Coupled Line Cameras

   A standard pin-hole camera can be represented with two line cameras coupled by sharing the center of projection. (See Fig 2.) Let a scene rectangle G have two diagonals v0 v2 and v1 v3, each of which follows the canonical configuration of Section 2.1: |vm vi| = 1 where vm = (0, 0, 0). Each diagonal vi vi+2 is projected to an image line ui ui+2 in a line camera Ci. When the two line cameras Ci share the center of projection pc, the two image lines ui ui+2 intersect at um on the common principal axis, say pc vm. The four vertices ui form a quadrilateral H, which is the projection of the rectangle G in a pin-hole camera with the center of projection at pc. Note that the principal axis passes through vm, um and pc.
   Using this configuration of coupled line cameras, we find the orientation θi of each line camera Ci and the length d of the common principal axis from a given quadrilateral H. Using the lengths of the partial diagonals, li = |um ui|, we can find the relation between the coupled cameras Ci from (1):

      tan ψ1 / tan ψ0 = l1 / l0 = sin θ1 (d − cos θ0) / (sin θ0 (d − cos θ1))          (3)

Manipulation of (2) and (3) leads to the system of non-linear equations:

      d = (β sin θ0 cos θ1 − cos θ0 sin θ1) / (β sin θ0 − sin θ1) = cos θ0 / α0 = cos θ1 / α1          (4)

where β = l1 / l0. Using (4), the orientation θ0 can be represented with coefficients α0, α1, and β that are solely derived from the quadrilateral H:

      tan(θ0 / 2) = √( (A0 + A1 ± 2 √(A0 A1)) / (A1 − A0) ) = √(D±)                    (5)

where

      A0 = B0 + 2 B1,    A1 = B0 − 2 B1
      B0 = 2 (α0 − 1)² (α0² + α1²) − 4 α0² (α1 − 1)² β²
      B1 = (α0 − 1)² (α0 − α1) (α0 + α1)

The actual value of θ0 should be chosen so that d > 0 in (2). With θ0 known, we can compute the values of θ1 and d using (4). Based on the result of this section, the explicit coordinates of pc and the shape of G are reconstructed in Section 3.1.
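   To make these quantities concrete, the following numpy sketch (our illustration, not code from the paper; the function names are our own) computes the diagonal intersection um, the partial diagonal lengths li, and the coefficients α0, α1, β of (2)–(4) from the four vertices of an image quadrilateral H.

```python
import numpy as np

def intersect_lines(p0, p1, q0, q1):
    """Intersection of lines p0-p1 and q0-q1 in 2D, via homogeneous cross products."""
    to_h = lambda p: np.array([p[0], p[1], 1.0])
    x = np.cross(np.cross(to_h(p0), to_h(p1)), np.cross(to_h(q0), to_h(q1)))
    return x[:2] / x[2]            # assumes the two lines are not parallel

def quad_coefficients(u):
    """u: 4x2 array of image vertices u0..u3 of the quadrilateral H.
    Returns (alpha0, alpha1, beta, um) as used in Eqs. (2)-(4)."""
    um = intersect_lines(u[0], u[2], u[1], u[3])       # diagonal intersection
    l = [np.linalg.norm(u[i] - um) for i in range(4)]  # partial diagonal lengths l_i
    alpha0 = (l[0] - l[2]) / (l[0] + l[2])
    alpha1 = (l[1] - l[3]) / (l[1] + l[3])
    beta = l[1] / l[0]
    return alpha0, alpha1, beta, um
```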

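   A direct reading of (4)–(6) then gives the orientation angles and the distance d. The sketch below follows the expressions for B0 and B1 printed above and picks the branch of (5) that makes d positive; it is illustrative only and not the authors' implementation.

```python
import numpy as np

def solve_orientation(alpha0, alpha1, beta):
    """Solve Eq. (5) for theta0, then Eq. (4) for theta1 and d.
    Returns (theta0, theta1, d), or None if H cannot be the projection of a rectangle."""
    B0 = 2 * (alpha0 - 1)**2 * (alpha0**2 + alpha1**2) \
         - 4 * alpha0**2 * (alpha1 - 1)**2 * beta**2
    B1 = (alpha0 - 1)**2 * (alpha0 - alpha1) * (alpha0 + alpha1)
    A0, A1 = B0 + 2 * B1, B0 - 2 * B1
    if A0 * A1 < 0 or A1 == A0 or alpha0 == 0:
        return None
    for sign in (1.0, -1.0):
        D = (A0 + A1 + sign * 2.0 * np.sqrt(A0 * A1)) / (A1 - A0)
        if D <= 0:                              # rectangle condition, Eq. (6)
            continue
        theta0 = 2.0 * np.arctan(np.sqrt(D))
        d = np.cos(theta0) / alpha0             # Eq. (2): d must be positive
        if d <= 0:
            continue
        theta1 = np.arccos(np.clip(d * alpha1, -1.0, 1.0))   # Eq. (4)
        return theta0, theta1, d
    return None
```

With θi and d in hand, tan ψi = sin θi / d and si = sin θi / sin ψi complete the line-camera parameters needed later in Section 3.1.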


2.3. Rectangle Determination

   In our configuration of coupled line cameras, the existence of θi implies that H is the projection of a canonical rectangle G. As the orientation angle θ0 should be a positive real number, the expression inside the outer square root of (5) should also be a positive real number:

      D± = (A0 + A1 ± 2 √(A0 A1)) / (A1 − A0) > 0                                      (6)

The above condition guarantees that a quadrilateral H is the projection of a scene rectangle G, which will be fully reconstructed in Section 3.1.

3. Reconstruction and Calibration

3.1. Reconstructing Projective Structure

   We define a projective structure as a frustum with a rectangular base G and an apex at the center of projection pc. (See Fig 2.) Since G has a canonical form and is parameterized by φ, its vertices can be represented as v0 = (1, 0, 0), v1 = (cos φ, sin φ, 0), v2 = −v0, and v3 = −v1, where φ is the crossing angle of the two diagonals. Since |vi| = 1, the crossing angle φ is represented as

      cos φ = ⟨v0, v1⟩                                                                 (7)

The two diagonals of the quadrilateral H intersect at the image center um on the principal axis with the crossing angle ρ.
   To compute φ, we denote the center of projection as pc = (0, 0, 0), the image center as u^c_m = (0, 0, −1), the first two vertices of H as u^c_0 = (tan ψ0, 0, −1) and u^c_1 = (cos ρ tan ψ1, sin ρ tan ψ1, −1), the center of G as v^c_m = (0, 0, −d), and the vertices of G as v^c_i = si u^c_i / |u^c_i|. Since vi = v^c_i − v^c_m, φ can be computed using (7) with the known values of ρ, d, si, and ψi.
   The coordinates of pc = (x, y, z) can be found by solving the following equations: d cos θ0 = x, d cos θ1 = x cos φ + y sin φ, and x² + y² + z² = d², which are derived from the projective structure. Now the projective structure has been reconstructed as a frustum with the base G(φ) and the apex pc.
   Note that the reconstructed G is guaranteed to be a rectangle satisfying geometric constraints such as orthogonality, since the proposed method is based on the canonical configuration of coupled line cameras of Section 2.

3.2. Camera Calibration

   Since our reconstruction method is formulated using a canonical configuration of coupled line cameras, the derivation based on H can be substituted with a given image quadrilateral Q. Now we can find a homography H using four point correspondences between G(φ) and Q. The homography is defined as H = sKW, where s, K, and W denote a scalar, a camera matrix, and a transformation matrix, respectively. Note that we assume a simple camera model K: the only unknown internal parameter is the focal length f. Since the model plane is at z = 0, it is straightforward to derive K and W. See [2] for details.

Figure 3: Finding a centered quad Q (in blue) from an off-centered quad Qg (in red) using vanishing points.

3.3. Off-Centered Quadrilateral

   In the centered case above, the centers of the scene rectangle G and the image quadrilateral Q are both aligned on the principal axis, which is exceptional in reality. The proposed method, however, can be readily applied to the off-centered case by prefixing a simple preprocessing step.
   Let Qg and Gg denote an off-centered image quadrilateral and the corresponding scene rectangle, respectively. In the preprocessing step, we find a centered quadrilateral Q which reconstructs the projective structure in which the projective correspondences between Qg and Gg are preserved. We can show that such a Q can be geometrically derived from Qg using the properties of parallel lines and vanishing points. (See Fig 3.)
   We choose one vertex u^g_i of Qg as the initial vertex u0 of Q, say u0 = u^g_0. First, we compute two vanishing points wi from Qg: w0 as the intersection of u^g_0 u^g_1 and u^g_2 u^g_3, and w1 as the intersection of u^g_0 u^g_3 and u^g_1 u^g_2. Let u2 be the opposite vertex of u0 on the diagonal passing through u0 and um: u2 = a (um − u0) + u0, where a is an unknown scalar defining u2. Then, we can make symbolic definitions of the other vertices of Q: u1 as the intersection of the two lines u0 w1 and u2 w0, and u3 as the intersection of u0 w0 and u2 w1.
   Note that the above definitions of ui guarantee that each pair of opposite edges is parallel before projective distortion. The symbolic definitions of ui above can be combined into one constraint: the intersection of u0 u2 and u1 u3 equals the image center um, which is formulated as a single equation in the single unknown a. Once the value of a is found, all the vertices ui of the centered image quadrilateral Q can be computed accordingly.
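   This preprocessing boils down to one scalar equation in a. Since u1 and u3, written as homogeneous cross products, are linear in a, the condition that um lies on the line u1 u3 is quadratic in a and can be solved from three sample evaluations. A minimal numpy sketch of this step (our own reading of Section 3.3, with hypothetical helper names, not the paper's implementation):

```python
import numpy as np

def to_h(p):                                  # 2D point -> homogeneous coordinates
    return np.array([p[0], p[1], 1.0])

def center_quadrilateral(ug, um):
    """ug: 4x2 off-centered quad Qg, um: 2D image center.
    Returns a 4x2 centered quad Q whose diagonals meet at um (Section 3.3)."""
    # vanishing points of the two pairs of opposite edges of Qg
    w0 = np.cross(np.cross(to_h(ug[0]), to_h(ug[1])), np.cross(to_h(ug[2]), to_h(ug[3])))
    w1 = np.cross(np.cross(to_h(ug[0]), to_h(ug[3])), np.cross(to_h(ug[1]), to_h(ug[2])))
    u0, um_h = to_h(ug[0]), to_h(um)

    def residual(a):
        u2 = to_h(a * (um - ug[0]) + ug[0])
        u1 = np.cross(np.cross(u0, w1), np.cross(u2, w0))
        u3 = np.cross(np.cross(u0, w0), np.cross(u2, w1))
        return um_h @ np.cross(u1, u3)        # zero iff um lies on the line u1 u3

    # residual(a) is quadratic in a (with a trivial root a = 0):
    # fit it exactly from three samples and keep the non-trivial root
    samples = np.array([0.5, 1.5, 2.5])
    coeffs = np.polyfit(samples, [residual(a) for a in samples], 2)
    a = max((r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-9), key=abs)

    u2 = a * (um - ug[0]) + ug[0]
    u1 = np.cross(np.cross(u0, w1), np.cross(to_h(u2), w0))
    u3 = np.cross(np.cross(u0, w0), np.cross(to_h(u2), w1))
    return np.array([ug[0], u1[:2] / u1[2], u2, u3[:2] / u3[2]])
```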

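   Given the centered quadrilateral, the reconstruction of Section 3.1 is a direct evaluation of the formulas above: place the image vertices on the plane z = −1, scale them by si, read off φ from (7), and solve the three equations for pc. A possible numpy rendering under the canonical configuration (a sketch, not the authors' implementation):

```python
import numpy as np

def reconstruct_structure(theta0, theta1, d, rho):
    """Reconstruct the crossing angle phi and the center of projection pc
    in the canonical frame of Section 3.1 (|v_m v_i| = 1, v_m at the origin)."""
    psi0 = np.arctan(np.sin(theta0) / d)
    psi1 = np.arctan(np.sin(theta1) / d)
    s0 = np.sin(theta0) / np.sin(psi0)
    s1 = np.sin(theta1) / np.sin(psi1)

    # image vertices on the plane z = -1 (camera frame with pc at the origin)
    u0c = np.array([np.tan(psi0), 0.0, -1.0])
    u1c = np.array([np.cos(rho) * np.tan(psi1), np.sin(rho) * np.tan(psi1), -1.0])
    vmc = np.array([0.0, 0.0, -d])                  # center of G in the camera frame

    # scene vertices and crossing angle phi, Eq. (7)
    v0 = s0 * u0c / np.linalg.norm(u0c) - vmc
    v1 = s1 * u1c / np.linalg.norm(u1c) - vmc
    phi = np.arccos(np.clip(np.dot(v0, v1), -1.0, 1.0))

    # pc = (x, y, z) in the world frame of the canonical rectangle G(phi)
    x = d * np.cos(theta0)
    y = (d * np.cos(theta1) - x * np.cos(phi)) / np.sin(phi)
    z = np.sqrt(max(d**2 - x**2 - y**2, 0.0))       # apex above the rectangle plane
    return phi, np.array([x, y, z])
```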

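   For the calibration step of Section 3.2, the standard plane-based argument of [2] applies: with K = diag(f, f, 1) and the model plane at z = 0, the first two columns of W are orthonormal, which determines f from the homography. The sketch below estimates H by a direct linear transform from the four correspondences and then recovers f, R, and t; it assumes the principal point is at the image origin and is only an illustrative reading, not the paper's code.

```python
import numpy as np

def homography_dlt(model_pts, image_pts):
    """Direct linear transform: H such that image ~ H * model (four correspondences)."""
    rows = []
    for (X, Y), (u, v) in zip(model_pts, image_pts):
        rows.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        rows.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
    return Vt[-1].reshape(3, 3)

def calibrate_from_homography(H):
    """Recover f, R, t from H = s K W with K = diag(f, f, 1), model plane at z = 0."""
    h1, h2, h3 = H[:, 0], H[:, 1], H[:, 2]
    # orthonormality of the first two columns of W gives two equations in w = 1/f^2
    a = np.array([h1[0] * h2[0] + h1[1] * h2[1],
                  h1[0]**2 + h1[1]**2 - h2[0]**2 - h2[1]**2])
    b = np.array([-h1[2] * h2[2],
                  -(h1[2]**2 - h2[2]**2)])
    w = float(a @ b) / float(a @ a)               # least-squares solution of a*w = b
    f = 1.0 / np.sqrt(w)
    K_inv = np.diag([1.0 / f, 1.0 / f, 1.0])
    lam = np.linalg.norm(K_inv @ h1)
    r1, r2 = K_inv @ h1 / lam, K_inv @ h2 / lam
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    t = K_inv @ h3 / lam
    return f, R, t
```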

Figure 4: Experiment with a synthetic rectangle Gr. (a) Reference Gr and Qg. (b) Centered quad Q. (c) Reconstruction. (d) Rectification.

Figure 5: Experiment for a near-infinite vanishing point. (a) Reference Gr and Qg. (b) Centered quad Q. (c) Reconstruction. (d) Rectification.

Figure 6: Relative errors in reconstruction (|vi − vm| and φ) and calibration (pc and f) when noise within a diameter of dmax pixels is added around each vertex of Qg. Each result is the average of 100 experiments per dmax.

   As a remark, we need to handle specially the singular cases where the two vanishing points w0 and w1 are not defined [10]. However, experimental results show that the proposed method is numerically stable for near-singular cases.

4. Experiments

   We demonstrate experimental results for both synthetic and real data. For the experiments, the proposed method was implemented in Mathematica, and a Logitech C910 webcam was used for the real data, with auto-focus enabled and 1280 × 720 resolution.

4.1. Synthetic Data

   We tested with a synthetic scene containing a reference rectangle Gr (with random values of crossing angle φr, size, position and orientation) and a corresponding image quadrilateral Qg generated by a known camera in Fig 4a.
   First, we find a centered quadrilateral Q using the vanishing points defined by Qg in Fig 4b. Then, the projective structure is reconstructed as a frustum with the centered scene rectangle G and the center of projection pc in Fig 4c. Next, the homography H between G and Q is derived, which is further decomposed into the projection matrix K and the transformation matrix W as in Section 3.2. Using the homography H and the given off-centered quadrilateral Qg, we can reconstruct the target scene rectangle Gg(φg) in Fig 4d.
   In this example, the overall error can be measured between Gr and Gg, and it is found to be negligible: eφ = φg − φr < 10⁻¹⁴ and ||v^g_i v^g_i+2 − v^r_i v^r_i+2|| < 10⁻¹³. (Note that these error metrics are sufficient since geometric constraints such as orthogonality are always satisfied, as in Section 3.1.) The focal length is also correctly reconstructed: f = 1280 in this example.
   Our method handles an off-centered quadrilateral based on its vanishing points in the image plane, which may be numerically unstable depending on the parallelism in the geometry. In Fig 5a, Gr has been placed to generate a near-infinite vanishing point of Q in the image plane, as in Fig 5b. To test robustness near a singular case, we added a small amount of rotation, say ∆φr = 10⁻ⁿ with n > 1, to the singular angle. According to the experiment, the proposed method is very stable even when the configuration approaches the singular case. For example in Fig 5, when ∆φr = 10⁻¹⁰, we encounter a near-infinite vanishing point at (1.56807 × 10¹³, 1547.54). However, the error is almost negligible: eφ < 10⁻⁵.
   To test robustness to noise, we added random noise within a diameter of dmax pixels to each vertex of the reference quadrilateral Qg. (See Fig 6.) In a practical situation where dmax = 1, the overall errors are less than 2%. Generally, reconstruction errors are smaller than calibration errors.

4.2. Real Data

   We also tested with real images. In Fig 7a, the off-centered quadrilateral Qg is the image of an ISO A4
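   As an aside, the noise test of Fig 6 can be emulated with a small harness like the one below (our own sketch, not the paper's Mathematica code). It takes the end-to-end pipeline as a callable, perturbs each vertex of the reference quadrilateral within a disc of diameter dmax, and averages the relative errors over the trials.

```python
import numpy as np

def perturb_quad(quad, d_max, rng):
    """Displace each vertex by a random offset within a disc of diameter d_max pixels."""
    radius = rng.uniform(0.0, d_max / 2.0, size=4)
    angle = rng.uniform(0.0, 2.0 * np.pi, size=4)
    return quad + np.stack([radius * np.cos(angle), radius * np.sin(angle)], axis=1)

def noise_errors(calibrate, quad_ref, f_ref, phi_ref, d_max, trials=100, seed=0):
    """Average relative errors of f and phi under vertex noise.
    `calibrate` is the caller's end-to-end pipeline: quad (4x2) -> (f, phi)."""
    rng = np.random.default_rng(seed)
    err_f, err_phi = [], []
    for _ in range(trials):
        f_est, phi_est = calibrate(perturb_quad(quad_ref, d_max, rng))
        err_f.append(abs(f_est - f_ref) / f_ref)
        err_phi.append(abs(phi_est - phi_ref) / abs(phi_ref))
    return float(np.mean(err_f)), float(np.mean(err_phi))
```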


