Image Processing Algorithms For Deep-Space Autonomous Optical Navigation
Agenda
• Introduction
• Optical Navigation and Image Processing
• Role of Image Processing in Autonomous
Optical Navigation
• Fundamentals Of Autonomous Optical Navigation
• Optical Navigation Measurement Model
• Pinhole Camera Model
• Line-of-Sight Unit Vector
• Imaging Error Sources
• Image Processing Procedure
• Edge Detection
• Canny Edge Detection
• Non-Maximum Suppression (NMS)
• Double-Threshold Segmentation
• Edge Tracking by Hysteresis
• Pseudo-Edge Removal
• Limb Profile Fitting
• Least Squares-Based Ellipse Fitting
• Levenberg-Marquardt-Based Ellipse Fitting
• Centroid Extraction
• Simulation Results And Analysis
Introduction
Historically, spacecraft navigation relied heavily on Earth-based Deep
Space Network (DSN) range and range-rate measurements. While effective,
this method had drawbacks such as high costs and a low update frequency
due to the limited availability of tracking stations on Earth.
To address these limitations, a new solution known as autonomous optical
navigation was developed. This refers to a spacecraft's ability to navigate
independently of Earth-based communication systems by using optical
sensors and image processing algorithms.
This method improves the spacecraft's autonomy and is especially useful
during phases where traditional radio tracking navigation is limited or when
the system fails.
3
Optical Navigation
And
Image Processing
Role of Image Processing in Autonomous Optical Navigation
Optical Navigation And Image Processing
ROLE OF IMAGE PROCESSING IN AUTONOMOUS OPTICAL NAVIGATION:
Image processing plays an extremely important role in autonomous optical navigation, serving as
the technological backbone that allows spacecraft to interpret and extract important navigation
information from captured images. Image processing algorithms are critical in extracting
essential navigational observables from raw images, such as the line-of-sight vector, apparent
diameter, and centroid. These algorithms aim to overcome the difficulties posed by a spacecraft's
limited computational capacity and the unique environmental conditions of deep space. Image
processing enables spacecraft to autonomously analyze visual data, make informed
navigational decisions, and respond to dynamic changes in their surroundings by improving the
efficiency and accuracy of data extraction. The implementation of advanced image processing
techniques not only improves navigation accuracy but also allows for real-time decision-making.
5
Fundamentals Of
Autonomous Optical
Navigation
Optical Navigation Measurement Model
Optical Navigation Measurement Model
O P T I C A L N AV I G AT I O N M E A S U R E M E N T :
The optical navigation measurement model refers to the mathematical representation or
framework used to describe and quantify the measurements obtained from optical sensors for
navigation purposes. Optical navigation involves utilizing information from visual cues or
features in the environment to determine the position, orientation, or movement of a vehicle,
such as a spacecraft, drone, or autonomous vehicle.
The measurement model typically includes various parameters and equations that relate the
observed optical measurements to the actual state of the system being navigated. This model is
crucial for understanding how the optical sensor data can be interpreted and integrated into a
navigation system to estimate the system's pose (position and orientation).
7
Optical Navigation
Measurement Model
Pinhole Camera Model
Optical Navigation Measurement Model
P I N H O L E C A M E R A M O D E L :
The pinhole camera model provides a conceptual framework for understanding how a
spacecraft's camera captures, projects, and transforms light from distant objects in space into
digital data. The use of this model makes it easier to extract navigational information for
autonomous spacecraft navigation, which contributes to the success of space missions. The
spacecraft, equipped with an optical navigation camera, captures a raw image of a celestial
body (e.g., a planet). The captured image is processed using the pinhole camera model. This
model transforms the 3D coordinates (X, Y, Z) of points in the space to 2D coordinates (u, v) on
the image plane, considering the focal length of the camera and the orientation of its axes.
9
Optical Navigation Measurement Model
P I N H O L E C A M E R A M O D E L :
By applying the pinhole camera model, you use the captured pixel coordinates (u,v), along with the
camera's focal length (f), to estimate the 3D coordinates (X,Y,Z) of the scene points.
In the standard pinhole model, the projection relations are u = f·X/Z and v = f·Y/Z, where Z is the depth along the camera boresight.
The values of X,Y,Z change dynamically depending on the captured image, the motion of the
spacecraft, and the use of the pinhole camera model. Depth information is critical for accurate 3D
reconstruction, and the final coordinates may be transformed further to align with a particular
coordinate system.
10
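A minimal sketch of the standard pinhole projection described above, assuming the camera z-axis points along the boresight and ignoring lens distortion; the function and variable names are illustrative, not from the slides:

```python
import numpy as np

def project_pinhole(point_cam, f):
    """Project a 3D point (X, Y, Z) in the camera frame onto the image
    plane (u, v) using the standard pinhole relations u = f*X/Z, v = f*Y/Z."""
    X, Y, Z = point_cam
    if Z <= 0:
        raise ValueError("Point must lie in front of the camera (Z > 0).")
    u = f * X / Z
    v = f * Y / Z
    return u, v

# Example: a point 1,000 km ahead of the camera, offset 50 km and 20 km,
# imaged with a 0.1 m focal length (u, v come out in the same units as f).
u, v = project_pinhole((50e3, 20e3, 1000e3), f=0.1)
print(u, v)  # 0.005, 0.002
```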
Optical Navigation
Measurement Model
Line-of-Sight Unit Vector
Optical Navigation Measurement Model
L I N E - O F - S I G H T U N I T V E C TO R :
The Line-of-Sight (LOS) vector represents the direction from the spacecraft to an observed object. It
is a unit vector that points from the camera location to the point on the observed object
corresponding to the center of the camera's field of view. The LOS vector provides information
about the orientation and direction of the observed object relative to the spacecraft.
The pinhole camera model establishes a straightforward relationship between a point on the
detector plane (where the image is formed) and the corresponding line-of-sight unit vector. This
relationship involves the focal length of the camera, the coordinates of the point on the detector
plane (usually denoted as u and v), and some geometric parameters.
12
Optical Navigation Measurement Model
L I N E - O F - S I G H T U N I T V E C TO R :
The mathematical representation of this relationship is given by the formula:
• e_i^c is the line-of-sight unit vector (the superscript c denotes the camera frame).
• u and v are the coordinates of the point on the detector plane.
• f is the focal length of the camera.
13
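The equation itself did not survive this text export; a common form of the pinhole line-of-sight relation, consistent with the symbols above but given here only as a reconstruction (axis sign conventions vary between references), is:

```latex
\mathbf{e}_i^{c} \;=\; \frac{1}{\sqrt{u_i^{2}+v_i^{2}+f^{2}}}\begin{bmatrix} u_i \\ v_i \\ f \end{bmatrix}
```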
Optical Navigation Measurement Model
L I N E - O F - S I G H T U N I T V E C TO R :
Once this vector is obtained, it is typically necessary to rotate it from the camera frame to the inertial
frame. This rotation is achieved using transformation matrices, which are mathematical tools that
describe how vectors or coordinates in one coordinate system can be transformed into another.
Camera Frame: The camera frame is a reference frame attached to the spacecraft's imaging system
or camera. It is a local coordinate system centered on the focal point of the camera, and its
orientation is defined by the intrinsic parameters of the camera, such as its optical axis and principal
point.
Inertial Frame: The inertial frame is a fixed reference frame in space that does not change
regardless of the movement of the spacecraft. It serves as an external reference to indicate objects'
absolute orientation and position in space.
14
Optical Navigation Measurement Model
L I N E - O F - S I G H T U N I T V E C TO R ( R OTAT I O N O P E R AT I O N ) :
T_B^I: This is the transformation matrix between the body frame and the inertial frame. It explains how to
transform vectors or coordinates in the body frame (the frame attached to the spacecraft) into the inertial
frame (a fixed space reference frame).
T_C^B: This is the transformation matrix between the camera frame and the body frame. It describes how to
transform vectors or coordinates in the camera frame into the body frame.
e_i^c: This is the line-of-sight vector in the camera frame, representing the direction from the camera to a
specific point in space.
e_i^I: This is the rotated line-of-sight vector in the inertial frame, providing information about the orientation
and relative position of objects in space with respect to the spacecraft.
15
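In compact form, the rotation operation described by these matrices is (reconstructed from the definitions above):

```latex
\mathbf{e}_i^{I} \;=\; T_{B}^{I}\, T_{C}^{B}\, \mathbf{e}_i^{c}
```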
Optical Navigation
Measurement Model
Imaging Error Sources
Optical Navigation Measurement Model
I M A G I N G E R R O R S O U R C E S :
The pinhole camera model does not include imaging error sources such as stellar aberration,
parallax, stray light, misalignment, and diffraction. To create a more realistic mathematical
model, both external and internal error sources must be considered.
External Errors: These are factors external to the camera system, like the movement of stars
or the influence of other light sources. A realistic model must consider how these factors affect
the observed images.
Internal Errors: These are errors originating within the camera system itself, such as
inaccuracies in the camera's internal geometry, lens imperfections, or electronic noise. A
comprehensive model should incorporate these internal error sources for accuracy.
17
Optical Navigation Measurement Model
I M A G I N G E R R O R S O U R C E S :
Stellar Aberration: The model doesn't account for the apparent shift in star positions due to the
motion of the spacecraft.
Parallax: It doesn't consider the change in the apparent position of celestial objects when
viewed from different points in space.
Stray Light: External light entering the camera system unintentionally is not considered.
Misalignment: Deviations in the alignment of the camera components are not factored in.
Diffraction: Effects of light bending around obstacles or edges are ignored.
18
Fundamentals Of
Autonomous Optical
Navigation
Image Processing Procedure
Fundamentals Of Autonomous Optical
Navigation
I M AG E P R O C E S S I N G P R O C E D U R E :
The image processing procedure involves preparing the raw image, identifying genuine edge data
points, fitting ellipses to define the celestial object's limb, and computing the centroid for
extracting vital navigation measurements. The use of specific algorithms and techniques at
each step ensures the accuracy and reliability of the autonomous optical navigation system.
20
Image Processing Procedure:
21
Image Processing Procedure
S T E P - 1 G R E Y I M AG E :
The spacecraft, equipped with an
optical navigation camera, captures a
raw image of a celestial body (e.g., a
planet).
22
Image Processing Procedure
S T E P - 2 I M AG E P R E - P R O C E S S I N G :
This initial step involves techniques to
enhance the raw image and prepare it
for subsequent processing. Operations
like threshold segmentation, image
smoothing, sharpening, and correction
are applied to simplify the image,
reduce noise, and highlight important
features.
23
Image Processing Procedure
S T E P - 3 C AN N Y E D G E D E T E C T I O N :
Edge detection is crucial for identifying
significant features in the image. The
Canny Edge Detection algorithm is
often used, which involves smoothing
the image with a Gaussian filter,
calculating the gradient magnitude and
direction, and applying non-maximum
suppression to identify potential edges.
24
Image Processing Procedure
S T E P - 4 P S E U D O - E D G E R E M O VAL :
Due to factors such as the sun's
azimuth and camera viewing angles,
pseudo-edges (false edges) may be
present in the edge detection results.
These are typically caused by backlight-
shaded areas. Pseudo-edge removal
involves calculating the angle between
the lighting direction and the gradient
direction of edges to distinguish real
edges from false edges.
25
Image Processing Procedure
S T E P - 5 L E AS T S Q U AR E S B AS E D E L L I P S E - F I T T I N G AL G O R I T H M :
The perspective projection of a celestial
object, like a planet, onto the image
plane forms an ellipse. Limb profile
fitting involves fitting an ellipse to the
candidate edge points identified in the
image. This step is essential for
determining the apparent shape and
orientation of the observed celestial
object.
26
Image Processing Procedure
STEP-6 LEVENBERG-MARQUARDT-BASED ELLIPSE-FITTING ALGORITHM:
In some cases, particularly when
observing planets, fitting an ellipsoid or
sphere to the detected limb profile can
provide a more accurate representation
of the object's shape.
27
Image Processing Procedure
S T E P - 7 C E N T R O I D C O M P U TAT I O N :
The final step involves computing the centroid of the fitted limb profile or shape. The
centroid provides a representative point that can be used for navigation measurements.
28
Edge Detection
Canny Edge Detection
Edge Detection
C A N N Y E D G E D E T E C T I O N :
Canny Edge Detection is a popular image processing technique used to identify edges within an image.
Developed by John F. Canny in 1986, this method is widely employed due to its ability to accurately detect
edges while minimizing false positives. Edge detection is a fundamental step in image analysis, computer vision,
and object recognition.
The Canny Edge Detection algorithm involves several key steps:
• Smoothing (Gaussian Filtering)
• Gradient Calculation
• Non-Maximum Suppression (NMS)
• Double-Threshold Segmentation
• Edge Tracking by Hysteresis
30
Canny Edge Detection
S M O OT H I N G ( G AU S S I A N F I LT E R I N G ) :
Before detecting edges, the image is convolved with a Gaussian filter. This step reduces noise
and ensures that the subsequent edge detection is less sensitive to minor variations. The symbols of the
two-dimensional Gaussian function used in this step are:
G (i,j): the variables i and j represent the pixel coordinates in a two-dimensional image.
e: Euler's number, approximately equal to 2.71828, is the base of the natural logarithm.
σ (sigma): is a parameter known as the standard deviation, which controls the width of the
Gaussian function. A larger σ results in a wider and smoother distribution.
31
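The kernel formula referenced above is not reproduced in this export; the standard two-dimensional Gaussian and the smoothing convolution, given here as a reconstruction (some texts omit the normalization constant), are:

```latex
G(i,j) \;=\; \frac{1}{2\pi\sigma^{2}} \exp\!\left(-\frac{i^{2}+j^{2}}{2\sigma^{2}}\right),
\qquad
S(i,j) \;=\; G(i,j) \ast I(i,j)
```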
Canny Edge Detection
S M O OT H I N G ( G AU S S I A N F I LT E R I N G ) :
S (i,j): This is the result of the convolution operation and represents the smoothed or filtered
intensity value at the pixel location (i,j) in the image.
G (i,j): This is the value of the Gaussian function at the pixel coordinates (i,j). The Gaussian
function is used for image smoothing, and its values are determined by the spatial distribution of
the function centered around the pixel.
I (i,j): This represents the intensity values of the raw image at the pixel location (i,j). These are
the original pixel values of the image.
32
Canny Edge Detection
G R A D I E N T C A LC U L AT I O N :
Gradient Calculation step in the Canny Edge Detection algorithm:
• Horizontal Derivative (P[i,j])
• Vertical Derivative (Q[i,j])
• Edge Gradient Magnitude (M[i,j])
• Edge Gradient Direction (θ[i,j])
33
Gradient Calculation
H O R I Z O N T AL D E R I V AT I V E ( P [ I , J ] ) :
Capturing the rate of change of intensity along the horizontal axis, helping to identify regions in the
image where there are significant horizontal gradients or edges. This information is crucial for
subsequent steps in edge detection algorithms.
S(i,j): This is the result of applying a Gaussian filter to the image, representing the smoothed or
filtered intensity value at the pixel location (i,j).
P[i,j]: This term represents the first-order derivative finite difference in the horizontal direction at pixel
coordinates (i,j). The formula for P[i,j] involves subtracting the intensity values at adjacent positions
in the horizontal direction from the smoothed image. It's a measure of how much the intensity
changes from one pixel to the next in the horizontal dimension.
34
Gradient Calculation
V E R T I C AL D E R I V AT I V E ( Q [ I , J ] ) :
Capturing the rate of change of intensity along the vertical axis, helping to identify regions in the
image where there are significant vertical gradients or edges. This information is crucial for
subsequent steps in edge detection algorithms.
S(i,j): This is the result of applying a Gaussian filter to the image, representing the smoothed or
filtered intensity value at the pixel location (i,j).
Q[i,j]: This term represents the first-order derivative finite difference in the vertical direction at pixel
coordinates (i,j). The formula for Q[i,j] involves subtracting the intensity values at adjacent positions
in the vertical direction from the smoothed image. It's a measure of how much the intensity changes
from one pixel to the next in the vertical dimension.
35
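The finite-difference formulas for both derivatives described on the last two slides are not preserved here; the forms below follow the 2×2 neighborhood differences commonly used with the Canny detector and should be read as a reconstruction:

```latex
P[i,j] \approx \tfrac{1}{2}\bigl(S[i,j+1]-S[i,j]+S[i+1,j+1]-S[i+1,j]\bigr), \qquad
Q[i,j] \approx \tfrac{1}{2}\bigl(S[i,j]-S[i+1,j]+S[i,j+1]-S[i+1,j+1]\bigr)
```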
Gradient Calculation
E D G E G R AD I E N T M AG N I T U D E ( M [ I , J ] )
The edge gradient magnitude is a fundamental quantity in edge detection algorithms. It
highlights areas of the image where there are significant changes in intensity, indicating the
presence of edges or boundaries between different regions. High gradient magnitudes suggest
strong edges, while low magnitudes suggest smoother or uniform regions. This formula
combines the horizontal and vertical gradient components using the Pythagorean theorem. It
results in a scalar value that represents the overall magnitude or strength of the intensity
gradient at the specified pixel.
36
Gradient Calculation
E D G E G R AD I E N T M AG N I T U D E ( M [ I , J ] )
The edge gradient magnitude at pixel coordinates (i,j) is computed using the formula :
P[i,j] and Q[i,j]: These are the horizontal and vertical derivatives, respectively, calculated using
the first-order derivative of the Gaussian function. They represent the rates of intensity change
in the horizontal and vertical directions, capturing the gradient components.
M[i,j]: The edge gradient magnitude at pixel coordinates (i,j). The magnitude is calculated using
the square root of the sum of the squares of the horizontal and vertical derivatives.
37
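Written out, the magnitude formula described above is:

```latex
M[i,j] \;=\; \sqrt{P[i,j]^{2} + Q[i,j]^{2}}
```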
Gradient Calculation
E D G E G R AD I E N T D I R E C T I O N ( Θ [ I , J ] ) :
The edge gradient direction provides insights into the orientation of edges in the image. It
indicates the direction in which the intensity changes the most rapidly at a given pixel. Typically,
this information is used to identify the orientation of edges, which can be crucial in tasks such as
object recognition and computer vision.
θ[i,j] represents the direction of the edge gradient at pixel coordinates (i,j).
It is determined using the arctangent function based on the horizontal (P[i,j]) and vertical (Q[i,j])
derivatives.
38
Gradient Calculation
E D G E G R AD I E N T D I R E C T I O N ( Θ [ I , J ] ) :
The edge gradient direction at pixel coordinates (i,j) is computed using the arctangent function:
P[i,j] and Q[i,j]: These are the horizontal and vertical derivatives, respectively, obtained using
the first-order derivative of the Gaussian function. They represent the rates of intensity change
in the horizontal and vertical directions, determining the gradient components.
θ[i,j]: It is determined using the arctangent function based on the horizontal P[i,j] and vertical
Q[i,j] derivatives.
39
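Putting the four quantities together, a minimal NumPy sketch of the gradient step (the direction is the arctangent of Q over P, computed here with arctan2 for quadrant safety; S is assumed to be the Gaussian-smoothed image and all names are illustrative):

```python
import numpy as np

def gradient(S):
    """Compute Canny-style gradient components from a smoothed image S,
    using 2x2 finite differences (one common convention for the detector)."""
    P = 0.5 * (S[:-1, 1:] - S[:-1, :-1] + S[1:, 1:] - S[1:, :-1])   # horizontal derivative
    Q = 0.5 * (S[:-1, :-1] - S[1:, :-1] + S[:-1, 1:] - S[1:, 1:])   # vertical derivative
    M = np.hypot(P, Q)              # edge gradient magnitude, sqrt(P^2 + Q^2)
    theta = np.arctan2(Q, P)        # edge gradient direction in radians
    return P, Q, M, theta
```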
Edge Detection
Non-Maximum Suppression (NMS)
Edge Detection
N O N - M AX I M U M S U P P R E S S I O N ( N M S ) :
After calculating edge magnitudes and directions, NMS is applied to suppress non-maximum
points along the edges. The idea is to retain only the local maxima in the gradient direction and
discard the rest.
For each pixel location (i,j), the algorithm compares the edge magnitude M[i,j] with the
magnitudes of its two neighbors in the gradient direction. If M[i,j] is greater than both neighbors,
the pixel is retained; otherwise, it is suppressed.
The result of NMS is an edge map where only the local maxima along the edges are preserved.
This helps thin out the edges and provides a more accurate representation of the true edges in
the image.
41
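A simplified sketch of non-maximum suppression, quantizing the gradient direction to one of four neighbor pairs (a common practical approximation; variable names are illustrative):

```python
import numpy as np

def non_max_suppression(M, theta):
    """Keep only pixels whose magnitude M is a local maximum along the
    (quantized) gradient direction theta; all other pixels are zeroed."""
    H, W = M.shape
    out = np.zeros_like(M)
    angle = np.rad2deg(theta) % 180          # fold directions into [0, 180)
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:       # ~horizontal gradient
                n1, n2 = M[i, j - 1], M[i, j + 1]
            elif a < 67.5:                   # ~45 degree gradient
                n1, n2 = M[i - 1, j + 1], M[i + 1, j - 1]
            elif a < 112.5:                  # ~vertical gradient
                n1, n2 = M[i - 1, j], M[i + 1, j]
            else:                            # ~135 degree gradient
                n1, n2 = M[i - 1, j - 1], M[i + 1, j + 1]
            if M[i, j] >= n1 and M[i, j] >= n2:
                out[i, j] = M[i, j]
    return out
```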
Edge Detection
Double-Threshold Segmentation
Edge Detection
D O U B L E - T H R E S H O L D S E G M E N T AT I O N :
Double-Threshold Segmentation is a vital stage in the Canny Edge Detection algorithm,
designed to categorize edge points based on their strength and significance.
Two threshold values, τ1 and τ2 , are defined, where τ1 > τ2. These thresholds determine the
strength of edges. Pixels in the gradient magnitude map are classified into three categories
based on their magnitude:
• Strong Edges: M[i,j] ≥ τ1 - Considered highly probable edge points.
• Weak Edges: τ2 ≤ M[i,j] < τ1 - Potential edge points.
• Non-Edges: M[i,j] < τ2 - Pixels unlikely to be part of an edge.
43
Edge Detection
D O U B L E - T H R E S H O L D S E G M E N T AT I O N :
Using the defined threshold values, a binary segmentation is performed on the gradient
magnitude map. Two binary edge maps, T1 and T2 are generated corresponding to τ1 and τ2,
respectively. Algorithm Steps:
• Identify strong and weak edges based on threshold values.
• Create binary segmentation maps (T1 and T2)
44
Edge Detection
Edge Tracking by Hysteresis
Edge Detection
E D G E T R AC K I N G B Y H Y S T E R E S I S :
Edge tracking by hysteresis is a crucial step in the Canny Edge Detection algorithm and is
employed to connect weak edges to strong edges, forming continuous and meaningful edges in
the final output.
Starting from a strong edge pixel, the algorithm traces along the weak edges in the
neighborhood. If a weak edge pixel is connected to a strong edge pixel, it is considered part of
the edge. This process continues, forming chains of connected weak edges. Weak edges are
included in the final edge map only if they are part of a connected path to a strong edge.
This hysteresis-based approach helps prevent fragmentation of edges and ensures that weak
edges are included if they are part of a larger, connected edge structure.
The final output of the Canny Edge Detection algorithm is an edge map where strong edges and
connected weak edges form continuous lines, outlining meaningful features in the image.
46
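A compact sketch combining the double-threshold classification and the hysteresis tracking described on the last two slides, assuming M_nms is the suppressed magnitude map and tau1 > tau2 (names are illustrative):

```python
import numpy as np
from collections import deque

def hysteresis_threshold(M_nms, tau1, tau2):
    """Classify pixels as strong (>= tau1) or weak (tau2 <= m < tau1), then
    keep weak pixels only if they are 8-connected to a strong pixel."""
    strong = M_nms >= tau1
    weak = (M_nms >= tau2) & ~strong
    edges = strong.copy()
    queue = deque(zip(*np.nonzero(strong)))      # seed the search with strong pixels
    H, W = M_nms.shape
    while queue:
        i, j = queue.popleft()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W and weak[ni, nj] and not edges[ni, nj]:
                    edges[ni, nj] = True         # weak pixel connected to an edge chain
                    queue.append((ni, nj))
    return edges
```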
Edge Detection
Pseudo-Edge Removal
Edge Detection
P S E U D O - E D G E R E M O V AL :
Pseudo-edge removal is a step in image processing that aims to eliminate false or spurious
edges, known as pseudo-edges, which may appear in the edge detection results. These false
edges can be caused by various factors such as lighting conditions, shadows, or imaging
artifacts.
Pseudo-edges often arise due to issues like stellar aberration, parallax, stray light,
misalignment, and diffraction. These factors can introduce unwanted features in the edge
detection results.
Pseudo-edge removal involves analyzing the gradient direction of edges in relation to the
lighting direction.
The angle between the gradient vector of a detected edge and the direction of illumination is
considered. In the case of planetary imaging, this illumination direction may come from the Sun.
48
Edge Detection
P S E U D O - E D G E R E M O V AL :
A criterion is established to distinguish between real and fake edges based on the angle between the
gradient vector (g) of a planet's real edges and the sun lighting direction (n).
The condition typically involves checking whether the normalized dot product of the gradient vector and
the lighting direction is negative: (g · n)/(|g| |n|) < 0.
If the condition is satisfied, it indicates a real edge, and if not, the detected edge may be considered
a pseudo-edge.
Pseudo-edges failing the angle constraint are eliminated or marked as non-contributory.
This process helps ensure that only edges aligned with the lighting conditions and representing real
features are retained in the final results.
Removing pseudo-edges is crucial for accurate extraction of navigation information, such as the
determination of the apparent diameter, centroid, and other relevant parameters.
49
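A sketch of the angle-constraint check described above, assuming g is the per-pixel gradient vector and n is the sun lighting direction projected into the image plane; the sign convention follows the slide's criterion and may need to be flipped for a given dataset:

```python
import numpy as np

def is_real_edge(g, n, eps=1e-12):
    """Apply the slide's criterion: an edge point is kept as 'real' when the
    normalized dot product between its gradient g and the lighting direction n
    is negative, i.e. the edge faces the illumination."""
    g = np.asarray(g, dtype=float)
    n = np.asarray(n, dtype=float)
    cos_angle = g.dot(n) / (np.linalg.norm(g) * np.linalg.norm(n) + eps)
    return cos_angle < 0.0

# Example: gradient pointing away from the sun direction -> kept as a real edge.
print(is_real_edge(g=[-1.0, 0.0], n=[1.0, 0.0]))  # True
```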
Limb Profile Fitting
Least Squares-Based Ellipse Fitting
Limb Profile Fitting
L E AS T S Q U AR E S - B A S E D E L L I P S E F I T T I N G :
Limb Profile Fitting: Celestial bodies often appear as ellipses due to the perspective projection onto
an image plane. Limb profile fitting is used to accurately determine the boundary (limb) of these
bodies. This is crucial for extracting geometric information such as the size, orientation, and
position of the celestial body within the image.
Least-Squares-Based Ellipse Fitting: This technique is employed to find the best-fitting ellipse to a
set of data points representing the limb profile. By minimizing the sum of squared differences
between the observed limb profile and the modeled ellipse, it provides an accurate representation
of the body's shape.
The combination of limb profile fitting and least-squares-based ellipse fitting in autonomous optical
navigation enables spacecraft to extract accurate geometric information from images of celestial
bodies, contributing to precise navigation, shape analysis, and object recognition in planetary
science missions.
51
Limb Profile Fitting
L E AS T S Q U AR E S - B A S E D E L L I P S E F I T T I N G :
The objective is to fit an ellipse to a set of data points [xi,yi] obtained from edge detection:
• Implicit Quadratic Equation for Conic Sections
• Optimization Problem - Minimization of Residuals
• Deriving First-Order Partial Derivatives
• Linear Equations and Coefficient Matrix
• Gaussian Elimination Method
• Validity Check using Ellipse Inequality Constraint
52
Limb Profile Fitting
I M P L I C I T Q U AD R A T I C E Q U AT I O N F O R C O N I C S E C T I O N S
An ellipse is formed if the coefficients A, B, C satisfy the condition 4AC − B² > 0.
The equation (written out below) represents a general form of a quadratic equation in two variables xi and yi,
where A, B, C, D, E, F are coefficients:
A·xi²: Represents the quadratic term involving the square of the variable xi.
B·xi·yi: Represents the cross term involving the product of xi and yi.
C·yi²: Represents the quadratic term involving the square of the variable yi.
D·xi: Represents the linear term involving xi.
E·yi: Represents the linear term involving yi.
F: Represents the constant term.
53
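Written out, the implicit quadratic (conic) equation referenced above is:

```latex
A x_i^{2} + B x_i y_i + C y_i^{2} + D x_i + E y_i + F \;=\; 0,
\qquad 4AC - B^{2} > 0 \ \ \text{(ellipse condition)}
```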
Limb Profile Fitting
O P T I M I Z AT I O N P R O B L E M - M I N I M I Z AT I O N O F R E S I D U AL S :
The expression (written out below) represents the sum of squared residuals:
S(a): is the objective function to be minimized.
a = [A, B, C, D, E, F]^T: is the vector of parameters that define an ellipse in the implicit quadratic
equation.
f(a, xi): is the residual or error term for the i-th data point (xi, yi); it is the difference between the
observed value F(xi, yi) and the value predicted by the model.
54
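Written out, the sum of squared residuals takes the form below; identifying the residual with the value of the conic polynomial at each edge point is the usual convention for this algebraic fit and is assumed here, since the slide's own formula is not preserved:

```latex
S(\mathbf{a}) \;=\; \sum_{i=1}^{n} f(\mathbf{a}, x_i)^{2},
\qquad
f(\mathbf{a}, x_i) \;=\; A x_i^{2} + B x_i y_i + C y_i^{2} + D x_i + E y_i + F
```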
Limb Profile Fitting
O P T I M I Z AT I O N P R O B L E M - M I N I M I Z AT I O N O F R E S I D U AL S :
The optimization problem is formulated as finding the parameter vector a that minimizes S(a).
The goal of the least-squares-based ellipse fitting is to find the values of the parameters
A, B, C, D, E, F that minimize the sum of squared residuals. In other words, you want to find the
ellipse parameters that make the model value F(xi, yi) as close to zero as possible at the data points
(xi, yi).
55
Limb Profile Fitting
D E R I V I N G F I R S T - O R D E R P AR T I A L D E R I V AT I V E S :
To derive the first-order partial derivatives, you need to differentiate S(a) with respect to each
parameter A,B,C,D,E,F. Let's denote the partial derivatives as follows:
56
Limb Profile Fitting
D E R I V I N G F I R S T - O R D E R P AR T I A L D E R I V AT I V E S :
Then the parameters of the ellipse can be determined when the objective function has a
minimum:
The first-order partial derivatives (∂f/∂A) can be rewritten as follows:
57
Limb Profile Fitting
D E R I V I N G F I R S T - O R D E R P AR T I A L D E R I V AT I V E S :
Similarly, we have:
Repeat this process for each parameter to obtain the complete set of first-order partial
derivatives.
58
Limb Profile Fitting
L I N E AR E Q U AT I O N S AN D C O E F F I C I E N T M AT R I X :
After deriving the first-order partial derivatives, the system of linear equations is formed to
solve for the parameters of the ellipse. The linear equations can be represented in matrix form
as follows:
M: is the coefficient matrix.
a = [A, B, C, D, E, F]^T: is the vector of ellipse parameters.
T: Denotes the transpose of a matrix.
59
Limb Profile Fitting
L I N E AR E Q U AT I O N S AN D C O E F F I C I E N T M AT R I X :
Linear equations can be rewritten as follows:
60
Limb Profile Fitting
G AU S S I A N E L I M I N AT I O N M E T H O D
61
Limb Profile Fitting
V AL I D I T Y C H E C K U S I N G E L L I P S E I N E Q U AL I T Y C O N S T R AI N T
62
Limb Profile Fitting
Levenberg-Marquardt-Based Ellipse Fitting
Limb Profile Fitting
L E V E N B E R G - M AR Q U A R D T - B A S E D E L L I P S E F I T T I N G :
Levenberg-Marquardt (LM) is an optimization algorithm used for solving nonlinear least
squares problems. In the context of ellipse fitting, LM can be applied to iteratively refine the
parameters of an ellipse model in order to minimize the difference between the observed data
points and the predicted ellipse.
As before, the goal is to optimize the parameters a of the model curve f(a,xi) so that
the sum of the squares of the deviations becomes minimal:
64
Limb Profile Fitting
L E V E N B E R G - M AR Q U A R D T - B A S E D E L L I P S E F I T T I N G :
To start a minimization, an initial guess for the parameter vector a=[A,B,C,D,E,F]T is required.
In each iteration step, the parameter vector a is then replaced by a new estimate a+δ. To
determine the value of δ, the functions f(a+δ,xi) are approximated by their linearization:
f(a): is the function value at the current estimate of the parameters a.
Ji: is the Jacobian matrix evaluated at the current estimate, representing the partial derivatives
of f with respect to the parameters.
65
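Written out, the linearization referred to above is:

```latex
f(\mathbf{a}+\boldsymbol{\delta}, x_i) \;\approx\; f(\mathbf{a}, x_i) + J_i\,\boldsymbol{\delta},
\qquad
J_i = \frac{\partial f(\mathbf{a}, x_i)}{\partial \mathbf{a}}
```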
Limb Profile Fitting
L E V E N B E R G - M AR Q U A R D T - B A S E D E L L I P S E F I T T I N G :
At the minimum of the sum of squares S(a), the gradient of S with respect to δ should be zero. The
above first-order approximation of f(a+δ,xi) gives:
S(a+δ) ≈ Σ_{i=1..n} [ f(a, xi) + Ji δ ]², where:
S(a+δ): the objective function at the updated parameter values a+δ.
f(a, xi): the residual at the current parameter values a.
Ji: the Jacobian row of partial derivatives of f with respect to the parameters, evaluated at the current
estimate a.
δ: a vector that represents the correction or perturbation to the parameters.
66
Limb Profile Fitting
L E V E N B E R G - M AR Q U A R D T - B A S E D E L L I P S E F I T T I N G :
or in vector notation:
S(a+δ): is the objective function at the updated parameter values a+δ.
f(a): is the function value at the current estimate of the parameters a.
J: the Jacobian matrix of f, containing the first-order partial derivatives of f with respect to the
parameters, evaluated at the current estimate a.
δ: is a vector that represents the correction or perturbation to the parameters.
67
Limb Profile Fitting
L E V E N B E R G - M AR Q U A R D T - B A S E D E L L I P S E F I T T I N G :
Taking the derivative with respect to δ and setting it to zero:
gives:
where J is the Jacobian matrix whose i-th row equals Ji, and where f(a) is the vector with i-th
component f(a,xi).
68
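The resulting system is the familiar Gauss-Newton normal equations; the display below is a standard reconstruction, since the slide's own equation is not preserved:

```latex
\bigl(J^{T} J\bigr)\,\boldsymbol{\delta} \;=\; -\,J^{T}\,\mathbf{f}(\mathbf{a})
```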
Limb Profile Fitting
L E V E N B E R G - M AR Q U A R D T - B A S E D E L L I P S E F I T T I N G :
In order to control the convergence rate and accuracy, a damping factor λ is introduced, and the
normal equations above can be transformed as shown below.
To avoid the defect of slow convergence in the direction of a small gradient, the identity matrix
I is replaced with the diagonal matrix consisting of the diagonal elements of JᵀJ, i.e., diag(JᵀJ).
69
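With the damping factor, and with the identity matrix replaced by diag(JᵀJ) as described above, the damped system takes the usual Levenberg-Marquardt form (reconstructed):

```latex
\bigl(J^{T} J + \lambda I\bigr)\,\boldsymbol{\delta} \;=\; -\,J^{T}\,\mathbf{f}(\mathbf{a})
\;\;\longrightarrow\;\;
\bigl(J^{T} J + \lambda\,\mathrm{diag}(J^{T} J)\bigr)\,\boldsymbol{\delta} \;=\; -\,J^{T}\,\mathbf{f}(\mathbf{a})
```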
Limb Profile Fitting
L E V E N B E R G - M AR Q U A R D T - B A S E D E L L I P S E F I T T I N G :
All that is left to do is to solve the above linear equations to obtain δ and the new ellipse parameter
estimate a+δ. With repeated iterations, we obtain the optimal estimate a_final = (A, B, C, D, E, F)^T.
If these parameters meet the constraint condition 4AC − B² > 0, then a_final can be accepted as the
optimal estimate of the ellipse parameters.
70
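A minimal sketch of this iteration using SciPy's Levenberg-Marquardt solver; fixing F = −1 (to remove the trivial all-zero solution) and shifting the points to their centroid are assumptions of this sketch, not necessarily the slides' exact formulation:

```python
import numpy as np
from scipy.optimize import least_squares

def conic_residuals(p, x, y):
    """Algebraic residual of A x^2 + B xy + C y^2 + D x + E y - 1 at each point.
    Fixing F = -1 removes the trivial all-zero solution (one common normalization)."""
    A, B, C, D, E = p
    return A * x**2 + B * x * y + C * y**2 + D * x + E * y - 1.0

def fit_ellipse_lm(x, y):
    """Levenberg-Marquardt refinement of conic parameters for limb edge points.
    Points are shifted to their centroid first so the F = -1 scaling is well posed."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    xs, ys = x - mx, y - my
    r2 = xs.var() + ys.var()                              # rough squared limb radius
    p0 = np.array([1.0 / r2, 0.0, 1.0 / r2, 0.0, 0.0])    # circle as the initial guess
    sol = least_squares(conic_residuals, p0, args=(xs, ys), method="lm")
    A, B, C, D, E = sol.x
    if 4 * A * C - B**2 <= 0:
        raise ValueError("fitted conic is not an ellipse")
    # Ellipse center in the shifted frame, mapped back to image coordinates.
    den = 4 * A * C - B**2
    x0 = (B * E - 2 * C * D) / den + mx
    y0 = (B * D - 2 * A * E) / den + my
    return sol.x, (x0, y0)
```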
Centroid Extraction
Centroid Extraction
CENTROID EXTRACTION:
The goal is to extract the centroid of a celestial body, specifically a planet, from an image. This
is achieved through edge detection and ellipse fitting methods.
x0, y0: Coordinates of the ellipse center.
72
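The slide's expressions are not preserved in this export; the standard formulas for the center of the conic Ax² + Bxy + Cy² + Dx + Ey + F = 0, assumed here to match what the slide showed, are:

```latex
x_0 \;=\; \frac{B E - 2 C D}{4AC - B^{2}},
\qquad
y_0 \;=\; \frac{B D - 2 A E}{4AC - B^{2}}
```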
Centroid Extraction
CENTROID EXTRACTION:
a: Semi-major axis.
b: Semi-minor axis.
73
Centroid Extraction
CENTROID EXTRACTION:
ϕ: Inclined angle from the x-axis to the ellipse major axis.
74
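The inclination angle follows from the quadratic coefficients; the standard expression (valid for A ≠ C, reconstructed here) is:

```latex
\phi \;=\; \tfrac{1}{2}\,\arctan\!\left(\frac{B}{A - C}\right)
```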
Simulation Results And
Analysis
Simulation Results And Analysis
TEST OVERVIEW:
Testing and analysis of an image processing algorithm developed for deep-space autonomous
optical navigation. The algorithm is tested using both real images from the MESSENGER
mission and synthetic simulated images.
76
Simulation Results And Analysis
RESULTS ON REAL IMAGES (MERCURY AND VENUS):
Display the results of the image
processing algorithm applied to real
images containing Mercury and
Venus.
Fitted ellipses (red and blue outlines)
are superimposed on the raw images.
Both ellipse fitting algorithms
accurately acquire the limb profiles of
Mercury and Venus.
Pseudo-edge data points in the interior
of the planet limb are eliminated by
threshold segmentation before Canny
edge detection.
77
Conclusion
In conclusion, the development of robust Autonomous Optical Navigation
(OPNAV) capabilities is imperative for the success of future deep-space
exploration missions, whether robotic or manned. This study introduces novel
image processing algorithms, specifically the Least Squares-based and
Levenberg-Marquardt-based ellipse fitting methods, designed to meet the
demanding requirements of autonomous OPNAV. Through comprehensive
testing on both real flight images from the MESSENGER mission and simulated
scenarios, the proposed algorithms demonstrate high accuracy. Maximum
ellipse fitting errors are consistently reported to be below three pixels, and the
error of the line-of-sight vector to the object center remains less than 1.67×10⁻⁴ rad.
Looking forward, the research suggests a future focus on an end-to-end
assessment of the complete autonomous OPNAV algorithm, integrating the
developed image processing techniques with a representative dynamics model
and navigation filter algorithm. This holistic approach aims to further enhance
the performance and reliability of autonomous space exploration systems,
marking a significant stride in advancing our capabilities for deep-space exploration.
83
Thank You
Presenter Name: Ahmad AL-Zahran
84

More Related Content

PDF
Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...
PDF
Visual Odometry using Stereo Vision
PDF
The flow of baseline estimation using a single omnidirectional camera
PDF
Report bep thomas_blanken
PDF
Robot Pose Estimation: A Vertical Stereo Pair Versus a Horizontal One
PDF
Application of Vision based Techniques for Position Estimation
PDF
ShawnQuinnCSS565FinalResearchProject
PDF
Satellite Imaging System
Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...
Visual Odometry using Stereo Vision
The flow of baseline estimation using a single omnidirectional camera
Report bep thomas_blanken
Robot Pose Estimation: A Vertical Stereo Pair Versus a Horizontal One
Application of Vision based Techniques for Position Estimation
ShawnQuinnCSS565FinalResearchProject
Satellite Imaging System

Similar to Image Processing Algorithms For Deep-Space Autonomous Optical Navigation 2.pptx (20)

PDF
IRJET- Study on the Feature of Cavitation Bubbles in Hydraulic Valve by using...
PDF
The Technology Research of Camera Calibration Based On LabVIEW
PDF
Final Paper
PDF
Real-time 3D Object Pose Estimation and Tracking for Natural Landmark Based V...
PDF
Aerial photogrammetry chapter pptx-1.pdf
PDF
201607__EvaluationoftheperformanceofKOMPSAT-3stereoimagesintermsofpoistioning...
PDF
A ROS IMPLEMENTATION OF THE MONO-SLAM ALGORITHM
PDF
D04432528
PPT
A Study on the Development of High Accuracy Solar Tracking Systems
PDF
Fisheye Omnidirectional View in Autonomous Driving
PDF
Three-dimensional structure from motion recovery of a moving object with nois...
PDF
ess-autonomousnavigation-ijrr10final.pdf
PDF
STUDY ON THE PATH TRACKING AND POSITIONING METHOD OF WHEELED MOBILE ROBOT
PDF
2013APRU_NO40-abstract-mobilePIV_YangYaoYu
PDF
An Assessment of Image Matching Algorithms in Depth Estimation
PDF
Object Distance Detection using a Joint Transform Correlator
PDF
IRJET- Simultaneous Localization and Mapping for Automatic Chair Re-Arran...
PDF
Review_of_phase_measuring_deflectometry.pdf
PPT
Photogrammetry1
PDF
Photogrammetry.pdf
IRJET- Study on the Feature of Cavitation Bubbles in Hydraulic Valve by using...
The Technology Research of Camera Calibration Based On LabVIEW
Final Paper
Real-time 3D Object Pose Estimation and Tracking for Natural Landmark Based V...
Aerial photogrammetry chapter pptx-1.pdf
201607__EvaluationoftheperformanceofKOMPSAT-3stereoimagesintermsofpoistioning...
A ROS IMPLEMENTATION OF THE MONO-SLAM ALGORITHM
D04432528
A Study on the Development of High Accuracy Solar Tracking Systems
Fisheye Omnidirectional View in Autonomous Driving
Three-dimensional structure from motion recovery of a moving object with nois...
ess-autonomousnavigation-ijrr10final.pdf
STUDY ON THE PATH TRACKING AND POSITIONING METHOD OF WHEELED MOBILE ROBOT
2013APRU_NO40-abstract-mobilePIV_YangYaoYu
An Assessment of Image Matching Algorithms in Depth Estimation
Object Distance Detection using a Joint Transform Correlator
IRJET- Simultaneous Localization and Mapping for Automatic Chair Re-Arran...
Review_of_phase_measuring_deflectometry.pdf
Photogrammetry1
Photogrammetry.pdf
Ad

Recently uploaded (20)

PDF
Encapsulation theory and applications.pdf
PDF
cuic standard and advanced reporting.pdf
PPTX
Detection-First SIEM: Rule Types, Dashboards, and Threat-Informed Strategy
PDF
Diabetes mellitus diagnosis method based random forest with bat algorithm
PPTX
Digital-Transformation-Roadmap-for-Companies.pptx
PDF
Architecting across the Boundaries of two Complex Domains - Healthcare & Tech...
PDF
TokAI - TikTok AI Agent : The First AI Application That Analyzes 10,000+ Vira...
PPT
Teaching material agriculture food technology
PDF
Machine learning based COVID-19 study performance prediction
PDF
Building Integrated photovoltaic BIPV_UPV.pdf
PDF
Review of recent advances in non-invasive hemoglobin estimation
PDF
Network Security Unit 5.pdf for BCA BBA.
PDF
Peak of Data & AI Encore- AI for Metadata and Smarter Workflows
PDF
Profit Center Accounting in SAP S/4HANA, S4F28 Col11
PPT
“AI and Expert System Decision Support & Business Intelligence Systems”
PDF
MIND Revenue Release Quarter 2 2025 Press Release
PPTX
sap open course for s4hana steps from ECC to s4
PPTX
Effective Security Operations Center (SOC) A Modern, Strategic, and Threat-In...
PDF
Approach and Philosophy of On baking technology
PDF
7 ChatGPT Prompts to Help You Define Your Ideal Customer Profile.pdf
Encapsulation theory and applications.pdf
cuic standard and advanced reporting.pdf
Detection-First SIEM: Rule Types, Dashboards, and Threat-Informed Strategy
Diabetes mellitus diagnosis method based random forest with bat algorithm
Digital-Transformation-Roadmap-for-Companies.pptx
Architecting across the Boundaries of two Complex Domains - Healthcare & Tech...
TokAI - TikTok AI Agent : The First AI Application That Analyzes 10,000+ Vira...
Teaching material agriculture food technology
Machine learning based COVID-19 study performance prediction
Building Integrated photovoltaic BIPV_UPV.pdf
Review of recent advances in non-invasive hemoglobin estimation
Network Security Unit 5.pdf for BCA BBA.
Peak of Data & AI Encore- AI for Metadata and Smarter Workflows
Profit Center Accounting in SAP S/4HANA, S4F28 Col11
“AI and Expert System Decision Support & Business Intelligence Systems”
MIND Revenue Release Quarter 2 2025 Press Release
sap open course for s4hana steps from ECC to s4
Effective Security Operations Center (SOC) A Modern, Strategic, and Threat-In...
Approach and Philosophy of On baking technology
7 ChatGPT Prompts to Help You Define Your Ideal Customer Profile.pdf
Ad

Image Processing Algorithms For Deep-Space Autonomous Optical Navigation 2.pptx

  • 1. Image Processing Algorithms For Deep-Space Autonomous Optical Navigation
  • 2. Agenda • Introduction • Optical Navigation and Image Processing • Role of Image Processing in Autonomous Optical Navigation • Fundamentals Of Autonomous Optical Navigation • Optical Navigation Measurement Model • Pinhole Camera Model • Line-of-Sight Unit Vector • Imaging Error Sources • Image Processing Procedure 2 • Edge Detection • Canny Edge Detection • Non-Maximum Suppression (NMS) • Double-Threshold Segmentation • Edge Tracking by Hysteresis • Pseudo-Edge Removal • Limb Profile Fitting • Least Squares-Based Ellipse Fitting • Levenberg-Marquardt-Based Ellipse Fitting • Centroid Extraction • Simulation Results And Analysis
  • 3. Introduction Historically, spacecraft navigation relied heavily on Earth-based Deep Space Network (DSN) range and range-rate measurements. While effective, this method had drawbacks such as high costs and a low update frequency due to the limited availability of tracking stations on Earth. To address these limitations, a new solution known as autonomous optical navigation was developed. This refers to a spacecraft's ability to navigate independently of Earth-based communication systems by using optical sensors and image processing algorithms. This method improves the spacecraft's autonomy and is especially useful during phases where traditional radio tracking navigation is limited or when the system fails. 3
  • 4. Optical Navigation And Image Processing Role of Image Processing in Autonomous Optical Navigation
  • 5. Optical Navigation And Image Processing R O L E O F I M AG E P R O C E S S I N G I N AU TO N O M O U S O P T I C AL N AV I G AT I O N : Image processing play an extremely important part in autonomous optical navigation, serving as the technological backbone that allows spacecraft to interpret and extract important navigation information from captured images. Image processing algorithms are critical in extracting essential navigational observables from raw images, such as the line-of-sight vector, apparent diameter, and centroid. These algorithms aim to overcome the difficulties posed by spacecraft's limited computational capacity and the unique environmental conditions of deep space. Image processing enables spacecraft to autonomously analyze visual data, make informed navigational decisions, and respond to dynamic changes in their surroundings by improving the efficiency and accuracy of data extraction. The implementation of advanced image processing techniques not only improves navigation accuracy but also allows for real-time decision-making. 5
  • 7. Optical Navigation Measurement Model O P T I C A L N AV I G AT I O N M E A S U R E M E N T : The optical navigation measurement model refers to the mathematical representation or framework used to describe and quantify the measurements obtained from optical sensors for navigation purposes. Optical navigation involves utilizing information from visual cues or features in the environment to determine the position, orientation, or movement of a vehicle, such as a spacecraft, drone, or autonomous vehicle. The measurement model typically includes various parameters and equations that relate the observed optical measurements to the actual state of the system being navigated. This model is crucial for understanding how the optical sensor data can be interpreted and integrated into a navigation system to estimate the system's pose (position and orientation). 7
  • 9. Optical Navigation Measurement Model P I N H O L E C A M E R A M O D E L : The pinhole camera model provides a conceptual framework for understanding how a spacecraft's camera captures, projects, and transforms light from distant objects in space into digital data. The use of this model makes it easier to extract navigational information for autonomous spacecraft navigation, which contributes to the success of space missions. The spacecraft, equipped with an optical navigation camera, captures a raw image of a celestial body (e.g., a planet). The captured image is processed using the pinhole camera model. This model transforms the 3D coordinates (X, Y, Z) of points in the space to 2D coordinates (u, v) on the image plane, considering the focal length of the camera and the orientation of its axes. 9
  • 10. Optical Navigation Measurement Model P I N H O L E C A M E R A M O D E L : By applying the pinhole camera model, you use the captured pixel coordinates (u,v), along with the camera's focal length (f), to estimate the 3D coordinates (X,Y,Z) of the scene points. The equations involved are: The values of X,Y,Z change dynamically depending on the captured image, the motion of the spacecraft, and the use of the pinhole camera model. Depth information is critical for accurate 3D reconstruction, and the final coordinates may be transformed further to align with a particular coordinate system. 10
  • 12. Optical Navigation Measurement Model L I N E - O F - S I G H T U N I T V E C TO R : Line-of-Sight (LOS) vector represents the direction from the spacecraft to an observed object. It is a unit vector that points from the camera location to the point on the observed object corresponding to the center of the camera's field of view. The LOS vector provides information about the orientation and direction of the observed object relative to the spacecraft. The pinhole camera model establishes a straightforward relationship between a point on the detector plane (where the image is formed) and the corresponding line-of-sight unit vector. This relationship involves the focal length of the camera, the coordinates of the point on the detector plane (usually denoted as u and v), and some geometric parameters. 12
  • 13. Optical Navigation Measurement Model L I N E - O F - S I G H T U N I T V E C TO R : The mathematical representation of this relationship is given by the formula: • 𝐞𝐢 𝐜 is the line-of-sight unit vector. • u and v are the coordinates of the point on the detector plane. • f is the focal length of the camera. 13
  • 14. Optical Navigation Measurement Model L I N E - O F - S I G H T U N I T V E C TO R : Once this vector is obtained, it is typically necessary to rotate it from the camera frame to the inertial frame. This rotation is achieved using transformation matrices, which are mathematical tools that describe how vectors or coordinates in one coordinate system can be transformed into another. Camera Frame: The camera frame is a reference frame attached to the spacecraft's imaging system or camera. It is a local coordinate system centered on the focal point of the camera, and its orientation is defined by the intrinsic parameters of the camera, such as its optical axis and principal point. 𝐈𝐧𝐞𝐫𝐭𝐢𝐚𝐥 𝐅𝐫𝐚𝐦𝐞: The inertial frame is a fixed reference frame in space that does not change regardless of the movement of the spacecraft. It serves as an external reference to indicate objects' absolute orientation and position in space. 14
  • 15. Optical Navigation Measurement Model L I N E - O F - S I G H T U N I T V E C TO R ( R OTAT I O N O P E R AT I O N ) : 𝑻𝑩 𝑰 : This is the transformation matrix between the body frame and the inertial frame. It explains how to transform vectors or coordinates in the body frame (the frame attached to the spacecraft) into the inertial frame (a fixed space reference frame). 𝑻𝑪 𝑩 : This is the transformation matrix between the camera frame and the body frame. It describes how to transform vectors or coordinates in the camera frame into the body frame. 𝒆𝒊 𝒄 : This is the line-of-sight vector in the camera frame, representing the direction from the camera to a specific point in space. 𝒆𝒊 𝑰 : This is the rotated line-of-sight vector in the inertial frame, providing information about the orientation and relative position of objects in space with respect to the spacecraft. 15
  • 17. Optical Navigation Measurement Model I M A G I N G E R R O R S O U R C E S : The pinhole camera model does not include imaging error sources such as stellar aberration, parallax, stray light, misalignment, and diffraction. To create a more realistic mathematical model, both external and internal error sources must be considered. External Errors: These are factors external to the camera system, like the movement of stars or the influence of other light sources. A realistic model must consider how these factors affect the observed images. Internal Errors: These are errors originating within the camera system itself, such as inaccuracies in the camera's internal geometry, lens imperfections, or electronic noise. A comprehensive model should incorporate these internal error sources for accuracy. 17
  • 18. Optical Navigation Measurement Model I M A G I N G E R R O R S O U R C E S : Stellar Aberration: The model doesn't account for the apparent shift in star positions due to the motion of the spacecraft. Parallax: It doesn't consider the change in the apparent position of celestial objects when viewed from different points in space. Stray Light: External light entering the camera system unintentionally is not considered. Misalignment: Deviations in the alignment of the camera components are not factored in. Diffraction: Effects of light bending around obstacles or edges are ignored. 18
  • 20. Fundamentals Of Autonomous Optical Navigation I M AG E P R O C E S S I N G P R O C E D U R E : image processing procedure involves preparing the raw image, identifying genuine edge data points, fitting ellipses to define the celestial object's limb, and computing the centroid for extracting vital navigation measurements. The use of specific algorithms and techniques at each step ensures the accuracy and reliability of the autonomous optical navigation system. 20
  • 22. Image Processing Procedure S T E P - 1 G R E Y I M AG E : The spacecraft, equipped with an optical navigation camera, captures a raw image of a celestial body (e.g., a planet). 22
  • 23. Image Processing Procedure S T E P - 2 I M AG E P R E - P R O C E S S I N G : This initial step involves techniques to enhance the raw image and prepare it for subsequent processing. Operations like threshold segmentation, image smoothing, sharpening, and correction are applied to simplify the image, reduce noise, and highlight important features. 23
  • 24. Image Processing Procedure S T E P - 3 C AN N Y E D G E D E T E C T I O N : Edge detection is crucial for identifying significant features in the image. The Canny Edge Detection algorithm is often used, which involves smoothing the image with a Gaussian filter, calculating the gradient magnitude and direction, and applying non-maximum suppression to identify potential edges. 24
  • 25. Image Processing Procedure S T E P - 4 P S E U D O - E D G E R E M O VAL : Due to factors such as the sun's azimuth and camera viewing angles, pseudo-edges (false edges) may be present in the edge detection results. These are typically caused by backlight- shaded areas. Pseudo-edge removal involves calculating the angle between the lighting direction and the gradient direction of edges to distinguish real edges from false edges. 25
  • 26. Image Processing Procedure S T E P - 5 L E AS T S Q U AR E S B AS E D E L L I P S E - F I T T I N G AL G O R I T H M : The perspective projection of a celestial object, like a planet, onto the image plane forms an ellipse. Limb profile fitting involves fitting an ellipse to the candidate edge points identified in the image. This step is essential for determining the apparent shape and orientation of the observed celestial object. 26
  • 27. Image Processing Procedure S T E P - 6 L E V E N B E R G - M AR Q U A R D T B AS E D E L L I P S E - F I T T I N G AL G O R I T H M : In some cases, particularly when observing planets, fitting an ellipsoid or sphere to the detected limb profile can provide a more accurate representation of the object's shape. 27
  • 28. Image Processing Procedure S T E P - 7 C E N T R O I D C O M P U TAT I O N : The final step involves computing the centroid of the fitted limb profile or shape. The centroid provides a representative point that can be used for navigation measurements. 28
  • 30. Edge Detection C A N N Y E D G E D E T E C T I O N : Canny Edge Detection is a popular image processing technique used to identify edges within an image. Developed by John F. Canny in 1986, this method is widely employed due to its ability to accurately detect edges while minimizing false positives. Edge detection is a fundamental step in image analysis, computer vision, and object recognition. The Canny Edge Detection algorithm involves several key steps: • Smoothing (Gaussian Filtering) • Gradient Calculation • Non-Maximum Suppression (NMS) • Double-Threshold Segmentation • Edge Tracking by Hysteresis 30
  • 31. Canny Edge Detection S M O OT H I N G ( G AU S S I A N F I LT E R I N G ) : Before detecting edges, the image is convolved with a Gaussian filter. This step reduces noise and ensures that the subsequent edge detection is less sensitive to minor variations. two- dimensional Gaussian function used in image processing: G (i,j): the variables i and j represent the pixel coordinates in a two-dimensional image. e: Euler's number, approximately equal to 2.71828, is the base of the natural logarithm. σ (sigma): is a parameter known as the standard deviation, which controls the width of the Gaussian function. A larger σ results in a wider and smoother distribution. 31
  • 32. Canny Edge Detection S M O OT H I N G ( G AU S S I A N F I LT E R I N G ) : S (i,j): This is the result of the convolution operation and represents the smoothed or filtered intensity value at the pixel location (i,j) in the image. G (i,j): This is the value of the Gaussian function at the pixel coordinates (i,j). The Gaussian function is used for image smoothing, and its values are determined by the spatial distribution of the function centered around the pixel. I (i,j): This represents the intensity values of the raw image at the pixel location (i,j). These are the original pixel values of the image. 32
  • 33. Canny Edge Detection G R A D I E N T C A LC U L AT I O N : Gradient Calculation step in the Canny Edge Detection algorithm: • Horizontal Derivative (P[i,j]) • Vertical Derivative (Q[i,j]) • Edge Gradient Magnitude (M[i,j]) • Edge Gradient Direction (θ[i,j]) 33
  • 34. Gradient Calculation H O R I Z O N T AL D E R I V AT I V E ( P [ I , J ] ) : Capturing the rate of change of intensity along the horizontal axis, helping to identify regions in the image where there are significant horizontal gradients or edges. This information is crucial for subsequent steps in edge detection algorithms. S(i,j): This is the result of applying a Gaussian filter to the image, representing the smoothed or filtered intensity value at the pixel location (i,j). P[i,j]: This term represents the first-order derivative finite difference in the horizontal direction at pixel coordinates (i,j). The formula for P[i,j] involves subtracting the intensity values at adjacent positions in the horizontal direction from the smoothed image. It's a measure of how much the intensity changes from one pixel to the next in the horizontal dimension. 34
  • 35. Gradient Calculation V E R T I C AL D E R I V AT I V E ( Q [ I , J ] ) : Capturing the rate of change of intensity along the vertical axis, helping to identify regions in the image where there are significant vertical gradients or edges. This information is crucial for subsequent steps in edge detection algorithms. S(i,j): This is the result of applying a Gaussian filter to the image, representing the smoothed or filtered intensity value at the pixel location (i,j). Q[i,j]: This term represents the first-order derivative finite difference in the vertical direction at pixel coordinates (i,j). The formula for Q[i,j] involves subtracting the intensity values at adjacent positions in the vertical direction from the smoothed image. It's a measure of how much the intensity changes from one pixel to the next in the vertical dimension. 35
  • 36. Gradient Calculation E D G E G R AD I E N T M AG N I T U D E ( M [ I , J ] ) The edge gradient magnitude is a fundamental quantity in edge detection algorithms. It highlights areas of the image where there are significant changes in intensity, indicating the presence of edges or boundaries between different regions. High gradient magnitudes suggest strong edges, while low magnitudes suggest smoother or uniform regions. This formula combines the horizontal and vertical gradient components using the Pythagorean theorem. It results in a scalar value that represents the overall magnitude or strength of the intensity gradient at the specified pixel. 36
  • 37. Gradient Calculation E D G E G R A D I E N T M A G N I T U D E ( M [ I , J ] ) : The edge gradient magnitude at pixel coordinates (i,j) is computed as M[i,j] = √(P[i,j]² + Q[i,j]²). P[i,j] and Q[i,j]: the horizontal and vertical derivatives, respectively, calculated using the first-order derivative of the Gaussian function; they represent the rates of intensity change in the horizontal and vertical directions, capturing the gradient components. M[i,j]: the edge gradient magnitude at (i,j), the square root of the sum of the squares of the horizontal and vertical derivatives. 37
  • 38. Gradient Calculation E D G E G R AD I E N T D I R E C T I O N ( Θ [ I , J ] ) : The edge gradient direction provides insights into the orientation of edges in the image. It indicates the direction in which the intensity changes the most rapidly at a given pixel. Typically, this information is used to identify the orientation of edges, which can be crucial in tasks such as object recognition and computer vision. θ[i,j] represents the direction of the edge gradient at pixel coordinates (i,j). It is determined using the arctangent function based on the horizontal (P[i,j]) and vertical (Q[i,j]) derivatives. 38
  • 39. Gradient Calculation E D G E G R A D I E N T D I R E C T I O N ( Θ [ I , J ] ) : The edge gradient direction at pixel coordinates (i,j) is computed with the arctangent function: θ[i,j] = arctan(Q[i,j] / P[i,j]). P[i,j] and Q[i,j]: the horizontal and vertical derivatives, respectively, obtained using the first-order derivative of the Gaussian function; they represent the rates of intensity change in the horizontal and vertical directions and determine the gradient components. θ[i,j]: the direction of the edge gradient at (i,j). A short sketch computing both the magnitude and the direction follows. 39
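The magnitude and direction follow directly from P and Q; in the small sketch below, np.hypot and np.arctan2 avoid the overflow and division-by-zero issues of a naive arctan(Q/P).

```python
import numpy as np

def gradient_magnitude_direction(P, Q):
    """M = sqrt(P^2 + Q^2); theta = arctan2(Q, P), in radians."""
    M = np.hypot(P, Q)
    theta = np.arctan2(Q, P)
    return M, theta
```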
  • 41. Edge Detection N O N - M AX I M U M S U P P R E S S I O N ( N M S ) : After calculating edge magnitudes and directions, NMS is applied to suppress non-maximum points along the edges. The idea is to retain only the local maxima in the gradient direction and discard the rest. For each pixel location (i,j), the algorithm compares the edge magnitude M[i,j] with the magnitudes of its two neighbors in the gradient direction. If M[i,j] is greater than both neighbors, the pixel is retained; otherwise, it is suppressed. The result of NMS is an edge map where only the local maxima along the edges are preserved. This helps thin out the edges and provides a more accurate representation of the true edges in the image. 41
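A straightforward (unoptimized) sketch of NMS, assuming the gradient direction is quantized to four bins; interpolating between neighbours, as some Canny variants do, is omitted for brevity.

```python
import numpy as np

def non_maximum_suppression(M, theta):
    """Keep a pixel only if its magnitude is a local maximum along the gradient direction."""
    H, W = M.shape
    out = np.zeros_like(M)
    angle = (np.rad2deg(theta) + 180.0) % 180.0            # fold direction into [0, 180)
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:                     # ~0 deg: horizontal gradient
                n1, n2 = M[i, j - 1], M[i, j + 1]
            elif a < 67.5:                                 # ~45 deg
                n1, n2 = M[i - 1, j + 1], M[i + 1, j - 1]
            elif a < 112.5:                                # ~90 deg: vertical gradient
                n1, n2 = M[i - 1, j], M[i + 1, j]
            else:                                          # ~135 deg
                n1, n2 = M[i - 1, j - 1], M[i + 1, j + 1]
            if M[i, j] >= n1 and M[i, j] >= n2:            # local maximum -> retain
                out[i, j] = M[i, j]
    return out
```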
  • 43. Edge Detection D O U B L E - T H R E S H O L D S E G M E N T A T I O N : Double-threshold segmentation is a vital stage in the Canny Edge Detection algorithm, designed to categorize edge points based on their strength and significance. Two threshold values, τ1 and τ2, are defined, where τ1 > τ2. These thresholds determine the strength of edges. Pixels in the gradient magnitude map are classified into three categories based on their magnitude: • Strong Edges: M[i,j] ≥ τ1 - highly probable edge points. • Weak Edges: τ2 ≤ M[i,j] < τ1 - potential edge points. • Non-Edges: M[i,j] < τ2 - pixels unlikely to be part of an edge. 43
  • 44. Edge Detection D O U B L E - T H R E S H O L D S E G M E N T A T I O N : Using the defined threshold values, a binary segmentation is performed on the gradient magnitude map. Two binary edge maps, T1 and T2, are generated, corresponding to τ1 and τ2 respectively. Algorithm Steps: • Identify strong and weak edges based on the threshold values. • Create the binary segmentation maps T1 and T2 (a short sketch follows). 44
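A minimal sketch of this classification, with τ1 (high) and τ2 (low) passed in as parameters:

```python
import numpy as np

def double_threshold(M, tau_high, tau_low):
    """Return T1 (strong edges) and T2 (strong + weak edge candidates)."""
    strong = M >= tau_high                   # M >= tau_1
    weak = (M >= tau_low) & (M < tau_high)   # tau_2 <= M < tau_1
    return strong, strong | weak             # T1, T2
```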
  • 46. Edge Detection E D G E T R AC K I N G B Y H Y S T E R E S I S : Edge tracking by hysteresis is a crucial step in the Canny Edge Detection algorithm and is employed to connect weak edges to strong edges, forming continuous and meaningful edges in the final output. Starting from a strong edge pixel, the algorithm traces along the weak edges in the neighborhood. If a weak edge pixel is connected to a strong edge pixel, it is considered part of the edge. This process continues, forming chains of connected weak edges. Weak edges are included in the final edge map only if they are part of a connected path to a strong edge. This hysteresis-based approach helps prevent fragmentation of edges and ensures that weak edges are included if they are part of a larger, connected edge structure. The final output of the Canny Edge Detection algorithm is an edge map where strong edges and connected weak edges form continuous lines, outlining meaningful features in the image. 46
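A breadth-first sketch of the hysteresis step over 8-connected neighbours; `strong` and `weak` are the boolean maps from the double-threshold stage (e.g. `weak = T2 & ~T1`).

```python
import numpy as np
from collections import deque

def hysteresis(strong, weak):
    """Grow edges outward from strong pixels through connected weak pixels."""
    H, W = strong.shape
    edges = strong.copy()
    queue = deque(zip(*np.nonzero(strong)))            # start from all strong pixels
    while queue:
        i, j = queue.popleft()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W and weak[ni, nj] and not edges[ni, nj]:
                    edges[ni, nj] = True               # weak pixel connected to a strong edge
                    queue.append((ni, nj))
    return edges
```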
  • 48. Edge Detection P S E U D O - E D G E R E M O V AL : Pseudo-edge removal is a step in image processing that aims to eliminate false or spurious edges, known as pseudo-edges, which may appear in the edge detection results. These false edges can be caused by various factors such as lighting conditions, shadows, or imaging artifacts. Pseudo-edges often arise due to issues like stellar aberration, parallax, stray light, misalignment, and diffraction. These factors can introduce unwanted features in the edge detection results. Pseudo-edge removal involves analyzing the gradient direction of edges in relation to the lighting direction. The angle between the gradient vector of a detected edge and the direction of illumination is considered. In the case of planetary imaging, this illumination direction may come from the Sun. 48
  • 49. Edge Detection P S E U D O - E D G E R E M O V A L : A criterion is established to distinguish between real and fake edges based on the angle between the gradient vector g of a planet's real edges and the sun lighting direction n. The condition checks whether the normalized dot product of the gradient vector and the lighting direction is negative: (g · n) / (|g| |n|) < 0. If the condition is satisfied, the edge is taken to be real; otherwise the detected edge may be a pseudo-edge. Pseudo-edges failing this angle constraint are eliminated or marked as non-contributory. This process helps ensure that only edges aligned with the lighting conditions and representing real features are retained in the final results. Removing pseudo-edges is crucial for accurate extraction of navigation information, such as the apparent diameter, the centroid, and other relevant parameters. 49
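A sketch of the angle criterion, following the sign convention stated above; the in-image Sun direction `sun_direction` and the gradient maps P and Q are assumed to be available from earlier steps.

```python
import numpy as np

def remove_pseudo_edges(edge_pixels, P, Q, sun_direction):
    """Keep edge pixels whose gradient direction is consistent with the Sun's lighting.

    edge_pixels: (N, 2) array of (i, j) coordinates of detected edge points.
    sun_direction: 2-D lighting direction n projected into the image plane.
    """
    n = np.asarray(sun_direction, dtype=float)
    n /= np.linalg.norm(n)
    keep = []
    for i, j in edge_pixels:
        g = np.array([P[i, j], Q[i, j]], dtype=float)   # gradient vector at the edge pixel
        norm = np.linalg.norm(g)
        if norm > 0 and np.dot(g, n) / norm < 0:        # (g . n) / (|g||n|) < 0 -> real edge
            keep.append((i, j))
    return np.array(keep)
```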
  • 50. Limb Profile Fitting Least Squares-Based Ellipse Fitting
  • 51. Limb Profile Fitting L E AS T S Q U AR E S - B A S E D E L L I P S E F I T T I N G : Limb Profile Fitting: Celestial bodies often appear as ellipses due to the perspective projection onto an image plane. Limb profile fitting is used to accurately determine the boundary (limb) of these bodies. This is crucial for extracting geometric information such as the size, orientation, and position of the celestial body within the image. Least-Squares-Based Ellipse Fitting: This technique is employed to find the best-fitting ellipse to a set of data points representing the limb profile. By minimizing the sum of squared differences between the observed limb profile and the modeled ellipse, it provides an accurate representation of the body's shape. The combination of limb profile fitting and least-squares-based ellipse fitting in autonomous optical navigation enables spacecraft to extract accurate geometric information from images of celestial bodies, contributing to precise navigation, shape analysis, and object recognition in planetary science missions. 51
  • 52. Limb Profile Fitting L E AS T S Q U AR E S - B A S E D E L L I P S E F I T T I N G : The objective is to fit an ellipse to a set of data points [xi,yi] obtained from edge detection: • Implicit Quadratic Equation for Conic Sections • Optimization Problem - Minimization of Residuals • Deriving First-Order Partial Derivatives • Linear Equations and Coefficient Matrix • Gaussian Elimination Method • Validity Check using Ellipse Inequality Constraint 52
  • 53. Limb Profile Fitting I M P L I C I T Q U A D R A T I C E Q U A T I O N F O R C O N I C S E C T I O N S : The general conic is written as F(xi, yi) = A·xi² + B·xi·yi + C·yi² + D·xi + E·yi + F = 0, a quadratic equation in the two variables xi and yi with coefficients A, B, C, D, E, F. An ellipse is formed if the coefficients satisfy the condition 4AC − B² > 0. A·xi²: the quadratic term in xi. B·xi·yi: the cross-term in xi and yi. C·yi²: the quadratic term in yi. D·xi: the linear term in xi. E·yi: the linear term in yi. F: the constant term. 53
  • 54. Limb Profile Fitting O P T I M I Z A T I O N P R O B L E M - M I N I M I Z A T I O N O F R E S I D U A L S : The objective function is the sum of squared residuals, S(a) = Σi=1..n [f(a, xi)]². S(a): the objective function to be minimized. a = [A, B, C, D, E, F]T: the vector of parameters that define an ellipse in the implicit quadratic equation. f(a, xi): the residual (error term) for the i-th data point (xi, yi), i.e. the value F(xi, yi) predicted by the current parameters; it would be exactly zero if the point lay on the modelled ellipse. 54
  • 55. Limb Profile Fitting O P T I M I Z A T I O N P R O B L E M - M I N I M I Z A T I O N O F R E S I D U A L S : The optimization problem is formulated as min over a of S(a) = Σi=1..n [f(a, xi)]². The goal of the least-squares-based ellipse fitting is to find the values of the parameters A, B, C, D, E, F that minimize the sum of squared residuals; in other words, to find the ellipse parameters that drive F(xi, yi) as close as possible to zero for the actual data points (xi, yi). 55
  • 56. Limb Profile Fitting D E R I V I N G F I R S T - O R D E R P A R T I A L D E R I V A T I V E S : To derive the first-order partial derivatives, S(a) is differentiated with respect to each parameter A, B, C, D, E, F, giving ∂S/∂A, ∂S/∂B, ∂S/∂C, ∂S/∂D, ∂S/∂E and ∂S/∂F. 56
  • 57. Limb Profile Fitting D E R I V I N G F I R S T - O R D E R P A R T I A L D E R I V A T I V E S : The parameters of the ellipse are determined where the objective function attains its minimum, i.e. where all six partial derivatives vanish. Using the chain rule, the first-order partial derivative with respect to A can be rewritten as ∂S/∂A = 2 Σi=1..n f(a, xi) · (∂f/∂A) = 2 Σi=1..n f(a, xi) · xi² = 0. 57
  • 58. Limb Profile Fitting D E R I V I N G F I R S T - O R D E R P A R T I A L D E R I V A T I V E S : Similarly, ∂S/∂B = 2 Σ f(a, xi) · xi yi = 0, ∂S/∂C = 2 Σ f(a, xi) · yi² = 0, ∂S/∂D = 2 Σ f(a, xi) · xi = 0, ∂S/∂E = 2 Σ f(a, xi) · yi = 0, and ∂S/∂F = 2 Σ f(a, xi) = 0. Repeating this process for each parameter yields the complete set of first-order partial derivatives. 58
  • 59. Limb Profile Fitting L I N E A R E Q U A T I O N S A N D C O E F F I C I E N T M A T R I X : After deriving the first-order partial derivatives, a system of linear equations is formed and solved for the parameters of the ellipse. The system can be written compactly with a coefficient matrix M, assembled from sums of monomials of the data points, acting on the parameter vector a = [A, B, C, D, E, F]T, where T denotes the transpose of a matrix. 59
  • 60. Limb Profile Fitting L I N E AR E Q U AT I O N S AN D C O E F F I C I E N T M AT R I X : Linear equations can be rewritten as follows: 60
  • 61. Limb Profile Fitting G AU S S I A N E L I M I N AT I O N M E T H O D 61
  • 62. Limb Profile Fitting V AL I D I T Y C H E C K U S I N G E L L I P S E I N E Q U AL I T Y C O N S T R AI N T 62
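A compact sketch of the whole least-squares fit. To avoid the trivial all-zero solution the conic is normalized here with F = −1 (one common choice, not necessarily the one used in the presentation); the system is then solved directly with numpy rather than by explicit Gaussian elimination, and the ellipse inequality 4AC − B² > 0 is checked at the end.

```python
import numpy as np

def fit_ellipse_least_squares(x, y):
    """Least-squares fit of A x^2 + B xy + C y^2 + D x + E y + F = 0 with F fixed to -1."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    design = np.column_stack([x**2, x * y, y**2, x, y])   # coefficient matrix of the system
    rhs = np.ones_like(x)                                 # moves F = -1 to the right-hand side
    coeffs, *_ = np.linalg.lstsq(design, rhs, rcond=None)
    A, B, C, D, E = coeffs
    if 4 * A * C - B**2 <= 0:                             # ellipse validity check
        raise ValueError("fitted conic is not an ellipse")
    return np.array([A, B, C, D, E, -1.0])
```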
  • 64. Limb Profile Fitting L E V E N B E R G - M AR Q U A R D T - B A S E D E L L I P S E F I T T I N G : Levenberg-Marquardt (LM) is an optimization algorithm used for solving nonlinear least squares problems. In the context of ellipse fitting, LM can be applied to iteratively refine the parameters of an ellipse model in order to minimize the difference between the observed data points and the predicted ellipse. As before, the goal is to optimize the parameters a of the model curve f(a,xi) so that the sum of the squares of the deviations becomes minimal: 64
  • 65. Limb Profile Fitting L E V E N B E R G - M A R Q U A R D T - B A S E D E L L I P S E F I T T I N G : To start the minimization, an initial guess for the parameter vector a = [A, B, C, D, E, F]T is required. In each iteration step, the parameter vector a is replaced by a new estimate a + δ. To determine δ, the functions f(a + δ, xi) are approximated by their linearization, f(a + δ, xi) ≈ f(a, xi) + Ji δ. f(a, xi): the residual value at the current estimate of the parameters a. Ji: the i-th row of the Jacobian, i.e. the partial derivatives of f(a, xi) with respect to the parameters, evaluated at the current estimate. 65
  • 66. Limb Profile Fitting L E V E N B E R G - M A R Q U A R D T - B A S E D E L L I P S E F I T T I N G : At the minimum of the sum of squares S(a), the gradient of S with respect to δ should be zero. The above first-order approximation of f(a + δ, xi) gives S(a + δ) ≈ Σi=1..n [f(a, xi) + Ji δ]². S(a + δ): the objective function at the updated parameter values a + δ. f(a, xi): the residual at the current parameter values a. Ji: the i-th row of the Jacobian, i.e. the partial derivatives of f with respect to the parameters, evaluated at the current estimate a. δ: the correction (perturbation) applied to the parameters. 66
  • 67. Limb Profile Fitting L E V E N B E R G - M A R Q U A R D T - B A S E D E L L I P S E F I T T I N G : or, in vector notation, S(a + δ) ≈ ‖f(a) + J δ‖². S(a + δ): the objective function at the updated parameter values a + δ. f(a): the vector of residuals at the current estimate of the parameters a. J: the Jacobian matrix, whose rows are the first-order partial derivatives of the residuals with respect to the parameters, evaluated at the current estimate a. δ: the correction (perturbation) applied to the parameters. 67
  • 68. Limb Profile Fitting L E V E N B E R G - M A R Q U A R D T - B A S E D E L L I P S E F I T T I N G : Taking the derivative of S(a + δ) with respect to δ and setting it to zero, 2 JT (f(a) + J δ) = 0, gives the normal equations (JT J) δ = −JT f(a), where J is the Jacobian matrix whose i-th row equals Ji, and f(a) is the vector with i-th component f(a, xi). 68
  • 69. Limb Profile Fitting L E V E N B E R G - M A R Q U A R D T - B A S E D E L L I P S E F I T T I N G : In order to control the convergence rate and accuracy, a damping factor λ is introduced, and the normal equations become (JT J + λ I) δ = −JT f(a). To avoid the defect of slow convergence along directions of small gradient, the identity matrix I is replaced with the diagonal matrix formed from the diagonal elements of JT J, giving (JT J + λ · diag(JT J)) δ = −JT f(a). 69
  • 70. Limb Profile Fitting L E V E N B E R G - M A R Q U A R D T - B A S E D E L L I P S E F I T T I N G : All that remains is to solve the above linear equations for δ and form the new estimate of the ellipse parameters, a + δ. By iterating this update, the optimal estimate a_final = [A, B, C, D, E, F]T is obtained. If these parameters satisfy the constraint condition 4AC − B² > 0, then a_final is accepted as the optimal set of ellipse parameters. A sketch of this iteration follows. 70
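A sketch of the damped iteration, using the same F = −1 normalization as the least-squares sketch above so that the minimum is non-trivial. Because this algebraic residual is linear in the parameters, the Jacobian is constant and the loop converges quickly; the sketch illustrates the damped normal equations rather than a production LM implementation. The initial guess a0 could come from the least-squares fit.

```python
import numpy as np

def fit_ellipse_lm(x, y, a0, n_iter=50, lam=1e-3):
    """LM refinement of a = [A, B, C, D, E] for the conic A x^2 + B xy + C y^2 + D x + E y - 1 = 0."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    a = np.asarray(a0, dtype=float)
    J = np.column_stack([x**2, x * y, y**2, x, y])        # Jacobian df/da (constant here)
    for _ in range(n_iter):
        f = J @ a - 1.0                                   # residual vector f(a)
        lhs = J.T @ J + lam * np.diag(np.diag(J.T @ J))   # (J^T J + lambda diag(J^T J))
        delta = np.linalg.solve(lhs, -J.T @ f)            # ... delta = -J^T f(a)
        if np.sum((J @ (a + delta) - 1.0) ** 2) < np.sum(f**2):
            a, lam = a + delta, lam * 0.5                 # step reduces S(a): accept, relax damping
        else:
            lam *= 2.0                                    # step rejected: increase damping
    A, B, C, D, E = a
    if 4 * A * C - B**2 <= 0:                             # ellipse validity check
        raise ValueError("refined conic is not an ellipse")
    return np.array([A, B, C, D, E, -1.0])
```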
  • 72. Centroid Extraction The goal is to extract the centroid of a celestial body, specifically a planet, from an image. This is achieved through edge detection followed by ellipse fitting; the centroid and the other geometric parameters are then recovered from the fitted conic coefficients. x0, y0: coordinates of the ellipse center, the point where both partial derivatives of the conic vanish, x0 = (BE − 2CD) / (4AC − B²), y0 = (BD − 2AE) / (4AC − B²). 72
  • 73. Centroid Extraction a: semi-major axis of the fitted ellipse. b: semi-minor axis of the fitted ellipse. 73
  • 74. Centroid Extraction ϕ: inclination angle from the x-axis to the ellipse major axis. One way to recover the center, semi-axes, and orientation from the conic coefficients is sketched below. 74
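A sketch of how the center, semi-axes, and orientation can be recovered from the fitted conic coefficients using standard conic geometry (center from the vanishing partial derivatives, axes from an eigendecomposition of the centred quadratic form); this is one conventional route, not necessarily the exact formulas shown on the original slides.

```python
import numpy as np

def ellipse_geometry(conic):
    """Return (x0, y0, a, b, phi) for the conic [A, B, C, D, E, F]."""
    A, B, C, D, E, F = conic
    # Centre: 2A x0 + B y0 = -D and B x0 + 2C y0 = -E
    x0, y0 = np.linalg.solve([[2 * A, B], [B, 2 * C]], [-D, -E])
    # Constant term of the conic translated to its centre
    F0 = A * x0**2 + B * x0 * y0 + C * y0**2 + D * x0 + E * y0 + F
    # Centred conic: [u v] Q [u v]^T = -F0, with Q the quadratic-form matrix
    Qm = np.array([[A, B / 2.0], [B / 2.0, C]])
    eigval, eigvec = np.linalg.eigh(Qm)                    # eigenvalues sorted ascending
    a = np.sqrt(-F0 / eigval[0])                           # semi-major axis
    b = np.sqrt(-F0 / eigval[1])                           # semi-minor axis
    phi = np.arctan2(eigvec[1, 0], eigvec[0, 0])           # major-axis direction vs. x-axis
    return x0, y0, a, b, phi
```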
  • 76. Simulation Results And Analysis The image processing algorithm developed for deep-space autonomous optical navigation is tested and analyzed using both real images from the MESSENGER mission and synthetic simulated images. 76
  • 77. Simulation Results And Analysis The figures show the results of the image processing algorithm applied to real images containing Mercury and Venus. The fitted ellipses (red and blue outlines) are superimposed on the raw images. Both ellipse fitting algorithms accurately acquire the limb profiles of Mercury and Venus. Pseudo-edge data points in the interior of the planet limb are eliminated by threshold segmentation before Canny edge detection. 77
  • 78–82. Simulation Results And Analysis (figure-only slides)
  • 83. Conclusion In conclusion, the development of robust Autonomous Optical Navigation (OPNAV) capabilities is imperative for the success of future deep-space exploration missions, whether robotic or manned. This study introduces novel image processing algorithms, specifically the Least Squares-based and Levenberg-Marquardt-based ellipse fitting methods, designed to meet the demanding requirements of autonomous OPNAV. Through comprehensive testing on both real flight images from the MESSENGER mission and simulated scenarios, the proposed algorithms demonstrate high accuracy: the maximum ellipse fitting error stays below three pixels, and the error of the line-of-sight vector to the object center remains less than 1.67×10⁻⁴ rad. Looking forward, the research suggests a future focus on an end-to-end assessment of the complete autonomous OPNAV algorithm, integrating the developed image processing techniques with a representative dynamics model and navigation filter algorithm. This holistic approach aims to further enhance the performance and reliability of autonomous space exploration systems, marking a significant stride in advancing our capabilities for deep-space exploration. 83
  • 84. Thank You Presenter Name: Ahmad AL-Zahran 84