Next-Generation Computer
Vision Methods for Automated
Navigation of Unmanned
Aircraft
Julie Buquet
Scientist, Imaging
Immervision
Unmanned Aircraft Systems: Applications &
Challenges
• Small flying devices that use vision algorithms on onboard cameras for
autonomous navigation.
• Broadly used in many applications: security, crop analysis, entertainment.
• This level of autonomy requires constant image quality across a wide range
of scenarios, which presents many challenges.
Integration
• Dimensions
• Weight
• Outdoor conditions
• Power consumption
Requirements
• Best image quality
• Extended FOV
© Immervision
Challenges of Drone Navigation: Overview
Navigation in a broad range of scenarios
Providing constant performance of computer vision algorithms
• How to maintain constant performance of corner detection in varying illumination?
• How to compensate for the increase in spatial and temporal noise in low-light?
• How do we maintain real-time analysis?
• How to choose a camera for optimized performance of off-the-shelf neural networks?
• Can we predict camera parameters’ influence on learning-based algorithms?
• What is the impact of a camera degradation during its lifespan?
• Can we optimize camera parameters to minimize this impact?
Kahaki, S.M.M.; Nordin, M.J.; Ashtari, A.H. Contour-Based Corner Detection and
Classification by Using Mean Projection Transform. Sensors 2014, 14, 4126-4143.
https://doi.org/10.3390/s140304126
Zhu, P.; Wen, L.; Bian, X.; Ling, H.; Hu, Q. (2018). Vision Meets Drones: A Challenge.
Use Case 1: Corner Detection
Under Varying Illumination Scenarios
Corner Detection for Drone Navigation
Obstacle avoidance:
The obstacle (tree branch) extremities can be detected
using corner detection.
Improved autonomous navigation, lower risk of collision.
Geometric pattern and QR code recognition:
We can identify a precise geometric pattern.
Such patterns are used as targets for:
• Precision landing.
• Increasing drone autonomy.
• Landing without GPS information.
Principle: Identify corners in an image using intensity variations to detect potential objects,
obstacles, and features of interest.
Kahaki, S.M.M.; Nordin, M.J.; Ashtari, A.H. Contour-Based Corner Detection and
Classification by Using Mean Projection Transform. Sensors 2014, 14, 4126-4143.
https://doi.org/10.3390/s140304126
Yang, T.; Ren, Q.; Zhang, F.; Xie, B.; Ren, H.; Li, J.; Zhang, Y. (2018). Hybrid camera
array-based UAV auto-landing on moving UGV in GPS-denied environment. Remote
Sensing, 10, 1829. doi: 10.3390/rs10111829.
Current Status
Problem Statement:
• Traditional navigation cameras perform well
in daylight.
• Challenging lighting conditions degrade pixel
quality and, in turn, CV algorithm accuracy.
Examples:
• The decreasing signal in low-light reduces
the amount of information available for
scene understanding.
• The increased noise level in low light can
bias the algorithm, which is then more
likely to produce false detections.
Solution:
• Camera modules and algorithms must be
customized for low-light conditions.
Corner detection for the same pattern under different levels of light
Existing Solutions: Optimize the Camera Sensor
Current sensor-optimization methods for better image quality in low light:
• Use a grayscale sensor to increase the SNR
• Increase pixel size
These methods, though promising, usually degrade image quality at higher
illumination levels.
Our approach: complement this sensor optimization with corner detector
optimization, to provide constant corner detection across a broad range of
illumination.
Existing Solution: Traditional Corner Detectors
• Harris detector: corner detection using the image gradient around a point (x, y)

if R = det(M) − k·(tr M)² > threshold → corner

where M is the structure tensor of local image gradients:
• No gradient: flat region
• Strong gradient in one direction: line
• Strong gradient in two directions: corner
Threshold is customized depending on the camera
and the outdoor conditions.
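To make the response formula concrete, here is a minimal NumPy sketch of the Harris response. It is only an illustration: a box window stands in for the usual Gaussian weighting, and k = 0.04 and the 10%-of-max threshold are illustrative values, not the tuned, camera-dependent threshold discussed above.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Per-pixel Harris response R = det(M) - k*(tr M)^2."""
    # Image gradients via central differences
    Iy, Ix = np.gradient(img.astype(float))
    # Products of gradients (entries of the structure tensor M)
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def smooth(a, r=2):
        # Box-filter window (stand-in for the usual Gaussian weighting)
        out = np.zeros_like(a)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out / (2 * r + 1) ** 2

    Sxx, Syy, Sxy = smooth(Ixx), smooth(Iyy), smooth(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    tr = Sxx + Syy
    return det - k * tr * tr

# A white square on a black background: strong responses at its 4 corners
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
R = harris_response(img)
corners = np.argwhere(R > 0.1 * R.max())  # illustrative threshold
```

Along an edge only one gradient direction is strong, so det(M) is near zero and R is small or negative; only at corners do both eigenvalues of M become large and R exceed the threshold.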
Limitations
• Detection depends strongly on illumination conditions
• In low light, the intensity of a corner is similar to the noise level: false
detections
Add feature clustering to create a more robust detection
(elimination of spatial noise).
Our Method: Spatial Noise Reduction Using Feature Clustering
Feature clustering:
Each detected corner is associated with one feature of interest.
We identify each cluster by computing its barycenter.
Results:
• Outliers (clusters with too few pixels) are rejected
• Spatial noise is reduced
• All corners are detected at 0.5 lux
We obtain the same corner detection in extremely low light (0.5 lux)
as in daylight.
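The clustering idea can be sketched as follows. This is a simple distance-based grouping with illustrative radius and minimum-size parameters; the actual implementation behind the slides is not public, so treat this as an assumption-laden sketch of the principle (group detections, drop tiny clusters as noise, keep one barycenter per feature).

```python
import numpy as np

def cluster_corners(points, radius=5.0, min_size=3):
    """Group detected corner points into clusters, reject clusters with
    too few points (spatial noise), and return one barycenter per cluster."""
    points = np.asarray(points, dtype=float)
    unassigned = set(range(len(points)))
    barycenters = []
    while unassigned:
        seed = unassigned.pop()
        cluster = [seed]
        frontier = [seed]
        # Grow the cluster by repeatedly absorbing nearby points
        while frontier:
            i = frontier.pop()
            near = [j for j in list(unassigned)
                    if np.linalg.norm(points[i] - points[j]) <= radius]
            for j in near:
                unassigned.remove(j)
                cluster.append(j)
                frontier.append(j)
        if len(cluster) >= min_size:        # outlier rejection
            barycenters.append(points[cluster].mean(axis=0))
    return np.array(barycenters)

# Two dense corner clusters plus one isolated noise detection
pts = [(10, 10), (11, 10), (10, 11),       # cluster A
       (40, 40), (41, 41), (40, 41),       # cluster B
       (70, 15)]                           # lone noise point -> rejected
centers = cluster_corners(pts)
```

The lone detection is discarded as spatial noise, and each surviving cluster is summarized by its barycenter, which is the corner position used downstream.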
Results after Temporal Noise Reduction
Images at 0.5 lux, 3 lux and 15 lux: all levels of illumination provide accurate
corner detection; 100% of corners are detected.
Limitations: Residual Temporal Noise
There is still residual temporal noise, which induces variation in the estimated corner
position:
• Applying noise reduction to low-light images might cause a loss of information.
• Averaging corner detections over several frames would rule out applications requiring real-time operation.
We evaluate the impact of temporal noise on each frame against the ground truth
(average position estimated from 20 frames).
Frame n vs. frame n+5:
True positive: corners detected in both the ground truth and the current frame
False negative: corners identified only in the ground truth
False positive: corners identified only in the current frame
Evaluation of the Residual Temporal Noise
Corresponding displacement of the cluster center:
[Charts: detection score and barycenter pixel displacement vs. illuminance (lux);
the displacement stays within 1 pixel, with scores between 77.5% and 95%.]
In our case, such metrics can be interpreted as a confidence score
for the currently detected corners.
P = TP / (TP + FP), R = TP / (TP + FN), F1 = 2·R·P / (R + P)
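The per-frame scoring against the averaged ground truth can be sketched as below. The greedy nearest-neighbour matching and the 1-pixel tolerance are assumptions for illustration; only the P/R/F1 definitions come from the slide.

```python
def corner_scores(gt, detected, tol=1.0):
    """Match detected corners to ground-truth corners within `tol` pixels,
    then compute precision, recall and F1 as defined on the slide."""
    unmatched_gt = [tuple(p) for p in gt]
    tp = 0
    for (x, y) in detected:
        # Greedy nearest-neighbour matching within the tolerance
        best = None
        for g in unmatched_gt:
            d = ((x - g[0]) ** 2 + (y - g[1]) ** 2) ** 0.5
            if d <= tol and (best is None or d < best[0]):
                best = (d, g)
        if best:
            unmatched_gt.remove(best[1])
            tp += 1
    fp = len(detected) - tp          # detections with no ground-truth match
    fn = len(unmatched_gt)           # ground-truth corners never detected
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# 3 ground-truth corners; the detector finds 2 of them plus 1 false positive
p, r, f1 = corner_scores([(0, 0), (10, 0), (0, 10)],
                         [(0.4, 0.2), (10, 0.5), (30, 30)])
```

Run per frame, these scores behave exactly like the confidence measure described above: a frame whose detections sit within a pixel of the 20-frame average scores near 1.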
Conclusions on Low-Light Illumination Corner Detection
• Corner detection algorithms can be adapted to a broad range of illumination by using spatial filtering
methods such as feature clustering.
• This also reduces temporal noise, which allows consistent corner location across time.
• Each frame can provide accurate corner detection independently, which allows the algorithm to run
in real time.
An algorithm optimized for constant performance across illumination levels becomes a powerful
KPI for camera evaluation in the context of machine vision.
Use Case 2: Object Detection With
Varying Blur from Camera Defocus
2D Object Detection and Identification
Goals:
• Detect an object and label it as one of many available object classes.
• Provide better scene understanding to improve decision making.
Current solution:
• YOLOv4 pretrained on MS COCO: a real-time, lightweight architecture.
• Broadly used in automotive: a good candidate for drone navigation.
2D Object Detection and Identification
• Image quality is not constant when a camera is exposed to outdoor conditions (motion, temperature shifts, etc.).
• What is the impact of image quality degradation on neural network performance?
• Is it possible to optimize a camera to minimize this impact?
Our Camera Simulation Algorithm
[Diagram: lens elements L1–L3 and sensor, shown for an in-focus system and for
camera simulations with different degradations (front focus, strong rear focus).
Pipeline: aberration-free image → resulting degraded image → performance on an
off-the-shelf neural network.]
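The degradation stage can be illustrated with a toy defocus model that convolves an aberration-free image with a uniform disk PSF. This is a sketch under simplifying assumptions (disk PSF, circular boundary handling); the simulator in the talk models the actual lens design, not just a disk.

```python
import numpy as np

def defocus_blur(img, radius):
    """Toy camera degradation: convolve with a uniform disk PSF of the
    given radius (in pixels), approximating defocus blur."""
    r = int(np.ceil(radius))
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    disk = (xx ** 2 + yy ** 2 <= radius ** 2).astype(float)
    disk /= disk.sum()                      # normalize: preserves brightness
    # Embed the PSF in a full-size kernel centered at the origin
    kern = np.zeros(img.shape)
    kern[:2 * r + 1, :2 * r + 1] = disk
    kern = np.roll(kern, (-r, -r), axis=(0, 1))
    # Frequency-domain (circular) convolution
    return np.fft.irfft2(np.fft.rfft2(img) * np.fft.rfft2(kern), img.shape)

# Aberration-free test image: a bright square on a dark background
img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0
blurred = defocus_blur(img, 3.0)  # resulting degraded image
```

Generating one dataset per defocus value, as in the experiment that follows, amounts to sweeping `radius` (or a lens-derived equivalent) and re-running the detector on each blurred set.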
Experimental Procedure
Protocol:
• PixSet dataset: 29,000 images taken on Canadian roads (urban environment) via
a 180° road-facing camera placed on the rear-view mirror.
• We generate 21 datasets, each corresponding to the same automotive camera
with a different amount of defocus.
Metrics:
• Precision: how many of the identified objects are correct
• Recall: how many of the objects to detect are found (sensitivity)
• F1: combination of both (global evaluation)
P = TP / (TP + FP), R = TP / (TP + FN), F1 = 2·R·P / (R + P)
Confusion matrix: true positives (TP) and false positives (FP) on the
predicted-positive row; false negatives (FN) and true negatives (TN) on the
predicted-negative row.
Results
• Stationary results below 5 µm of defocus (only a
2.5% drop in F1 score)
• Slow drop until reaching 5% at 15.38 µm
At λ = 550 nm:
A 2% drop is observed for a defocus of 5.84 µm,
corresponding to a temperature shift of 23.19 °C.
[Chart: relative improvement (%) vs. defocus in µm]
Results: Zoom-in
Defocus increases from upper left to bottom right (in µm: 0, 5.81, 14.64, 22.12).
Large defocus reduces the accuracy of the object detection algorithm.
Optimization of the Camera
• Stationary results with a relative drop of 0.6% up to 6 µm
• Linear decrease down to −2% (still acceptable) at 10 µm
• The evolution with defocus is the same as before, but the
amplitude of the drop is much smaller
At λ = 550 nm:
A 2% drop is observed for a defocus of 10.21 µm,
corresponding to a temperature shift of 43.67 °C (an altitude
variation of roughly 6.5 km).
We can limit the impact of defocus on 2D object detection by optimizing camera parameters.
Here, a smaller f-number is beneficial due to the small pixel size (1 µm); this might differ for another
camera module with different initial optical parameters.
[Chart: relative improvement (%) vs. defocus in µm]
Conclusions on Object Detection with Image Quality Degradation
• Simulating image quality degradation is useful to predict its impact on the performance of learning-based
algorithms.
• By integrating the camera into the simulation, it is possible to predict its performance over its lifespan.
We can optimize optical parameters to limit the impact of image quality degradation on vision
algorithms.
(This is a case study; the precise resolution required and the tolerated temperature shift must be evaluated for each use case.)
Takeaways
• Traditional computer vision can be adapted to drone
navigation by using spatial noise filtering for improved
performance in low light.
• Simulating a camera and its image quality degradation
over its lifespan helps optimize optical parameters for
improved performance of learning-based algorithms.
• Optimized algorithms can be used as a KPI in the context of
machine vision.
Resources
2023 Embedded Vision Summit
Visit our booth 711!
Immervision website: https://www.immervision.com/
Previous publications:
X. Dallaire, J. Buquet, P. Roulet, J. Parent, P. Konen, J.-F. Lalonde, S. Thibault,
"Enhancing learning-based computer vision algorithms accuracy in sUAS using
navigation wide-angle cameras," Proc. SPIE 11870, Artificial Intelligence and
Machine Learning in Defense Applications III, 1187009 (12 September 2021);
https://doi.org/10.1117/12.2600197
J. Buquet, S.-G. Beauvais, J. Parent, P. Roulet, S. Thibault, "Next-generation of
sUAS 360 surround vision cameras designed for automated navigation in low-light
conditions," Proc. SPIE 12274, Emerging Imaging and Sensing Technologies for
Security and Defence VII, 122740L (7 December 2022);
https://doi.org/10.1117/12.2639024
A. Bochkovskiy, C.-Y. Wang, H.-Y. M. Liao, "YOLOv4: Optimal Speed and Accuracy
of Object Detection," 2020.
J.-L. Déziel, P. Merriaux, F. Tremblay, D. Lessard, D. Plourde, J. Stanguennec,
P. Goulet, P. Olivier, "PixSet: An Opportunity for 3D Computer Vision to Go Beyond
Point Clouds With a Full-Waveform LiDAR Dataset," 2021.
2020 Robert-Bourassa Blvd., Suite 2320
Montreal, Quebec
H3A 2A5 Canada
+1 (514) 985-4007
www.immervision.com
THANK YOU
Blur induced by a camera defocus
[Diagrams: lens and camera sensor for an in-focus system, a front-focus system
(negative defocus) and a rear-focus system (positive defocus).]
Defocus due to temperature shift
At λ = 550 nm, a 2% drop in performance corresponds to:
f/# = 4: 5.81 µm of defocus / 23.19 °C
f/# = 2: 10.94 µm of defocus / 43.67 °C
Δf = δ_G · f · ΔT
ΔL = β · f · ΔT
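As a worked example of the Δf = δ_G · f · ΔT relation, the sketch below converts a temperature shift into a defocus in micrometers. The thermo-optic coefficient δ_G and the focal length are assumed, illustrative values, not the actual optics behind these slides.

```python
def thermal_defocus_um(delta_T_C, focal_mm, delta_G_per_C=1e-5):
    """Defocus (in µm) from a temperature shift, using the slide's model
    Δf = δ_G · f · ΔT. δ_G and focal_mm are illustrative assumptions."""
    return delta_G_per_C * focal_mm * 1e3 * delta_T_C  # mm -> µm

# A 23.19 °C shift on an assumed 25 mm lens with δ_G = 1e-5 /°C
shift = thermal_defocus_um(23.19, focal_mm=25.0)  # ≈ 5.8 µm with these assumed values
```

With these assumed coefficients the result lands near the 5.84 µm / 23.19 °C operating point quoted for f/# = 4, which is the kind of budget calculation the tolerancing above relies on.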