Introduction to Modern
LiDAR for Machine
Perception
Robert Laganière
CEO / Professor
Sensor Cortek inc. / University of Ottawa
Canada
• LiDAR fundamentals
• LiDAR principles
• LiDAR taxonomy
• LiDAR processing (AI)
Presentation outline
• The name LiDAR (originally CoLiDAR) derives from the RADAR acronym
• LiDAR refers to the technology that uses a laser to sense the
environment
• Same fundamental principle as radar (or sonar)
• Emitting a signal and analyzing the bounced back signal
(the echo)
LiDAR: Light Detection and Ranging
Main benefit of using a coherent, collimated light beam (i.e., a laser)
[Figure: beam spread of a radar signal vs. a collimated laser beam]
• The laser light used by LiDAR has two interesting properties:
• It is coherent: all emitted light rays have the same
frequency and phase
• It is collimated: the beam of light has parallel rays and spreads minimally
LiDAR sequence
Corresponding camera sequence
Did you perceive the same things?
• It produces direct 3D information
• It provides accurate 3D measurements
• From under a mm to a few cm, depending on the distance
• It’s an active sensor that operates day and night
• It can capture information at long range
• ~200 m
• It has a large FoV
• even 360°
What makes LiDAR an attractive sensor?
• It captures shape but not appearance
• It produces sparse data
• Sometimes only a few points on an object
• It becomes noisy under fog, snow and rain
• It is still an expensive sensor
• Some cost thousands of dollars
• While radars and cameras can cost less than $100
• It often includes mechanical parts
However, LiDAR is not perfect
• A collection of sparse 3D points
• A LiDAR frame
• Points are not all captured at exactly the same time…
The LiDAR point cloud
• 1900s: Planck, Einstein and others “discovered” the photon
• May 16th 1960: first laser light produced (T. Maiman)
• A laser is a device that generates an intense beam
of coherent monochromatic light
• Light Amplification by Stimulated Emission of Radiation
• Laser differs from other light sources because it emits coherent
collimated light
• Laser light is very narrow, making it possible to see the smallest
details with high resolution at relatively long distances.
History of Laser
• 1960s: LiDAR initially developed for metrology and atmospheric
research
• But the idea of probing the atmosphere with light can be traced back to the 1930s
• 1970s: LiDAR used for terrain mapping
• Apollo missions used lasers to accurately measure the Earth-Moon distance
• 1980s: with the advent of GPS and inertial measurement units
(IMUs), LiDAR became very accurate
• 2005: the first AV to complete the DARPA Grand Challenge (132-mile desert course) was equipped with a LiDAR
History of LiDAR
Mt St-Helens
Wikimedia commons
1. The LiDAR emits pulsed light waves
2. The light potentially hits a surrounding object and bounces back
to the sensor
3. The sensor reads the bounced signal, estimates the time it took
to return to the LiDAR and measures the reflected light energy
• Simple technology; almost instantaneous!
• Potential interference from the sun and other LiDARs
How does LiDAR work? Pulsed LiDAR
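To make the time-of-flight arithmetic above concrete, here is a minimal sketch (not any vendor's API; the pulse timing value is invented for illustration):

```python
# Sketch: range from a pulse's round-trip time, distance = (c * t) / 2
C = 299_792_458.0  # speed of light (m/s)

def range_from_tof(tof_seconds: float) -> float:
    """Distance to the reflecting surface for a measured round-trip time."""
    return C * tof_seconds / 2.0

# An echo arriving 667 ns after emission puts the target roughly 100 m away.
print(f"{range_from_tof(667e-9):.1f} m")  # ~100.0 m
```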
How does LiDAR work?
[Diagram: 1. emission, 2. reflection off an object, 3. reception; note the blind region close to the sensor]
This configuration is called bistatic (the most common and cheapest)
A monostatic optical system aligns the Tx and Rx for better detection
1. LiDAR can also use continuous waves
• Frequency Modulated Continuous Wave
2. The phase of the bounced back signal will differ from the emitted signal
3. The change in phase is used to extract the distance information
• This is done by mixing the emitted and received signals (as done in
radar)
• Velocity is a bonus!
• Virtually no interference
• You must read a longer signal (stay longer at each point)
How does LiDAR work? FMCW Lidar
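As a rough illustration of the mixing step described above, the sketch below recovers range and radial velocity from the beat frequencies of a triangular up/down chirp. All chirp parameters are invented assumptions; a real FMCW unit performs the mixing optically and its sign conventions may differ.

```python
# Sketch: range and radial velocity from the beat frequencies of a
# triangular (up/down) FMCW chirp. Parameters are illustrative assumptions.
C = 299_792_458.0         # speed of light (m/s)
WAVELENGTH = 1550e-9      # assumed optical wavelength (m)
CHIRP_BANDWIDTH = 1e9     # assumed frequency sweep per chirp (Hz)
CHIRP_DURATION = 10e-6    # assumed chirp duration (s)
SLOPE = CHIRP_BANDWIDTH / CHIRP_DURATION  # Hz per second

def range_and_velocity(f_beat_up: float, f_beat_down: float):
    """Split the beat frequencies into range and Doppler components.

    With a triangular chirp, f_up = f_range - f_doppler and
    f_down = f_range + f_doppler (sign conventions vary between texts).
    """
    f_range = (f_beat_up + f_beat_down) / 2.0
    f_doppler = (f_beat_down - f_beat_up) / 2.0
    distance = C * f_range / (2.0 * SLOPE)   # f_range = slope * (2R / c)
    velocity = WAVELENGTH * f_doppler / 2.0  # f_doppler = 2v / wavelength
    return distance, velocity

d, v = range_and_velocity(66.0e6, 67.4e6)
print(f"range ~ {d:.1f} m, radial velocity ~ {v:.2f} m/s")  # ~100 m, ~0.54 m/s
```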
How does LiDAR work? FMCW Lidar
www.bridgerphotonics.com/blog/frequency-
modulated-continuous-wave-fmcw-lidar
• LiDAR physics is governed by one simple equation
Distance of the object = (Speed of Light x Time of Flight) / 2
• But to be able to read the received light, you need power
(i.e., enough photons bouncing back)
Power received ≈ Power transmitted x (Cross Section / Distance²) x (Optic Area / Distance²)
• The laser cross section is the average amount of optical
power returned by the target
LiDAR mathematics
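The power relation can be turned into a quick back-of-envelope check. This sketch drops all constants of proportionality, so only the scaling with distance is meaningful; the input values are arbitrary assumptions.

```python
# Sketch of the relation above: P_rx ≈ P_tx * (cross_section / R^2) * (optic_area / R^2)
# Proportionality constants are dropped; values below are arbitrary assumptions.

def received_power(p_tx: float, cross_section: float,
                   optic_area: float, distance: float) -> float:
    return p_tx * (cross_section / distance**2) * (optic_area / distance**2)

# Doubling the distance divides the echo power by 2**4 = 16:
near = received_power(1.0, 0.5, 1e-3, 50.0)
far = received_power(1.0, 0.5, 1e-3, 100.0)
print(f"near/far power ratio = {near / far:.0f}")  # 16
```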
LiDAR sensor taxonomy
LiDAR
├─ Scanning LiDAR
│  ├─ Mechanical Scanners
│  │  ├─ Rotating mirrors
│  │  ├─ Rotating prisms
│  │  └─ MEMS mirrors
│  └─ Electronic Scanners
│     └─ Optical Phased Array
└─ Imaging LiDAR
   └─ Flash LiDAR
(MEMS mirrors, Optical Phased Arrays and Flash LiDAR form the quasi/fully solid-state branch)
• Three strategies:
• Scanning LiDAR
• A laser scans the scene and a single photodetector is used to read
the returned photons
• Flash LiDAR
• The entire field of view is illuminated and a photodetector array
captures the received photons
• Optical Phased Arrays
• Several transmitters emitting laser light at different phases enabling
the steering of the beam (constructive/destructive interference)
Capturing a scene with LiDAR
• Shorter range – higher frame rate
• Costly focal plane array
• Limited FoV
• Light is distributed across the FoV – noisier returns
• Pixels are small – more power required
• Angular resolution determined by the pixel density
• No motion distortion
• No moving parts
Flash LiDAR
• Longer range – lower frame rate
• Expensive scanning mechanism
• e.g., spinning mirrors, MEMS mirrors
• or rotate everything
• Can be bulky
• LiDAR motion must be compensated
• Less tolerant to mechanical vibrations
Scanning LiDAR
• Today’s most popular solution
• Heavier than other solutions
• Vulnerable to vibrations
• Generally includes stack of
photodetectors to scan in
several horizontal layers
Scanning with rotating mirrors
(or rotate everything)
precisionlaserscanning.com/2017/12/mems-mirrors-vs-polygon-
scanners-for-lidar-in-autonomous-vehicles/
• Uses two (or more) sequential prisms
to steer the beam
• The shape of the prisms and the speed of rotation determine the scan pattern
• Limited FoV
Scanning with rotating prisms
Lidars for vehicles: from the requirements to the
technical evaluation, Z. Dai et al., Conference:
9th International Forum on Automotive Lighting,
2021
• Micro-Electro-Mechanical System
• Quasi solid-state
• Programmable scan patterns
• Limited FoV
• Requires careful calibration
Scanning with MEMS mirrors
preciseley.com/product/mems-scanning-mirror/
• Lower resolution – high frame rate
• Solid state
• No moving parts
• Smart zooming capability
• Interference from the antenna lobes
limits the angular resolution
• Complex design
• Lower production cost
• Probably the solution of the future
Optical phased array
MEMS Mirrors for LiDAR: A Review,
D. Wang, C. Watkins, H. Xie. Micromachines 2020, 11(5)
                Rotating mirrors   Rotating prisms   MEMS      OPA        Flash
Range           Long               Long              Long      Medium     Low
Frame rate      Low                Low               Low       High       High
FoV             Large              Limited           Limited   Limited    Limited
Resolution      High               High              High      Adaptive   Low
Power           High               High              Low       Low        High
Solid state     No                 No                Quasi     Yes        Yes
Vulnerability   High               High              High      Low        Low
Complexity      Low                Low               High      High       Low
Cost            High               High              High      Low        High
Lidar technologies
• LiDAR operates in the near IR
spectrum
• 780 nm to 3000 nm
• Be careful about eye safety!
• Typical field of view:
• 90°, 180° or 360°
• Depth resolution determined by the
temporal sampling frequency
• ∆D = c / 2f
• e.g., a depth resolution of 1 cm requires a 15 GHz sampling rate
Some LiDAR specs
• Angular resolution determined by the
scanning point rate
• e.g., 0.1° corresponds to 18 cm at 100 m
• Pulse width
• e.g., a 2 ns pulse corresponds to a range resolution of 30 cm
• Pulse frequency determines the
number of points per second per layer
• e.g., 40 kHz
Specs are usually given for 80%
Lambertian reflectivity
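Both rules of thumb on this slide follow from one-line formulas; a small sketch (the example values match the slide):

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def sampling_rate_for_depth_resolution(delta_d_m: float) -> float:
    """Invert dD = c / (2 f): the sampling rate needed for a depth resolution."""
    return C / (2.0 * delta_d_m)

def lateral_footprint(angle_deg: float, range_m: float) -> float:
    """Arc length spanned by one angular step at a given range."""
    return math.radians(angle_deg) * range_m

print(f"{sampling_rate_for_depth_resolution(0.01) / 1e9:.0f} GHz")  # 15 GHz for 1 cm
print(f"{lateral_footprint(0.1, 100.0) * 100:.0f} cm")              # ~17-18 cm at 100 m
```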
• When the vehicle moves, the LiDAR is scanning a moving scene
• Which will distort the point cloud
• This distortion is proportional to the vehicle’s speed and
inversely proportional to laser scanning rate
• Solution: use an inertial measurement unit to compensate for the sensor motion
• IMUs provide acceleration and angular velocity
• Assumption: the ego vehicle motion has constant angular and
linear velocities
LiDAR motion compensation using IMU
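A hedged sketch of the de-skewing idea under the constant-velocity assumption above. Only the linear part of the ego motion is corrected here; a real pipeline (e.g., the MathWorks example on the next slide) also applies the IMU's rotation. All values are illustrative.

```python
import numpy as np

def deskew_constant_velocity(points: np.ndarray, timestamps: np.ndarray,
                             linear_velocity: np.ndarray, t_ref: float) -> np.ndarray:
    """Express every point in the sensor pose at t_ref, assuming constant velocity.

    points:          (N, 3) xyz, each in the sensor frame at its own capture time
    timestamps:      (N,) capture time of each point (s)
    linear_velocity: (3,) ego velocity, assumed constant over the sweep (m/s)
    """
    dt = (t_ref - timestamps)[:, None]           # time the sensor kept moving (N, 1)
    return points - dt * linear_velocity[None]   # undo the ego translation

# Toy example: a static wall 50 m ahead, scanned over 0.1 s at 20 m/s.
pts = np.zeros((10, 3)); pts[:, 0] = 50.0
ts = np.linspace(0.0, 0.1, 10)
fixed = deskew_constant_velocity(pts, ts, np.array([20.0, 0.0, 0.0]), t_ref=0.1)
print(fixed[:, 0])  # early points pulled closer: 48.0 ... 50.0 at frame end
```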
LiDAR motion compensation using IMU
https://www.mathworks.com/help/lidar/ug/m
otion-compensation-in-lidar-point-cloud.html
[Figures: point cloud before motion compensation vs. after motion compensation]
• A LiDAR sensor produces a frame of 3D points
• A point cloud
• This point cloud has to be processed and analyzed in order to
interpret the scene
• e.g., detect objects on the road
• But LiDAR data is sparse and unstructured
• Signal processing (convolution) prefers regular grid sampling
• LiDAR needs to be preprocessed to build a more suitable
representation
Processing LiDAR data for detection
• Possible representations:
• Point sets
• Voxelization
• Bird’s eye view
• Point pillars encoding
• Frontal image generation
• Sparse convolution
LiDAR representation strategies
• Point sets
• Can we work directly on the point cloud?
• Point-based representation
• Voxelization
• Bird’s eye view
• Point pillars encoding
• Frontal image generation
• Sparse convolution
LiDAR representations
• The idea is to consume an unordered set of 3D points
• Transformations are learned to normalize the data
• Global point features are learned from the set
• Point cloud has to be segmented into small regions of
interest
• More difficult to apply in a complex scene composed of
many objects
Point-based representations
C. R. Qi, H. Su, K. Mo, and L. J. Guibas. Pointnet: Deep
learning on point sets for 3d classification and segmentation.
Proc. Computer Vision and Pattern Recognition (CVPR) 2017
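A minimal NumPy sketch of the PointNet idea: the same MLP is applied to every point and a symmetric max-pool produces a global feature, which makes the output invariant to point ordering. Random weights stand in for learned ones, and the learned input/feature transforms of the real network are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_mlp(points: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Apply the same two-layer ReLU MLP to every point independently."""
    hidden = np.maximum(points @ w1, 0.0)   # (N, 64)
    return np.maximum(hidden @ w2, 0.0)     # (N, 256)

# Random weights stand in for learned ones in this sketch.
w1, w2 = rng.normal(size=(3, 64)), rng.normal(size=(64, 256))

cloud = rng.normal(size=(1024, 3))               # an unordered set of 3D points
feature = shared_mlp(cloud, w1, w2).max(axis=0)  # symmetric max-pool -> (256,)

shuffled = cloud[rng.permutation(len(cloud))]
feature2 = shared_mlp(shuffled, w1, w2).max(axis=0)
print(np.allclose(feature, feature2))  # True: invariant to point ordering
```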
Point-based representations
www.mdpi.com/1999-4907/12/2/131/htm
• The idea is to consume an unordered set of 3D points
• Point sets
• Voxelization
• e.g., occupancy grid
• Bird’s eye view
• Point pillars encoding
• Frontal image generation
• Sparse convolution
LiDAR representations
Attribute Filtering of Urban Point Clouds Using Max-Tree on Voxel Data, F.
Guillotte, Mathematical Morphology and Its Applications to Signal and Image
Processing, 2019.
• A 3D voxel grid is created in which
each voxel contains:
• A scalar value
• A vector made of statistics
computed from the points inside
the voxel
• Mean, variance, reflectance, …
• By nature, the occupied voxels are
very sparse
• 3D convolution expensive and
inefficient
Voxelization
Wang, D.Z.; Posner, I. Voting for voting in online point cloud
object detection. In Proceedings of the Robotics: Science and
Systems, Rome, Italy, 13–17 July 2015
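A sketch of the per-voxel statistics described above, using a dictionary keyed by voxel index so that only occupied voxels are stored; the 0.2 m voxel size and the chosen statistics are illustrative assumptions.

```python
import numpy as np
from collections import defaultdict

def voxel_statistics(points: np.ndarray, voxel_size: float = 0.2) -> dict:
    """Group (x, y, z, reflectance) points into voxels and compute statistics.

    Returns {voxel index: [mean xyz, variance xyz, mean reflectance]}.
    Only occupied voxels get an entry, which is where the sparsity shows.
    """
    buckets = defaultdict(list)
    for p in points:
        key = tuple(np.floor(p[:3] / voxel_size).astype(int))
        buckets[key].append(p)
    features = {}
    for key, pts in buckets.items():
        pts = np.asarray(pts)
        features[key] = np.concatenate(
            [pts[:, :3].mean(axis=0), pts[:, :3].var(axis=0), [pts[:, 3].mean()]])
    return features

cloud = np.random.default_rng(1).uniform(0.0, 10.0, size=(5000, 4))
voxels = voxel_statistics(cloud)
print(len(voxels), "occupied voxels out of", 50 ** 3, "in the grid")
```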
• Voxel feature encoding (VFE) is used
• from randomly sampled points in
each voxel
• using point coordinates and
reflectance
• and a fully connected network
transformation
• 3D convolution is applied on the VFE
Voxelization: VoxelNet
Y. Zhou and O. Tuzel. Voxelnet: End-to-end learning for
point cloud based 3d object detection. In CVPR, 2018
[Pipeline diagram: VFE learning → convolution layers → detection network]
Mean Average Precision (%)
Easy Medium Hard
VoxelNet 89.35 79.26 77.39
Vehicle Detection results – KITTI Dataset
• Point sets
• Voxelization
• Bird’s eye view
• Point pillars encoding
• Frontal image generation
• Sparse convolution
LiDAR representations
• Computationally efficient
• Preserve the metric space for objects
on the road
• The representation produces an
image
• Height becomes a channel
• Image detection network can be
used
• Objects are not occluded
Bird’s eye view network : Pixor
Yang, B., Luo, W., Urtasun, R.: PIXOR: real-time 3D
object detection from point clouds. CVPR (2018)
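A sketch of the BEV encoding such detectors consume: points are binned into a 2D grid and height becomes the channel dimension. The grid extents, 0.1 m resolution and 35 height slices are assumptions for illustration, not PIXOR's exact input definition.

```python
import numpy as np

def points_to_bev(points: np.ndarray, x_range=(0.0, 70.0), y_range=(-40.0, 40.0),
                  z_range=(-2.5, 1.0), resolution=0.1, n_slices=35) -> np.ndarray:
    """Rasterize (x, y, z) points into an occupancy image of shape (H, W, C),
    where every channel is one horizontal height slice."""
    w = int((x_range[1] - x_range[0]) / resolution)
    h = int((y_range[1] - y_range[0]) / resolution)
    dz = (z_range[1] - z_range[0]) / n_slices
    bev = np.zeros((h, w, n_slices), dtype=np.float32)
    xi = ((points[:, 0] - x_range[0]) / resolution).astype(int)
    yi = ((points[:, 1] - y_range[0]) / resolution).astype(int)
    zi = ((points[:, 2] - z_range[0]) / dz).astype(int)
    keep = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h) & (zi >= 0) & (zi < n_slices)
    bev[yi[keep], xi[keep], zi[keep]] = 1.0  # mark occupied cells
    return bev

cloud = np.random.default_rng(2).uniform([-5, -50, -3], [80, 50, 2], size=(10000, 3))
print(points_to_bev(cloud).shape)  # (800, 700, 35): an image a 2D CNN can consume
```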
Vehicle Detection results – KITTI Dataset
Mean Average Precision (%)
Easy Medium Hard
VoxelNet 89.35 79.26 77.39
Pixor 81.7 77.05 72.95
• Voxelization
• Bird’s eye view
• Pillars encoding
• Each pillar encodes point
distance to centroid and
reflectance
• Simplified PointNet is used
• Frontal image generation
• Sparse convolution
LiDAR representations
becominghuman.ai/pointpillars-3d-point-clouds-bounding-box-
detection-and-tracking-pointnet-pointnet-lasernet-67e26116de5a
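To make the pillar encoding concrete, a sketch that groups points into vertical xy pillars and augments each point with its offset from the pillar centroid. The 0.16 m pillar size and the feature layout are simplified assumptions; the real PointPillars encoder adds further offsets and runs a simplified PointNet per pillar.

```python
import numpy as np
from collections import defaultdict

def pillar_features(points: np.ndarray, pillar_size: float = 0.16) -> dict:
    """Group (x, y, z, reflectance) points into vertical xy pillars and
    augment every point with its offset from the pillar centroid."""
    pillars = defaultdict(list)
    for p in points:
        key = tuple(np.floor(p[:2] / pillar_size).astype(int))  # xy bin = a pillar
        pillars[key].append(p)
    encoded = {}
    for key, pts in pillars.items():
        pts = np.asarray(pts)
        offsets = pts[:, :3] - pts[:, :3].mean(axis=0)  # distance to centroid
        encoded[key] = np.hstack([pts, offsets])        # (n, 4 + 3) per point
    return encoded

cloud = np.random.default_rng(3).uniform(0.0, 5.0, size=(200, 4))
encoded = pillar_features(cloud)
print(len(encoded), "pillars,", next(iter(encoded.values())).shape[1], "features/point")
```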
Mean Average Precision (%)
Easy Medium Hard
VoxelNet 89.35 79.26 77.39
Pixor 81.7 77.05 72.95
PointPillar 92.07 87.74 86.65
Vehicle Detection results – KITTI Dataset
• Voxelization
• Bird’s eye view
• Point pillars encoding
• Frontal image generation
• Sparse convolution
LiDAR representations
• LiDAR BEV + LiDAR frontal projection + camera image
• BEV is used to propose potential objects
• Multi-view features are then used to predict objects
Object-based feature extractor: MV3D
Chen, X.; Ma, H.; Wan, J.; Li, B.; Xia, T. Multi-view 3d
object detection network for autonomous driving. In
Proceedings of the IEEE Conference on Computer Vision
and Pattern Recognition, Honolulu, HI, USA, 21–26 July
2017
Mean Average Precision (%)
Easy Medium Hard
VoxelNet 89.35 79.26 77.39
Pixor 81.7 77.05 72.95
PointPillar 92.07 87.74 86.65
MV3D 86.55 78.1 76.67
Vehicle Detection results – KITTI Dataset
Frontal view fusion
• All images are fused together
• Including the camera image
• Lidar/camera fusion
• BGF fusion operator
Sensor Fusion Operators for Multimodal 2D Object Detection,
M.M. Pasandi, T. Liu, Y. Massoud, R, Laganiere, ISVC 2022
Mean Average Precision (%)
Easy Medium Hard
VoxelNet 89.35 79.26 77.39
Pixor 81.7 77.05 72.95
PointPillar 92.07 87.74 86.65
MV3D 86.55 78.1 76.67
BGF Fusion 94.90 88.40 78.38
Vehicle Detection results – KITTI Dataset
• Voxelization
• Bird’s eye view
• Pillars encoding
• Frontal image generation
• Sparse convolution
• When the data is very sparse, regular convolution becomes very
inefficient
• The idea is to compress the representation by ignoring zero values
• To this end, look-up tables are often used
LiDAR representations
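A toy sketch of the gather-scatter idea: a Python dict plays the role of the look-up table, mapping occupied coordinates to feature rows so that empty space is never visited. This version computes outputs only at occupied sites (submanifold-style); GPU implementations such as SECOND's build equivalent rule tables.

```python
import numpy as np

def sparse_conv2d(coords: np.ndarray, feats: np.ndarray, weights: np.ndarray) -> dict:
    """3x3 convolution over a sparse 2D feature map.

    coords:  (N, 2) integer coordinates of occupied sites
    feats:   (N, C_in) features of those sites
    weights: (3, 3, C_in, C_out) dense kernel
    A dict acts as the look-up table; zero regions are never touched.
    """
    index = {tuple(c): i for i, c in enumerate(coords)}  # coordinate -> feature row
    out = {}
    for (y, x) in index:
        acc = np.zeros(weights.shape[-1])
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                j = index.get((y + dy, x + dx))
                if j is not None:  # gather only from occupied neighbours
                    acc += feats[j] @ weights[dy + 1, dx + 1]
        out[(y, x)] = acc
    return out

rng = np.random.default_rng(4)
coords = rng.integers(0, 100, size=(50, 2))  # ~50 occupied sites in a 100x100 grid
result = sparse_conv2d(coords, rng.normal(size=(50, 8)), rng.normal(size=(3, 3, 8, 4)))
print(len(result), "active output sites instead of", 100 * 100)
```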
SECOND: Sparse convolution on point features
SECOND: Sparsely Embedded Convolutional
Detection by Yan Yan, Yuxing Mao, Bo Li, Sensors
2018.
Mean Average Precision (%)
Easy Medium Hard
VoxelNet 89.35 79.26 77.39
Pixor 81.7 77.05 72.95
PointPillar 92.07 87.74 86.65
MV3D 86.55 78.1 76.67
BGF Fusion 94.90 88.40 78.38
SECOND 91.92 87.92 85.39
Vehicle Detection results – KITTI Dataset
PV-RCNN: Point-based + Voxels
PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection (2020)
Shaoshuai Shi, Chaoxu Guo, Li Jiang, Zhe Wang, Jianping Shi, Xiaogang Wang, Hongsheng Li, IEEE Conference on Computer
Vision and Pattern Recognition (CVPR).
Mean Average Precision (%)
Easy Medium Hard
VoxelNet 89.35 79.26 77.39
Pixor 81.7 77.05 72.95
PointPillar 92.07 87.74 86.65
MV3D 86.55 78.1 76.67
BGF Fusion 94.90 88.40 78.38
SECOND 91.92 87.92 85.39
PV-RCNN 92.86 88.93 88.74
Vehicle Detection results – KITTI Dataset
• Solving the LiDAR sparsity problem
• The farther the object, the sparser the point density
How to further improve detection?
• Depth completion network
• Can be used to densify the point cloud
Pseudo LiDAR
www.mdpi.com/1424-8220/22/18/6969
Depth Completion with Twin Surface Extrapolation at Occlusion
Boundaries, Saif Imran, Xiaoming Liu, Daniel Morris, CVPR 2021
BTC: using shape completion
Behind the Curtain: Learning Occluded
Shapes for 3D Object Detection, Qiangeng
Xu, Yiqi Zhong, Ulrich Neumann, AAAI 2022
Mean Average Precision (%)
Easy Medium Hard
VoxelNet 89.35 79.26 77.39
Pixor 81.7 77.05 72.95
PointPillar 92.07 87.74 86.65
MV3D 86.55 78.1 76.67
BGF Fusion 94.90 88.40 78.38
SECOND 91.92 87.92 85.39
PV-RCNN 92.86 88.93 88.74
BTC 93.46 89.53 87.44
Vehicle Detection results – KITTI Dataset
LiDAR and pseudo-LiDAR: SFD
• Lidar/camera fusion
Sparse Fuse Dense: Towards High Quality 3D Detection with Depth Completion, X. Wu, L. Peng, H. Yang, L. Xie, C. Huang, C. Deng, H. Liu, D. Cai, CVPR 2022
Mean Average Precision (%)
Easy Medium Hard
VoxelNet 89.35 79.26 77.39
Pixor 81.7 77.05 72.95
PointPillar 92.07 87.74 86.65
MV3D 86.55 78.1 76.67
BGF Fusion 94.90 88.40 78.38
SECOND 91.92 87.92 85.39
PV-RCNN 92.86 88.93 88.74
BTC 93.46 89.53 87.44
SFD 95.85 91.92 91.41
Vehicle Detection results – KITTI Dataset
LiDAR densification: UYI
• Pseudo LiDAR module
Use Your Imagination: A Detector-Independent Approach For LiDAR
Quality Booster, Z. Zhang, T. Liu, R. Laganiere, 2023
[Diagram: the pseudo-LiDAR module feeding a LiDAR detector network]
LiDAR densification: UYI
• Vehicle detection performance boosting
Use Your Imagination: A Detector-Independent Approach For LiDAR
Quality Booster, Z. Zhang, T. Liu, R. Laganiere, 2023
Conclusion
• LiDAR provides accurate 3D detection
• An essential component in ADAS/AV
• LiDAR technology will continue to evolve
• Improve density
• Lower power
• Solid-state
• Lower cost
• Radar / LiDAR Convergence
• C. R. Qi, H. Su, K. Mo, and L. J. Guibas. Pointnet: Deep learning on point sets for 3d classification and
segmentation. CVPR 2017.
• Yang, B., Luo, W., Urtasun, R.: PIXOR: real-time 3D object detection from point clouds. CVPR 2018.
• X. Chen, H. Ma, J. Wan, B. Li, T. Xia, Multi-view 3d object detection network for autonomous driving.
CVPR 2017.
• Yan Yan, Yuxing Mao, Bo Li. SECOND: Sparsely Embedded Convolutional Detection. Sensors 2018.
• S. Shi, C. Guo, L. Jiang, Z. Wang, J. Shi, X. Wang, H. Li. PV-RCNN: Point-Voxel Feature Set Abstraction
for 3D Object Detection. CVPR 2020.
• X. Wu, L. Peng, H. Yang, L. Xie, C. Huang, C. Deng, H. Liu, D. Cai. Sparse Fuse Dense: Towards High
Quality 3D Detection with Depth Completion, CVPR 2022.
• Q. Xu, Y. Zhong, U. Neumann. Behind the Curtain: Learning Occluded Shapes for 3D Object Detection.
AAAI 2022.
References
Reading list on LiDAR and AI