5. Voxel-Based Methods (1/2)
[Maturana2015] Maturana, D., & Scherer, S. (2015). VoxNet: A 3D Convolutional Neural Network for Real-Time Object Recognition. In International Conference on Intelligent Robots and Systems.
[Li2017] Li, B. (2017). 3D fully convolutional network for vehicle detection in point cloud. In IEEE International Conference on Intelligent Robots and Systems.
[Zeng2015] Wang, D. Z., & Posner, I. (2015). Voting for Voting in Online Point Cloud Object Detection. Robotics: Science and Systems XI.
6. Voxel-Based Methods (2/2)
[Engelcke2017] Engelcke, M., Rao, D., Wang, D. Z., Tong, C. H., & Posner, I. (2017). Vote3Deep: Fast object detection in 3D point clouds using efficient convolutional neural networks. In IEEE International Conference on Robotics and Automation.
[Zhou2018] Zhou, Y., & Tuzel, O. (2018). VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection. In Conference on Computer Vision and Pattern Recognition.
[Yan2018] Yan, Y., Mao, Y., & Li, B. (2018). SECOND: Sparsely Embedded Convolutional Detection. Sensors, 18(10).
8. [Zeng2015] Voting for Voting (1/2)
The input point cloud (plus reflectance) is voxelized, and objects are detected with a 3D sliding window.
For each voxel, six hand-crafted features are computed: a binary indicator of whether any point falls in the grid cell, the mean and variance of reflectance, and three shape factors* (a Python sketch follows at the end of this slide). The features within a sliding window are concatenated and classified with a linear SVM.
The computation is repeated for N discretized orientations.
[Figure: input point cloud → voxelization → voxel feature vectors → 3D sliding window]
*C.-F. Westin, S. Peled, H. Gudbjartsson, R. Kikinis, and F. A. Jolesz, "Geometrical Diffusion Measures for MRI from Tensor Basis Analysis," in ISMRM '97, Vancouver, Canada, April 1997, p. 1742.
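Below is a minimal Python sketch of the per-voxel features described above: occupancy, reflectance mean and variance, and Westin's three shape factors computed from the eigenvalues of the local scatter matrix. The voxel size, function name, and NumPy-based layout are illustrative assumptions, not the authors' implementation.

import numpy as np

def voxel_features(points, reflectance, voxel_size=0.2):
    # points: (N, 3) LiDAR hits; reflectance: (N,) intensities.
    # Returns {voxel index: 6-dim feature vector} for occupied voxels only.
    idx = np.floor(points / voxel_size).astype(np.int64)
    features = {}
    for key in np.unique(idx, axis=0):
        mask = np.all(idx == key, axis=1)
        pts, refl = points[mask], reflectance[mask]
        # Feature 1: binary occupancy (the voxel contains at least one point).
        # Features 2-3: mean and variance of reflectance.
        occ, r_mean, r_var = 1.0, refl.mean(), refl.var()
        # Features 4-6: Westin's shape factors (linear, planar, spherical),
        # from the sorted eigenvalues of the points' scatter matrix.
        if len(pts) > 1:
            cov = np.cov((pts - pts.mean(axis=0)).T)
        else:
            cov = np.zeros((3, 3))  # degenerate: a single point has no shape
        ev = np.sort(np.linalg.eigvalsh(cov))[::-1]  # ev[0] >= ev[1] >= ev[2]
        s = ev.sum() + 1e-9
        c_lin = (ev[0] - ev[1]) / s
        c_pla = 2.0 * (ev[1] - ev[2]) / s
        c_sph = 3.0 * ev[2] / s
        features[tuple(key)] = np.array([occ, r_mean, r_var, c_lin, c_pla, c_sph])
    return features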
9. [Zeng2015] Voting for Voting (2/2)
The sliding window + linear SVM can be viewed as a convolution, so when the input is sparse it can be evaluated quickly by voting (sketched below).
a. When points exist only at the red, green, and cyan locations, the score at the window anchor (blue) is a weighted linear sum over those locations.
b. Each occupied location (red) casts a vote to the anchor at the blue position.
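A minimal sketch of the voting trick: because the linear-SVM score is a convolution of the window weights with a mostly empty feature grid, each occupied voxel simply adds its weighted contribution to every anchor whose window covers it, instead of sliding the window densely. The array shapes and names below are illustrative assumptions, not the authors' code.

import numpy as np

def vote_scores(occupied, weights, bias, anchor_shape):
    # occupied: {(i, j, k): 6-dim feature vector} of non-empty voxels.
    # weights: (Wx, Wy, Wz, 6) linear-SVM weights for one window orientation.
    # Returns the dense score map over all anchor positions.
    wx, wy, wz, _ = weights.shape
    scores = np.full(anchor_shape, bias, dtype=np.float64)
    for (i, j, k), feat in occupied.items():
        # The voxel at (i, j, k) falls inside every window anchored at
        # (i - dx, j - dy, k - dz); cast its weighted vote to each of them.
        for dx in range(wx):
            for dy in range(wy):
                for dz in range(wz):
                    a = (i - dx, j - dy, k - dz)
                    if all(0 <= a[d] < anchor_shape[d] for d in range(3)):
                        scores[a] += weights[dx, dy, dz] @ feat
    return scores  # cost scales with occupied voxels, not with grid size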
17. Bird's Eye View-Based Methods
[Yang2018] Yang, B., Luo, W., & Urtasun, R. (2018). PIXOR: Real-time 3D Object Detection from Point Clouds. In IEEE Conference on Computer Vision and Pattern Recognition.
[Luo2018] Luo, W., Yang, B., & Urtasun, R. (2018). Fast and Furious: Real Time End-to-End 3D Detection, Tracking and Motion Forecasting with a Single Convolutional Net. In Conference on Computer Vision and Pattern Recognition.
[Ren2018] Ren, M., Pokrovsky, A., Yang, B., & Urtasun, R. (2018). SBNet: Sparse Blocks Network for Fast Inference. In IEEE Conference on Computer Vision and Pattern Recognition (pp. 8711–8720).
[Yang2018_2] Yang, B., Liang, M., & Urtasun, R. (2018). HDNET: Exploiting HD Maps for 3D Object Detection. In Conference on Robot Learning (pp. 1–10).
[Simon2018] Simon, M., Milz, S., Amende, K., & Gross, H. (2018). Complex-YOLO: An Euler-Region-Proposal for Real-time 3D Object Detection on Point Clouds. ArXiv, arXiv:1803.
26. Other Methods
[Li2016] Li, B., Zhang, T., & Xia, T. (2016). Vehicle Detection from 3D Lidar Using Fully Convolutional Network. Robotics: Science and Systems.
[Kunisada2018] Kunisada, Y., Yamashita, T., & Fujiyoshi, H. (2018). Pedestrian-Detection Method based on 1D-CNN during LiDAR Rotation. In International Conference on Intelligent Transportation Systems (ITSC).
29. Experiments and Evaluation
Here we list, for each paper introduced above, its results as evaluated on the KITTI benchmark.
http://www.cvlibs.net/datasets/kitti
Evaluation covers the 3D, 2D, and Bird's Eye View object detection tasks.
For comparison, we also include the results of F-PointNet*, a detection method that uses both camera and LiDAR.
Studies that could not be introduced here are also included when they appear on the site above and their papers are available.
The numbered methods are those uncovered studies; the numbers in the benchmark tables correspond to the entries in the "Studies Not Covered in Detail" slides at the end.
*Qi, C. R., Liu, W., Wu, C., Su, H., & Guibas, L. J. (2018). Frustum PointNets for 3D Object Detection from RGB-D Data. In Conference on Computer Vision and Pattern Recognition.
45. Studies Not Covered in Detail (1/3)
1. Spinello, L., Arras, K. O., Triebel, R., & Siegwart, R. (2010). A Layered Approach to People Detection in 3D Range Data. In AAAI Conference on Artificial Intelligence (pp. 1625–1630).
2. Teichman, A., & Thrun, S. (2011). Tracking-based semi-supervised learning. In Robotics: Science and Systems.
3. Teichman, A., Levinson, J., & Thrun, S. (2011). Towards 3D object recognition via classification of arbitrary object tracks. In IEEE International Conference on Robotics and Automation (pp. 4034–4041).
4. Wang, D. Z., Posner, I., & Newman, P. (2012). What could move? Finding cars, pedestrians and bicyclists in 3D laser data. In IEEE International Conference on Robotics and Automation (pp. 4038–4044).
5. Behley, J., Steinhage, V., & Cremers, A. B. (2013). Laser-based Segment Classification Using a Mixture of Bag-of-Words. In International Conference on Intelligent Robots and Systems.
46. Studies Not Covered in Detail (2/3)
6. Asvadi, A., Garrote, L., Premebida, C., Peixoto, P., & Nunes, U. J. (2017). DepthCN: Vehicle Detection Using 3D-LIDAR and ConvNet. In International Conference on Intelligent Transportation Systems (ITSC).
7. Zidan, M. I., & Sallab, A. A. Al. (2018). YOLO3D: End-to-end real-time 3D Oriented Object Bounding Box Detection from LiDAR.
8. Feng, D., Rosenbaum, L., Timm, F., & Dietmayer, K. (2018). Leveraging Heteroscedastic Aleatoric Uncertainties for Robust Real-Time LiDAR 3D Object Detection. ArXiv, arXiv:1809.
9. Yun, P., Tai, L., Wang, Y., & Liu, M. (2018). Focal Loss in 3D Object Detection. ArXiv, arXiv:1809.
10. Feng, D., Rosenbaum, L., & Dietmayer, K. (2018). Towards Safe Autonomous Driving: Capture Uncertainty in the Deep Neural Network For Lidar 3D Vehicle Detection. In International Conference on Intelligent Transportation Systems (ITSC).
47. Studies Not Covered in Detail (3/3)
11. Minemura, K., Liau, H., Monrroy, A., & Kato, S. (2018). LMNet: Real-time Multiclass Object Detection on CPU using 3D LiDAR. In 3rd Asia-Pacific Conference on Intelligent Robot Systems (ACIRS).
12. Gustafsson, F., & Linder-Norén, E. (2018). Automotive 3D Object Detection Without Target Domain Annotations. Linköping University.
13. Zeng, Y., Hu, Y., Liu, S., Ye, J., Han, Y., Li, X., & Sun, N. (2018). RT3D: Real-Time 3D Vehicle Detection in LiDAR Point Cloud for Autonomous Driving. IEEE Robotics and Automation Letters.
14. Beltrán, J., Guindel, C., Moreno, F. M., Cruzado, D., García, F., & de la Escalera, A. (2018). BirdNet: a 3D Object Detection Framework from LiDAR information. ArXiv, arXiv:1805.
15. Wirges, S., Fischer, T., & Stiller, C. (2018). Object Detection and Classification in Occupancy Grid Maps using Deep Convolutional Networks. ArXiv, arXiv:1805.