Icy Moon Surface Simulation and Stereo Depth Estimation for Sampling Autonomy CL24_2268.pdf
1. Icy Moon Surface Simulation and
Stereo Depth Estimation for Sampling Autonomy
Ramchander Bhaskara1, Georgios Georgakis2,
Jeremy Nash2, Marissa Cameron2, Joseph Bowkett2,
Adnan Ansar2, Manoranjan Majji1, Paul Backes2
1Texas A&M University | 2Jet Propulsion Laboratory
2. Motivation
Engineering representation for icy-moon surface exploration at
sampling scale.
1. System design: Spectral understanding for surface autonomy
a. Imaging sensor design and selection
b. Computer vision algorithms
2. Analysis: Most challenging environments for visual perception
Will legacy vision pipelines be sufficient for perception?
3. Topics
1. Review: Challenges in icy-moon surface simulations
2. Software: Graphical Utility for Icy-Moon Surface Simulations
(GUISS)
3. Analysis: Stereo Depth Estimation
• Stereo matching evaluation under challenging visual hypotheses
4. Conclusion & ongoing work: Are simulations enough?
5. 1. Review – the why.
• Hydrothermal activity –
Enceladus/Europa
• Europa Clipper
• Jupiter Icy Moons Explorer (JUICE)
• Decadal survey: Enceladus Orbilander
[Figure: Enceladus plume (Cassini). Source: JPL/NASA]
[Figure: SAR image mosaic. Source: JPL/NASA]
6. 1. Review – the objectives.
• Sampling of plume materials
• Hard engineering problem
• Perception system goals:
• Recover topography (DEM) and texture for site selection
• Fault identification
• Localization
• Excavation tracking
Perception system development for sampling autonomy
7. 1. Review – the challenges.
• Structural diversity
• Plains dominated by ice blocks
• Surface disruptions and faulting
• Largely unknown at high resolution
• Spectral diversity
• High albedo > 0.8 (Enceladus)
• Backscattering
• Crystalline and amorphous ice
• Fresh snow, plumes, and salts
• Subsurface water
• Ice thickness: 3–30 km
[Figure: Enceladus' south pole terrain at 4 m/pixel (Cassini, JPL/Caltech)]
[Figure: Structural mapping of Enceladus (Cassini ISS, JPL/NASA)]
8. 1. Review – Literature.
• Scientific rendering for planetary surface simulations
Simulator – Features
• SurRender (Airbus) – Large-scale simulations; validation of radiometric and sensor models. Shader limitations.
• NaRPA (Texas A&M) – Limited to Lambertian models.
• PANGU (ESA) – Standard in vision-based navigation. Limited availability.
• SISPO (Univ. of Tartu) – Image aberrations and environment models.
• OceanWATERS (NASA) – No systematic treatment of reflectance models.
• DUST (NASA) – Lunar simulations at scale with accurate lighting; DTMs integrated.
• DARTS (JPL) – SAELSim; focus is on lander dynamics.
22. 3. Depth estimation – Baselines.
• StereoBM1
• Block Matching
• # Disparities
• Sum of Absolute Differences (SAD)
• JPLV2
• Used on Mars Exploration Rover
• DSMNet3
• Deep-learning based
• Improved generalization to new visual domains.
• IGEV4
• State-of-the-art.
1D. Scharstein and R. Szeliski, "A taxonomy and evaluation of dense two-frame stereo correspondence algorithms," International Journal of Computer Vision, vol. 47, pp. 7–42, 2002.
2S. B. Goldberg, M. W. Maimone, and L. Matthies, "Stereo vision and rover navigation software for planetary exploration," in Proceedings, IEEE Aerospace Conference, vol. 5. IEEE, 2002.
3F. Zhang, X. Qi, R. Yang, V. Prisacariu, B. Wah, and P. Torr, "Domain-invariant stereo matching networks," in Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part II. Springer, 2020, pp. 420–439.
4G. Xu, X. Wang, X. Ding, and X. Yang, "Iterative geometry encoding volume for stereo matching," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 21919–21928.
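The SAD block-matching idea behind StereoBM can be sketched in a few lines: for each left-image pixel, scan a fixed number of candidate disparities in the right image and keep the shift whose block window minimizes the sum of absolute differences. This is an illustrative toy (the function name and parameters are ours, not OpenCV's StereoBM API), and far slower than any production implementation:

```python
import numpy as np

def sad_block_match(left, right, num_disp=8, block=5):
    """Brute-force SAD block matching on rectified grayscale images.

    For each valid left-image pixel, test disparities d = 0..num_disp-1
    and keep the d whose block window has the smallest sum of absolute
    differences against the right image.
    """
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + num_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [
                np.abs(patch - right[y - half:y + half + 1,
                                     x - d - half:x - d + half + 1]).sum()
                for d in range(num_disp)
            ]
            disp[y, x] = np.argmin(costs)
    return disp

# Sanity check on a synthetic pair: the right view is the left view
# with all features shifted 4 px to the left, so disparity should be 4.
rng = np.random.default_rng(0)
left = rng.random((40, 60)).astype(np.float32)
right = np.roll(left, -4, axis=1)
disp = sad_block_match(left, right, num_disp=8, block=5)
assert np.all(disp[10:30, 20:50] == 4)
```

Real StereoBM adds prefiltering, uniqueness checks, and subpixel refinement on top of this core cost search.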
23. 3. Depth estimation – Datasets.
• Procedural terrain variation
• Planar to highly rugged terrains
• Texture variation
• UV maps and albedo maps
• BSDF parameters
• Albedo, specularity, roughness,
transmission, subsurface factor.
• Lighting variations
• Sun elevation angles
• Reconstructions
Dataset size: 5020 total images (3060 reconstructed, 1960 simulated)
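The procedural variation above could be driven by a simple per-scene parameter sampler. The sketch below is purely illustrative: the keys mirror the slide's BSDF and lighting parameters, but the function name, ranges, and defaults are our assumptions, not values from GUISS:

```python
import random

def sample_scene_params(rng=random):
    """Draw one randomized icy-terrain render configuration.

    Keys mirror the slide's BSDF list (albedo, specularity, roughness,
    transmission, subsurface factor) plus sun elevation. Ranges are
    illustrative assumptions only.
    """
    return {
        "albedo": rng.uniform(0.6, 1.0),          # icy moons: very high albedo
        "specular": rng.uniform(0.0, 1.0),
        "roughness": rng.uniform(0.05, 0.9),
        "transmission": rng.uniform(0.0, 0.5),
        "subsurface": rng.uniform(0.0, 0.3),
        "sun_elevation_deg": rng.uniform(5.0, 85.0),
    }

params = sample_scene_params(random.Random(1))  # seeded for repeatability
```

Each sampled dict would then configure one rendered stereo pair, giving independent coverage of the reflectance and lighting axes.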
24. 3. Depth estimation – Results.
• Qualitative evaluation
• Deep learning methods outperform classical methods
• Trained on visually dissimilar domains
• JPLV – comparable errors for real reconstructed scenes.
25. 3. Depth estimation – Metrics.
• StereoBM: 87 cm and 10.1% gap on the L1 and DoD metrics.
• IGEV achieves 2.3% on DoD.
[Table: Performance evaluation on the datasets]
• L1 errors of DSMNet and IGEV are 0.29 and 0.26 – errors are still significant.
• IGEV reaches 0.03 si-RMSE (scale-invariant).
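The L1 and si-RMSE numbers above suggest definitions along the following lines: mean absolute depth error, and an Eigen-style scale-invariant RMSE computed in log-depth (invariant to a global multiplicative scale on the prediction). The paper's exact formulation may differ, so treat this as a sketch:

```python
import numpy as np

def l1_error(pred, gt):
    """Mean absolute depth error (L1)."""
    return float(np.mean(np.abs(pred - gt)))

def si_rmse(pred, gt, eps=1e-8):
    """Scale-invariant RMSE in log space (Eigen-et-al.-style).

    Subtracting the mean log-ratio removes any global scale factor,
    so pred = k * gt scores ~0 for any k > 0.
    """
    d = np.log(pred + eps) - np.log(gt + eps)
    var = max(float(np.mean(d ** 2) - np.mean(d) ** 2), 0.0)
    return float(np.sqrt(var))

rng = np.random.default_rng(0)
gt = rng.uniform(0.5, 5.0, size=(32, 32))
assert l1_error(gt, gt) == 0.0
assert si_rmse(2.0 * gt, gt) < 1e-6   # scale-invariant: 2x scale scores ~0
```

The scale invariance is exactly why si-RMSE complements L1 here: a method with a consistent global depth bias can still score well on si-RMSE while its L1 error stays large.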
26. 3. Depth estimation – Metrics.
• Varying values of albedo
• StereoBM and JPLV degrade
• Deep learning models exhibit robustness to high brightness.
[Figure: Metrics with varying values of albedo]
27. 3. Analysis – Runtimes.
Rendering runtime
• Reconstructed scenes: 3 seconds per stereo pair and ground truth (640 x 480) on an Intel i7 CPU
• Synthetic scenes: 1 minute.
Stereo depth estimation runtime
• Intel i7 CPU, NVIDIA RTX 3080 GPU (resolution: 640 x 480).