PyCuVSLAM is the official Python wrapper for the NVIDIA cuVSLAM library, providing multiple visual tracking camera modes and Simultaneous Localization and Mapping (SLAM) capabilities. Leveraging CUDA acceleration and a rich set of features, PyCuVSLAM delivers highly accurate, computationally efficient, real-time performance.
- 🛠️ System Requirements and Setup
- đź’» Examples and Guides
- 🤖 ROS2 Support
- 📚 API Documentation and Technical Report
- ⚙️ Performance and Troubleshooting
- ⚖️ License
- 🎓 Citation
PyCuVSLAM is supported on the following OS and platforms, with the system requirements and installation methods listed below:
OS | Architecture | System Requirements | Supported Installation Methods |
---|---|---|---|
Ubuntu 22.04 (Desktop/Laptop) | x86_64 | Python 3.10, NVIDIA GPU with CUDA 12.6 | Native, Venv, Conda, Docker |
Ubuntu 24.04 (Desktop/Laptop) | x86_64 | NVIDIA GPU with CUDA 12.6 | Conda, Docker |
Ubuntu 22.04 (NVIDIA Jetson) | aarch64 | JetPack 6.1/6.2, Python 3.10, CUDA 12.6 | Native, Venv, Conda, Docker |
Make sure you have the CUDA Toolkit installed; you can download it from the NVIDIA website. If you are installing the CUDA Toolkit for the first time, restart your computer afterwards.
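To confirm that the toolkit and the GPU driver are visible, you can run:
nvcc --version
nvidia-smi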
Depending on your OS, platform, and the supported installation method, follow the instructions below to set up your environment and install PyCuVSLAM.
Note: Git LFS is required to correctly clone the PyCuVSLAM binaries. Please install it before cloning the repository:
sudo apt-get install git-lfs
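Then enable Git LFS for your user account:
git lfs install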
Important: This option is only available for Ubuntu 22.04 x86_64 and JetPack 6.1/6.2 aarch64.
There are no special instructions for a native install; proceed to PyCuVSLAM installation.
Important: This option is only available for Ubuntu 22.04 x86_64 and JetPack 6.1/6.2 aarch64.
Create a virtual environment:
python3 -m venv .venv
source .venv/bin/activate
Proceed to PyCuVSLAM installation
Important: This option has been tested on Ubuntu 22.04 x86_64 and Ubuntu 24.04 x86_64.
Create a conda environment and install the required packages:
conda create -n pycuvslam python==3.10 pip
conda activate pycuvslam
conda install -c conda-forge libstdcxx-ng
export LD_LIBRARY_PATH=$CONDA_PREFIX/lib:$LD_LIBRARY_PATH
Note: for Ubuntu 22.04, use version libstdcxx-ng=12.2.0.
The LD_LIBRARY_PATH environment variable must be set every time you activate the conda environment to ensure that the correct libpython library is loaded.
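To avoid re-exporting the variable on every activation, you can optionally persist it in the environment itself (a sketch using conda's per-environment variables; the paths are resolved at the time you run the command):
conda activate pycuvslam
conda env config vars set LD_LIBRARY_PATH=$CONDA_PREFIX/lib:$LD_LIBRARY_PATH
conda deactivate && conda activate pycuvslam  # reactivate so the variable takes effect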
Proceed to PyCuVSLAM installation
PyCuVSLAM provides Docker support for both x86_64 and Jetson platforms with RealSense camera integration.
- Setup NGC (NVIDIA GPU Cloud):
docker login nvcr.io --username '$oauthtoken'
For the password, enter your NGC API key from: https://org.ngc.nvidia.com/setup/api-keys
- Clone the repository:
git clone https://github.com/NVlabs/pycuvslam.git
cd pycuvslam
For x86_64 (Desktop/Laptop):
- Build the x86 Docker image:
docker build -f docker/Dockerfile.realsense-x86 -t pycuvslam:realsense-x86 .
- Run the x86 container:
bash docker/run_docker_x86.sh
For Jetson (aarch64):
- Build the Jetson Docker image:
docker build -f docker/Dockerfile.realsense-jetson -t pycuvslam:realsense-jetson .
- Run the Jetson container:
bash docker/run_docker_jetson.sh
Features:
- CUDA 12.6.1 support (Ubuntu 22.04)
- RealSense camera integration with librealsense
- X11 forwarding for GUI applications
- Automatic pycuvslam package installation
- USB device passthrough for camera access
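The run scripts wire these features up for you. An invocation with equivalent flags would look roughly like the sketch below (illustrative only; see docker/run_docker_x86.sh for the exact command):
# Illustrative sketch; the actual script may use different flags
docker run -it --rm \
  --gpus all \
  -e DISPLAY="$DISPLAY" \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  --privileged -v /dev:/dev \
  pycuvslam:realsense-x86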
- Clone the PyCuVSLAM repository:
git clone https://github.com/NVlabs/pycuvslam.git
cd pycuvslam
- Install the PyCuVSLAM package:
pip install -e bin/x86_64
For Jetson, use the following command:
pip install -e bin/aarch64
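You can then verify the installation by importing the module (cuvslam is the import name used throughout the examples):
python3 -c "import cuvslam; print('PyCuVSLAM imported successfully')"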
- Install PyCuVSLAM using one of the installation methods mentioned above, and then install the required packages for the examples:
pip install -r examples/requirements.txt
Explore various examples to quickly get started with PyCuVSLAM:
- Monocular Visual Odometry
- Monocular-Depth Visual Odometry
- Stereo Visual Odometry
- Stereo Visual-Inertial Odometry
- Multi-Camera Stereo Visual Odometry
- Distorted Images
- Image Masking
If you would like to use cuVSLAM in a ROS2 environment, please refer to the following links:
- For detailed API documentation, please visit the PyCuVSLAM API Documentation
- For technical details on the cuVSLAM algorithms, validation, and benchmarking results, refer to our Technical Report
cuVSLAM is a highly optimized visual tracking library validated across numerous public datasets and popular robotic camera setups. For detailed benchmarking and validation results, please refer to our technical report.
The accuracy and robustness of cuVSLAM can be influenced by several factors. If you experience performance issues, please check your system against these common causes:
- Hardware Overload: Hardware overload can negatively impact visual tracking, resulting in dropped frames or insufficient computational resources for cuVSLAM. Disable intensive visualization or image-saving operations to improve performance. For expected performance metrics on Jetson embedded platforms, see our technical report
- Intrinsic and Extrinsic Calibration: Accurate camera calibration is crucial. Ensure your calibration parameters are precise. For more details, refer to our guide on image undistortion. If you're new to calibration, consider working with an experienced vendor
- Synchronization and Timestamps: Accurate synchronization significantly impacts cuVSLAM performance. Make sure multi-camera images are captured simultaneously (ideally through hardware synchronization) and verify correct relative timestamps across cameras. Refer to our multi-camera hardware assembly guide for building a rig with synchronized RealSense cameras
- Frame Rate: Frame rate significantly affects performance. The ideal frame rate depends on translational and rotational velocities. Typically, 30 FPS is suitable for most "human-speed" motions. Adjust accordingly for faster movements
- Resolution: Image resolution matters. VGA resolution or higher is recommended. cuVSLAM efficiently handles relatively high-resolution images thanks to CUDA acceleration
- Image Quality: Ensure good image quality by using suitable lenses, correct exposure, and proper white balance to avoid clipping large image regions. For significant distortion or external objects within the camera's field of view, please refer to our guide on static masking
- Motion Blur: Excessive motion blur can negatively impact tracking. Ensure that exposure times are short enough to minimize motion blur (see the rough estimate after this list). If avoiding motion blur isn't feasible, consider increasing the frame rate or try the Mono-Depth or Stereo Inertial tracking modes
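As a rough sanity check for motion blur, blur in pixels is approximately v * t * f / Z, where v is camera speed (m/s), t is exposure time (s), f is focal length (pixels), and Z is scene depth (m). All numbers below are hypothetical:
# Hypothetical values: 1 m/s motion, 10 ms exposure, 600 px focal length, 2 m depth
python3 -c "v, t, f, Z = 1.0, 0.01, 600.0, 2.0; print(f'estimated blur: {v*t*f/Z:.1f} px')"
If the estimate exceeds a pixel or two, shorten the exposure or increase the frame rate.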
Q: When trying to run the examples, I get ImportError: pycuvslam/cuvslam/x86/cuvslam/pycuvslam.so: invalid ELF header
A: You need Git LFS to correctly pull binary files:
sudo apt-get install git-lfs
# in the repo directory:
git lfs install
git lfs pull
Q: Can I run PyCuVSLAM with Python 3.x?
A: We are working on supporting a wider range of systems, but the current version is only built for Python 3.10. We recommend using Docker or Conda for now.
Are you having problems running PyCuVSLAM? Do you have any suggestions? We'd love to hear your feedback in the issues tab.
This project is licensed under a non-commercial NVIDIA license; for details, refer to the LICENCE file.
If you find this work useful in your research, please consider citing:
@article{korovko2025cuvslam,
  title={cuVSLAM: CUDA accelerated visual odometry and mapping},
  author={Alexander Korovko and Dmitry Slepichev and Alexander Efitorov and Aigul Dzhumamuratova and Viktor Kuznetsov and Hesam Rabeti and Joydeep Biswas and Soha Pouya},
  year={2025},
  eprint={2506.04359},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2506.04359},
}