SLAM Challenge Dataset 2023

Advancing the field with a common benchmark

The benchmark is based on data from multiple active construction site environments, across multiple sessions and platforms. In addition to the handheld prototype from 2022, we introduce datasets from a tracked robot platform.

This year's challenge extends the evaluation to multi-session SLAM: operate robustly across locations and platforms to accurately aggregate map information for each construction site from multiple SLAM runs. Both single-session and multi-session submissions are accepted and are ranked on separate leaderboards.
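The ground truth listed below is 3DOF, i.e. positions, so single-session accuracy can be gauged locally with a position-only absolute trajectory error. The following is a minimal sketch of such a check, assuming the estimate has already been interpolated to the ground-truth timestamps; the official evaluation pipeline may use a different alignment and metric.

import numpy as np

def ate_rmse(est_xyz, gt_xyz):
    """Position-only absolute trajectory error (RMSE) after rigid alignment.

    est_xyz, gt_xyz: (N, 3) arrays of positions at matching timestamps.
    """
    mu_e, mu_g = est_xyz.mean(axis=0), gt_xyz.mean(axis=0)
    E, G = est_xyz - mu_e, gt_xyz - mu_g
    # Kabsch/Umeyama closed-form rotation (no scale) mapping estimate -> truth
    U, _, Vt = np.linalg.svd(E.T @ G)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_g - R @ mu_e
    aligned = est_xyz @ R.T + t
    return float(np.sqrt(np.mean(np.sum((aligned - gt_xyz) ** 2, axis=1))))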

Site 1 / Handheld
  Sequence                    Rosbag   Ground truth
  Floor 0                     22 GB    3DOF
  Floor 1                     18 GB    3DOF
  Floor 2                     18 GB    3DOF
  Underground                 32 GB    3DOF
  Stairs                      17 GB    3DOF

Site 2 / Robot
  Sequence                    Rosbag   Ground truth
  Parking, 3 floors down      63 GB    3DOF
  Floor 1, large room         28 GB    3DOF
  Floor 2, large room (dark)  32 GB    3DOF

Site 3 / Handheld
  Sequence                    Rosbag   Ground truth
  Underground 1               10 GB    3DOF
  Underground 2               15 GB    3DOF
  Underground 3               19 GB    3DOF
  Underground 4               11 GB    3DOF

Additional sequences, Site 2 / Handheld
  Sequence                    Rosbag   Ground truth
  Central staircase            9 GB    3DOF
  Floor 1, large room         21 GB    3DOF
  Floor 2, large room (dark)  14 GB    3DOF

Challenge sequences

Each location forms a multi-session SLAM group with overlap across its sequences; however, each trajectory can also be evaluated as an individual single-session run. We provide calibration sequences for both the handheld device and the robot.
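Each sequence is distributed as a rosbag. Assuming ROS 1 bags, the standard rosbag Python API can be used to inspect them; the sketch below lists a bag's topics and iterates over one of them. The filename and topic name are placeholders, not the actual names used in the dataset.

import rosbag

with rosbag.Bag("site1_floor0.bag") as bag:                  # hypothetical filename
    info = bag.get_type_and_topic_info()
    for topic, meta in info.topics.items():                  # list topics, types, message counts
        print(topic, meta.msg_type, meta.message_count)
    # Iterate over a single topic (the name is an assumption; check the list above)
    for topic, msg, t in bag.read_messages(topics=["/hesai/pandar"]):
        print("first lidar message at t = %.6f s" % t.to_sec())
        break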

To achieve higher accuracy at Site 2, use the corresponding additional sequences to perform multi-device SLAM. Please note that the additional sequences at Site 2 contain rapidly flashing lights in their video streams, which may affect sensitive viewers; viewer discretion is advised.

GitHub

github.com/Hilti-Research/hilti-slam-challenge-2023

Citation

When using this work in an academic context, please cite it as follows:

@misc{nair2024hiltislamchallenge2023,
    author = {Ashish Devadas Nair and Julien Kindle and Plamen Levchev and Davide Scaramuzza},
    title = {{Hilti} {SLAM} Challenge 2023: Benchmarking Single + Multi-session {SLAM} across Sensor Constellations in Construction},
    year = {2024},
    eprint = {2404.09765},
    archivePrefix = {arXiv},
    primaryClass = {cs.RO},
    url = {https://arxiv.org/abs/2404.09765}
}

License

All datasets and benchmarks on this page are copyrighted by us and published under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. This means that you must attribute the work in the manner specified by the authors, you may not use this work for commercial purposes, and if you alter, transform, or build upon this work, you may distribute the resulting work only under the same license.

Contact

Any dataset-related questions and concerns can be raised as issues at github.com/Hilti-Research/hilti-slam-challenge-2023/issues

Other topics should be forwarded to challenge@hilti.com

Handheld Sensor Suite

Our sensor suite consists of a Sevensense Alphasense Core camera head with five 0.4 MP global-shutter cameras and a Hesai PandarXT-32 lidar.

The sensors are rigidly mounted on an aluminium platform for handheld operation. Camera synchronization is handled by an FPGA, and the cameras and the lidar are synchronized via PTP; the clocks of all sensors are aligned to within 1 ms. An external steel pin is attached to the setup.
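As a quick sanity check of the FPGA camera triggering, one can compare the image header stamps across the five camera streams frame by frame. The sketch below assumes the cameras are triggered simultaneously and that frame k of each stream corresponds to the same trigger (i.e. no dropped frames); the topic names are placeholders.

import rosbag

cam_topics = ["/alphasense/cam%d/image_raw" % i for i in range(5)]   # assumed topic names

stamps = {t: [] for t in cam_topics}
with rosbag.Bag("site1_floor0.bag") as bag:                          # hypothetical filename
    for topic, msg, _ in bag.read_messages(topics=cam_topics):
        stamps[topic].append(msg.header.stamp.to_sec())

n = min(len(s) for s in stamps.values())
for k in range(n):
    frame = [stamps[t][k] for t in cam_topics]
    spread_ms = 1e3 * (max(frame) - min(frame))
    if spread_ms > 1.0:                                              # flag frames spread by more than 1 ms
        print("frame %d: camera stamps spread %.3f ms" % (k, spread_ms))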

Robot Sensor Suite

The new robot-mounted sensor suite consists of a Robosense BPearl hemispherical lidar, an Xsens MTi-670 IMU, and four OAK-D cameras. The sensors are mounted on a rigid frame for a drilling robot platform, drawing inspiration from Hilti's iconic Jaibot.

The sensors are synchronized by a combination of PTP and hardware triggering. The IMU and lidar clocks are aligned to within 1 ms, while the IMU and camera clocks are aligned to within 2 ms.

Calibration

The calibration file can be downloaded here. Alternatively, you can use the provided calibration sequences to compute your own calibration. The YAML file for the calibration board can be downloaded here. Datasheets for the sensors are also available.
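Once downloaded, the calibration file can be parsed directly with a YAML library. The sketch below assumes a Kalibr-style camchain layout (cam0.T_cam_imu as a 4x4 matrix and cam0.intrinsics as [fx, fy, cx, cy]); the actual key names and filename in the provided file may differ, so check them before relying on this.

import numpy as np
import yaml

with open("calibration.yaml") as f:                    # hypothetical filename
    calib = yaml.safe_load(f)

T_cam0_imu = np.array(calib["cam0"]["T_cam_imu"])      # assumed: 4x4 extrinsics, IMU frame -> cam0 frame
fx, fy, cx, cy = calib["cam0"]["intrinsics"]           # assumed: pinhole intrinsics
T_imu_cam0 = np.linalg.inv(T_cam0_imu)                 # cam0 frame -> IMU frame
print("cam0 position in the IMU frame:", T_imu_cam0[:3, 3])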

Partners

The challenge is a collaboration between industry and academia.