KITTI Vision Benchmark Suite;We take advantage of our autonomous driving platform Annieway to develop novel challenging real-world computer vision benchmarks. Our tasks of interest are: stereo, optical flow, visual odometry, 3D object detection and 3D tracking;http://www.cvlibs.net/datasets/kitti/;stereo,flow,odometry,tracking,detection,road,maps,city
Audi Autonomous Driving Dataset;We have published the Audi Autonomous Driving Dataset (A2D2) to support startups and academic researchers working on autonomous driving. Equipping a vehicle with a multimodal sensor suite, recording a large dataset, and labelling it is time- and labour-intensive.;https://www.a2d2.audi/a2d2/en.html;semantic,cloud,segmentation,detection,road,maps,city
ApolloScape Dataset;Trajectory and 3D perception (lidar object detection and tracking) dataset including about 100K image frames, 80K lidar point clouds, and 1000 km of trajectories in urban traffic. The dataset covers varying conditions and traffic densities, including many challenging scenarios in which vehicles, bicycles, and pedestrians move among one another.;http://apolloscape.auto/;stereo,flow,semantic,cloud,segmentation,detection,road,maps,city
Velodyne SLAM;Here, you can find two challenging datasets recorded with the Velodyne HDL64E-S2 scanner in the city of Karlsruhe, Germany.;http://www.mrt.kit.edu/z/publ/download/velodyneslam/dataset.html;detection,images,city
Daimler Urban Segmentation Dataset;The Daimler Urban Segmentation Dataset consists of video sequences recorded in urban traffic, comprising 5000 rectified stereo image pairs at a resolution of 1024x440. 500 frames come with pixel-level semantic annotations into 5 classes: ground, building, vehicle, pedestrian, and sky. Dense disparity maps are provided as a reference; however, these are not manually annotated but computed using semi-global matching.;http://www.6d-vision.com/scene-labeling;stereo,labelling,detection,road,maps,city
nuScenes dataset;The nuScenes dataset is a public large-scale dataset for autonomous driving developed by Aptiv Autonomous Mobility. By releasing a subset of our data to the public, Aptiv aims to support public research into computer vision and autonomous driving.;https://www.nuscenes.org/;labelling,detection,road,maps,city