nuScenes v2.0 - Official code release (#458)
* Merge lidarseg_v1.0 into nuscenes_v2.0 (#434)

* nuScenes-lidarseg (#343)

* initial commit for NuScenes-lidarseg

* add function to list lidarseg categories and create colormap

* integrated lidarseg function into nuscenes

* print number of lidarseg annotations

* Render lidarseg labels in image

* Improve error message when pointsensor is not lidar

* Calculate stats for lidarseg

* Assertion for nsweeps to display lidarseg labels

* Render only points in image which are labeled with classes the user chooses

* Modularize function to filter colormap

* Modularize function for generating colors for lidarseg classes

* Clean up example usages

* Allow user to filter and show desired lidarseg labels for render_sample_data

* Save figure from render_pointcloud_in_image without excessive border

* Render scene with pointclouds and lidarseg labels

* Shift conditional statements in render_sample_data()

* Check that lidar pointcloud is a keyframe

* Render all camera channels with pointclouds and lidarseg labels for all scenes

* Allow user to show lidarseg labels for render_sample()

* Hacks to render for VOs

* Added scene to filename for easy retrieval for VOs

* Render scenes for VOs per class

* Updated rendering of scenes for cameras to output videos

* Function to convert plt scatter plot to cv2

* Tidy up examples

* Print stats for lidarseg keyframe

* Enable sorting of counts

* Tweak verbosity for render_sample

* Include class index when printing sample stats

* Assert that path for video given by user ends with .avi

* Enable showing of legend when mapping pc to image

* Style edits

* nuScenes-lidarseg initial tutorial

* Cleanup, move methods out of tutorial, copyright

* Cleanup

* Remove empty cell

* Throw errors rather than warnings

* Address some comments on style.

* Add full stops to comments

* Tidy up utils

* Removed TODOs

* Initial draft of lidarseg tutorial

* Added gifs for tutorial

* Tidy up tutorial

* Add setup instructions to tutorial

* Edit style for one assertion

* Changed show_lidarseg_labels default value in render_sample

* Change type to np.ndarray

* Fix some typos

* Remove version from lidarseg.json

* Bugfix

* Unbugfix

* Don't add category_lidarseg to table_names

* Comment out dangerous commands

Co-authored-by: Holger Caesar <holger@nutonomy.com>

* View predictions using lidarseg devkit (#396)

* Allow user to view preds using render_pointcloud_in_image

* Allow users to get stats of predictions for a sample

* Add visualization for user's preds to relevant functions

* Updated tutorial

* Fixed some typos in tutorial

* Comment out code which may crash notebook

* Fix double #

* Get lidarseg file name from json instead, default coloring to depth if empty json

* Change show_lidarseg_preds to lidarseg_preds_bin_path

* Amend doc string for lidarseg_preds_bin_path

* Amend render video functions to take folder name for preds

* Amend tutorial

* Remove gifs from folder

* Clear outputs from tutorial

* Update documentations for lidarseg (#404)

* Update docs for lidarseg

* Fix some typos

* Address comments for docs

* Update folder structure description in notebook

* Automatically determine which lidarseg classes are present in a pointcloud projected onto an image (#410)

* Auto find lidarseg labels present in projected pcl

* Add assertion to check num of bin files equals num of lidarseg records

* Address comments

* Adjust legend for aesthetics

* Improve aesthetics for render_scene_with_pointclouds_for_all_cameras (#412)

* Flip back cams horizontally for aesthetics

* Explicitly set margins to zero and turn axes off

* Prevent final frames from showing up in notebook if user stops render

* Allow render_scene_with_pointclouds_for_all_cameras to output frames as images

* Update indices in tutorial examples

* Style changes

* Clear memory in render_camera_channel_with_pointclouds if user stops rendering

* Graceful exit if user stops rendering

* Change random seed for color scheme

* Style edits

* Enable bboxes to be plotted with lidarseg (#418)

* Include rendering of lidarseg with bboxes for render_scene_with_pointclouds_for_all_cameras

* Add option to render bboxes in lidarseg videos

* Style edits

* Rename methods

* Renamed argument to be more similar to nuScenes

* Rename show_lidarseg_labels to show_lidarseg

* Added comment on videos and images

* Rewording

* Making plotting of bboxes with lidarseg more modular

* Remove option to save renders as images for render_scene_lidarseg

* Use render_mode instead of render_if_no_points

* Style edits

* Update notebook

* Style edits for notebook

* Remove render_if_no_points for render_pointcloud_in_image

* Style edits

* Update file names for renders

* Save image even for verbose=False

* Made plot_points_and_bboxes

* Update coloring

* Update colormap

* Choose nicer colors for some classes

* Deep copy for colormap

* Reduce bbox line width and update colormap

* One line per arg

* Update colormap

* Remove unused method for arbitrary colormap

* Remove leading 0 for filename

* One colormap to rule them all

* Update error msg for checking colormap to colors conversion

* Add dpi as an arg for rendering lidarseg scenes

* Amend docstring in get_colormap()

* Make colormap DRY

Co-authored-by: Holger Caesar <holger@nutonomy.com>

* Remove render_if_no_points argument from lidarseg tutorial (#423)

* Remove render_if_no_points arg and section

* Rephrasing

* Single category.json for both nuScenes and nuScenes-lidarseg (#424)

* Change 'label' to 'name' when loading category json; check if preds folder exists

* Check in various functions that lidarseg is installed

* Shorten assertion statement

* Check that new version of category.json is used, if loading lidarseg

* Check only for 'index' in category records

* nutonomy green for driveable surface (#425)

* Improve lidarseg rendering methods, update tutorial, add unit tests (#429)

* Initial commit to address comments

* Fix bug where passing np.array instead of array into filter_lidarseg_labels throws an error

* Rephrase some parts of tutorial

* Include legend for render_sample_data

* Add arg for show_lidarseg_legend in render_sample_data

* Adjust aesthetics of legend in render_sample_data

* Update colormap

* Make creating legend modular

* Ensure only labels in pc are included in legend

* Explain verbose

* Check that filter_lidarseg_labels is either a list or np.array

* Allow user to list stats by class index

* Update docstrings

* Update tutorial to demo sort_by

* Add unit tests

* Make painting of labels modular

* Add docstrings

* Add lidarseg annotation instructions, update lidarseg tutorial intro (#431)

* Initial commit

* Add links from classes to examples

* Link from examples to class definitions using Top

* Remove extra white line

* Include script to render lidarseg histogram for split (#433)

* Initial commit

* Arrange order of seaborn in requirements.txt

* Add docstrings

* Add more docstrings

* Remove seaborn from requirements

* Address comments

* Add horizontal gridlines

* Replicate look and feel of seaborn without needing package

* Style edits

* Add type in truncate_class_name

Co-authored-by: Holger Caesar <holger@nutonomy.com>

* Align indices in nuImages and nuScenes-lidarseg (#440)

* Order colormap

* Do not assume any ordering in colormap

* nuImages v1.0 (#435)

* nuImages v0.1 (#372)

* Implemented lazy loading

* Implement rendering of images with annotations

* Disentangle requirements, better rendering

* Explicitly init tables

* Fix bug with explicitly assigned tables

* Fix instance mask parsing

* Fixed rendering

* Remove unnecessary matplotlib commands

* Remove explorer

* Implemented list_attributes

* list_cameras() and better lazy loading

* Implement list_category

* Improve tutorial, implement missing functions

* Fix case when mask is empty, fix attribute names, change color map

* Overhauled tutorial

* Cleanup

* Split readme into nuImages and nuScenes parts

* Typo

* Address review comments, add split to version

* Render another image

* nuImages new schema (#427)

* Increment version number to v1.0

* Update table names

* Installation instructions

* Improve tutorial

* Replace with global colormap, update tutorial

* Patch all methods for new schema

* Add detailed attribute tests

* Added test for complete attributes

* Add tests for foreign keys

* Added schema figures

* Add missing requirement

* Skip long unit tests

* Manual test alternative

* Fix test issues and setup for all versions

* Completed tests

* Better error message

* Show sample_token instead

* Readd init shortcut for tutorial

* Sort schema alphabetically

* Add new table from nuScenes-lidarseg

* Rename image to sample

* Auto-format

* Add new method list_sample_content

* Add missing ego vehicle color

* Remove nbr_points field from lidarseg table

* Bugfix, select only the jpg

* Typo

* Modify render_image to take sample_data tokens

* Start integrating depth methods

* Rework depth rendering

* Rename function

* Fix one more

* Don't init sample_to_key_frame_map

* Fix lambda expression

* Fix inconsistent naming

* Fix indexing bug

* Select all lidar tokens

* Fix lidar extension

* Add depth rendering to tutorial

* Rework rendering of next image

* Return projected points as 2d, set tutorial to val split, spelling

* Workaround for wrong intrinsics format

* Fix projection bug

* Standardize out_path options

* Remove intrinsics workaround, align tutorials, add schema to tutorial

* Remove deprecated nuim.sample_to_key_frame method

* Typo

* Use get_sample_content, simplify table loading

* Add test for prev/next pointers

* Fix typo in driveable

* Adjust test for new database fields

* Fix sorting bug

* Change output string

* Output message

* Add script to render random images to disk

* Fix bug, improve output messages

* Fix wrong path, remove title for better visibility, disable lazy

* Debug numerical overflow, improve render script, filter by camera, align render_depth output size

* Avoid overflow and remove special cases

* Simplify depth completion options

* Cleanup

* Implemented pointcloud rendering

* Color by height

* Adjust distort settings, add shortcut function

* Reorganize and add method for trajectory

* Fancy rendering method for trajectories

* Cleanup

* Rename tutorials, remove the word basic

* Move all docs to docs folder and rename some

* Throw error if sweeps does not exist

* Revamp tutorial

* Add arguments in render, fix bugs

* Use typing throughout, improve documentation

* Improve installation instructions

* Auto-clean and updated comments

* More cleanup

* Address some review comments

* Rename variable

* Remove deprecated output argument

* Add new functions to tutorial

* FIX CRITICAL BUG IN MASK_DECODE, Add list_anns, dynamic box_line_width, change rendering order, improve tutorial

* nuImages videos (#432)

* Fix automatic test discover and disable test outputs

* Add nuImages schema

* Add function to render videos

* Unified image and video rendering

* Further improvements to render script

* Fix bug around unchanged mode

* Check number of sample_datas

* Add new function to tutorial

* Add sort order argument to list_categories

* Fix sorting

* Garbage collection to avoid memory overflow

* Format

* Reorganize render_images, add render_rare_classes

* Rename has_rider attribute in test

* Minor fix for tests on test set

* Replace val with mini

* Address review comments

* Fix merge conflicts, remove warnings, change rendered car

* Fix test for mini

* Remove todo

* Added filtering options to various scripts

* Updated render_depth method (v2) (#437)

* Separate methods for dense and sparse depth

* More parameters for dense depth

* Print velocity, fix fisheye

* Workaround when no key_lidar_token is known

* Fix previous modification

* Filter by fileformat

* Handle missing lidar

* Same for depth_sparse

* Update nuImages documentation, add ability to get segmentation masks, improve listing and rendering functions (#438)

* Shift section of lazy loading

* Remove sentence in README saying that files will not be available for public download

* Inform user in tutorial that with_category must be True in order to set We can also set with_attributes=True

* Inform user in tutorial that with_category must be True in order to set with_attributes=True

* Section on setup before section on verifying install in installation.md

* Shift lazy description back and just state at intro that it will be discussed later

* Improve docstring and add assertion for render_image, allow list_attributes to be sorted

* Change  to  in list_cameras

* Change cones and barriers from black to gray in tutorial

* Change foreground to object and background to surface

* Allow user to choose to draw only boxes, surfaces, both, or all in render_image

* Update tutorial to mention with_annotations options in render_image

* Give user ability to adjust font size in render_image

* Add get_segmentation method

* Classes are 1-indexed in masks

* Address comments for nuimages and util

* Address comments for tutorial

* Tidy up tutorial

* More content for instructions, improve structure of instructions

* Address comments

* Address some comments for instructions

* Address more comments on instructions

* Tidy up instructions a bit

* Edits from proofreading

* Re-introduce fonts_valid param in get_font

* Align class indices in lidarseg and get_segmentation

* Remove class descriptions, add attributes to instructions

* Only include added attributes

* Add link to attributes in instructions

* Tidy up some more

* Proofread

* Proofread again

* Proofread more

* Address more comments on instructions

* Do not assume category.json is sorted when creating index map in get_segmentation

* Add links for images for instructions

* Fix list_anns

* Add samples images for surface classes

* Remove images

* Add links to images

* Address comments for phrasing

* Added empty line before last image

* Attempt to resolve requirements issue

* Revert

* Copy entire requirements folder

* Fix wrong folder

Co-authored-by: Holger Caesar <holger@nutonomy.com>

* Resolve some warnings, adapt to new render function arguments

* Add log name to the output path

* Use future ego pose

* Debug output

* Don't do ego compensation

* Remove lidar code in nuImages (#442)

* Purge depth from code

* Update documentation

* Simplify variable names

* Wording

* Wording

* Fix wrong assertion

* Typo

* Install pycocotools from conda-forge (#441)

* install pycocotools from conda-forge

* less verbose output

* Minor corrections

* Address review comments

Co-authored-by: whyekit-aptiv <62535720+whyekit-aptiv@users.noreply.github.com>
Co-authored-by: Valentyn Klindukh <valentyn.klindukh@nutonomy.com>

* nuScenes 2.0 tutorial revamp (#446)

* Add list_sample_data

* Fix indexing bug

* Fix type

* Fix columns

* Type conversion

* Type conversion

* Fix bug with different length of values and bins

* By default, use standard font size

* Workaround for mini split

* Fix swapped dimensions

* Add nuim.list_sample_data_histogram()

* More verbose

* Make matplotlib style attributes case-sensitive

* Fix rendering issue in map expansion

* Change figure alignment in prediction tutorial

* nuImages data export (#447)

* Add boilerplate

* Add export script

* Print status

* Typo

* Add mini split to all

* Rename files

* Skip only individual files

* Fix file paths in mini

* Remove redundant path

* Add assert_download for nuImages

* Clarification on past sweeps

* VERY SLOW RENDERING with alpha composite

* Simplify alpha composite

* Cleanup

* Update links and silence test

* Update more links

* Update FAQ

* Fix requirements for new pip package (#449)

* nuScenes v2.0 alpha (#452)

* Fix broken link in readme

* Dummy typo

* Add a note on bicycle racks

* Fix render_rare_classes, additional checks for object_tokens in render_image

* Fix tutorials link

* Address comments from nuScenes-lidarseg alpha testing (#451)

* Fix broken link in readme

* Dummy typo

* Add a note on bicycle racks

* Fix render_rare_classes, additional checks for object_tokens in render_image

* Amend classes shown in tutorial

* Add note for MacOSX users

* Shift matplotlib macOSX tip to installation.md instead

* Updated backend instructions

Co-authored-by: Holger Caesar <holger@nutonomy.com>

* Update requirements

* Address review comments

* Address review comments

* Address review comments

Co-authored-by: whyekit-aptiv <62535720+whyekit-aptiv@users.noreply.github.com>
Co-authored-by: Valentyn Klindukh <valentyn.klindukh@nutonomy.com>
3 people authored Sep 1, 2020
1 parent 274725a commit 9b492f7
Showing 54 changed files with 4,850 additions and 470 deletions.
110 changes: 80 additions & 30 deletions README.md
@@ -1,18 +1,27 @@
# nuScenes devkit
Welcome to the devkit of the [nuScenes](https://www.nuscenes.org) dataset.
Welcome to the devkit of the [nuScenes](https://www.nuscenes.org/nuscenes) and [nuImages](https://www.nuscenes.org/nuimages) datasets.
![](https://www.nuscenes.org/public/images/road.jpg)

## Overview
- [Changelog](#changelog)
- [Dataset download](#dataset-download)
- [Map expansion](#map-expansion)
- [Devkit setup](#devkit-setup)
- [Getting started](#getting-started)
- [nuImages](#nuimages)
- [nuImages setup](#nuimages-setup)
- [Getting started with nuImages](#getting-started-with-nuimages)
- [nuScenes](#nuscenes)
- [nuScenes setup](#nuscenes-setup)
- [nuScenes-lidarseg](#nuscenes-lidarseg)
- [Prediction challenge](#prediction-challenge)
- [CAN bus expansion](#can-bus-expansion)
- [Map expansion](#map-expansion)
- [Getting started with nuScenes](#getting-started-with-nuscenes)
- [Known issues](#known-issues)
- [Citation](#citation)

## Changelog
- Aug. 31, 2020: Devkit v1.1.0: nuImages v1.0 and nuScenes-lidarseg v1.0 code release.
- Jul. 7, 2020: Devkit v1.0.9: Misc updates on map and prediction code.
- Apr. 30, 2020: nuImages v0.1 code release.
- Apr. 1, 2020: Devkit v1.0.8: Relax pip requirements and reorganize prediction code.
- Mar. 24, 2020: Devkit v1.0.7: nuScenes prediction challenge code released.
- Feb. 12, 2020: Devkit v1.0.6: CAN bus expansion released.
@@ -26,7 +35,49 @@ Welcome to the devkit of the [nuScenes](https://www.nuscenes.org) dataset.
- Oct. 4, 2018: Code to parse RADAR data released.
- Sep. 12, 2018: Devkit for teaser dataset released.

## Dataset download
## Devkit setup
We use a common devkit for nuScenes and nuImages.
The devkit is tested for Python 3.6 and Python 3.7.
To install Python, please check [here](https://github.com/nutonomy/nuscenes-devkit/blob/master/docs/installation.md#install-python).

Our devkit is available and can be installed via [pip](https://pip.pypa.io/en/stable/installing/):
```
pip install nuscenes-devkit
```
For an advanced installation, see [installation](https://github.com/nutonomy/nuscenes-devkit/blob/master/docs/installation.md) for detailed instructions.

## nuImages
nuImages is a stand-alone large-scale image dataset.
It uses the same sensor setup as the 3D nuScenes dataset.
The structure is similar to nuScenes and both use the same devkit, which makes the installation process simple.

### nuImages setup
To download nuImages you need to go to the [Download page](https://www.nuscenes.org/download),
create an account and agree to the nuScenes [Terms of Use](https://www.nuscenes.org/terms-of-use).
For the devkit to work you will need to download *at least the metadata and samples*; the *sweeps* are optional.
Please unpack the archives to the `/data/sets/nuimages` folder \*without\* overwriting folders that occur in multiple archives.
Eventually you should have the following folder structure:
```
/data/sets/nuimages
samples - Sensor data for keyframes (annotated images).
sweeps - Sensor data for intermediate frames (unannotated images).
v1.0-* - JSON tables that include all the metadata and annotations. Each split (train, val, test, mini) is provided in a separate folder.
```
If you want to use another folder, specify the `dataroot` parameter of the NuImages class (see tutorial).
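
As a quick sanity check, a minimal snippet along the following lines should load the metadata (the `v1.0-mini` split and the default data root below are just examples; adjust them to your setup):
```
from nuimages import NuImages

# Load the mini split; lazy=True defers loading of individual tables until they are accessed.
nuim = NuImages(dataroot='/data/sets/nuimages', version='v1.0-mini', verbose=True, lazy=True)

# Access the first annotated image (sample) as a quick check that the files are in place.
sample = nuim.sample[0]
print(sample)
```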

### Getting started with nuImages

Please follow these steps to make yourself familiar with the nuImages dataset:
- Get the [nuscenes-devkit code](https://github.com/nutonomy/nuscenes-devkit).
- Run the tutorial using:
```
jupyter notebook $HOME/nuscenes-devkit/python-sdk/tutorials/nuimages_tutorial.ipynb
```
- See the [database schema](https://github.com/nutonomy/nuscenes-devkit/blob/master/docs/schema_nuimages.md) and [annotator instructions](https://github.com/nutonomy/nuscenes-devkit/blob/master/docs/instructions_nuimages.md).

## nuScenes

### nuScenes setup
To download nuScenes you need to go to the [Download page](https://www.nuscenes.org/download),
create an account and agree to the nuScenes [Terms of Use](https://www.nuscenes.org/terms-of-use).
After logging in you will see multiple archives.
@@ -42,59 +93,58 @@ Eventually you should have the following folder structure:
```
If you want to use another folder, specify the `dataroot` parameter of the NuScenes class (see tutorial).
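
As a quick sanity check, something like the following should load the database once the archives are extracted (the `v1.0-mini` split and the default data root are only examples):
```
from nuscenes.nuscenes import NuScenes

# Load the mini split from the default data root.
nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes', verbose=True)

# List the scenes and pick the first annotated sample.
nusc.list_scenes()
my_sample = nusc.sample[0]
```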

## Prediction Challenge
### nuScenes-lidarseg
In August 2020 we published [nuScenes-lidarseg](https://www.nuscenes.org/nuscenes#lidarseg) which contains the semantic labels of the point clouds for the approximately 40,000 keyframes in nuScenes.
To install nuScenes-lidarseg, please follow these steps:
- Download the dataset from the [Download page](https://www.nuscenes.org/download),
- Extract the `lidarseg` and `v1.0-*` folders to your nuScenes root directory (e.g. `/data/sets/nuscenes/lidarseg`, `/data/sets/nuscenes/v1.0-*`).
- Get the latest version of the nuscenes-devkit.
- If you already have a previous version of the devkit, update the pip requirements (see [details](https://github.com/nutonomy/nuscenes-devkit/blob/master/setup/installation.md)): `pip install -r setup/requirements.txt`
- Get started with the [tutorial](https://github.com/nutonomy/nuscenes-devkit/blob/master/python-sdk/tutorials/nuscenes_lidarseg_tutorial.ipynb).
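
After completing the steps above, a short sketch like the following should overlay the point cloud labels onto a camera image (the `show_lidarseg` flag belongs to the rendering methods added in this release; the sample index and channels are arbitrary examples):
```
from nuscenes.nuscenes import NuScenes

# Loading a version that ships the lidarseg tables also loads the lidarseg records.
nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes', verbose=True)

# Project the labeled lidar points of a keyframe into the front camera image.
my_sample = nusc.sample[0]
nusc.render_pointcloud_in_image(my_sample['token'],
                                pointsensor_channel='LIDAR_TOP',
                                camera_channel='CAM_FRONT',
                                show_lidarseg=True)
```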

### Prediction challenge
In March 2020 we released code for the nuScenes prediction challenge.
To get started:
- Download version 1.2 of the map expansion (see below).
- Download the trajectory sets for [CoverNet](https://arxiv.org/abs/1911.10298) from [here](https://www.nuscenes.org/public/nuscenes-prediction-challenge-trajectory-sets.zip).
- Go through the [prediction tutorial](https://github.com/nutonomy/nuscenes-devkit/blob/master/python-sdk/tutorials/prediction_tutorial.ipynb).
- For information on how submissions will be scored, visit the challenge [website](https://www.nuscenes.org/prediction).
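
For illustration, the prediction helpers can be used roughly as follows (the split name, data root and 6-second horizon are placeholder choices):
```
from nuscenes.nuscenes import NuScenes
from nuscenes.prediction import PredictHelper
from nuscenes.eval.prediction.splits import get_prediction_challenge_split

# Wrap an initialized NuScenes instance with the prediction helper.
nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes')
helper = PredictHelper(nusc)

# Prediction tokens have the form '<instance_token>_<sample_token>'.
mini_train = get_prediction_challenge_split('mini_train', dataroot='/data/sets/nuscenes')
instance_token, sample_token = mini_train[0].split('_')

# Ground-truth future trajectory of this agent over the next 6 seconds, in the agent frame.
future = helper.get_future_for_agent(instance_token, sample_token, seconds=6, in_agent_frame=True)
```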

## CAN bus expansion
### CAN bus expansion
In February 2020 we published the CAN bus expansion.
It contains low-level vehicle data about the vehicle route, IMU, pose, steering angle feedback, battery, brakes, gear position, signals, wheel speeds, throttle, torque, solar sensors, odometry and more.
To install this expansion, please follow these steps:
- Download the expansion from the [Download page](https://www.nuscenes.org/download),
- Move the can_bus folder to your nuScenes root directory (e.g. `/data/sets/nuscenes/can_bus`).
- Extract the can_bus folder to your nuScenes root directory (e.g. `/data/sets/nuscenes/can_bus`).
- Get the latest version of the nuscenes-devkit.
- If you already have a previous version of the devkit, update the pip requirements (see [details](https://github.com/nutonomy/nuscenes-devkit/blob/master/setup/installation.md)): `pip install -r setup/requirements.txt`
- If you already have a previous version of the devkit, update the pip requirements (see [details](https://github.com/nutonomy/nuscenes-devkit/blob/master/docs/installation.md)): `pip install -r setup/requirements.txt`
- Get started with the [CAN bus readme](https://github.com/nutonomy/nuscenes-devkit/blob/master/python-sdk/nuscenes/can_bus/README.md) or [tutorial](https://github.com/nutonomy/nuscenes-devkit/blob/master/python-sdk/tutorials/can_bus_tutorial.ipynb).
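
Once installed, the expansion is accessed through its own API, roughly as sketched below (the scene name is an arbitrary example):
```
from nuscenes.can_bus.can_bus_api import NuScenesCanBus

# Point the CAN bus API at the same nuScenes data root.
nusc_can = NuScenesCanBus(dataroot='/data/sets/nuscenes')

# Retrieve all vehicle_monitor messages (speed, steering, etc.) for one scene.
messages = nusc_can.get_messages('scene-0001', 'vehicle_monitor')
```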

## Map expansion
### Map expansion
In July 2019 we published a map expansion with 11 semantic layers (crosswalk, sidewalk, traffic lights, stop lines, lanes, etc.).
To install this expansion, please follow these steps:
- Download the expansion from the [Download page](https://www.nuscenes.org/download),
- Move the .json files to your nuScenes `maps` folder.
- Extract the .json files to your nuScenes `maps` folder.
- Get the latest version of the nuscenes-devkit.
- If you already have a previous version of the devkit, update the pip requirements (see [details](https://github.com/nutonomy/nuscenes-devkit/blob/master/setup/installation.md)): `pip install -r setup/requirements.txt`
- If you already have a previous version of the devkit, update the pip requirements (see [details](https://github.com/nutonomy/nuscenes-devkit/blob/master/docs/installation.md)): `pip install -r setup/requirements.txt`
- Get started with the [map expansion tutorial](https://github.com/nutonomy/nuscenes-devkit/blob/master/python-sdk/tutorials/map_expansion_tutorial.ipynb).
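
Once installed, the map API can be used roughly as follows (the map name and layer selection are arbitrary examples):
```
from nuscenes.map_expansion.map_api import NuScenesMap

# Load one of the four map locations.
nusc_map = NuScenesMap(dataroot='/data/sets/nuscenes', map_name='singapore-onenorth')

# Render a couple of semantic layers for a quick visual check.
fig, ax = nusc_map.render_layers(['drivable_area', 'walkway'])
```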

## Devkit setup
The devkit is tested for Python 3.6 and Python 3.7.
To install Python, please check [here](https://github.com/nutonomy/nuscenes-devkit/blob/master/setup/installation.md#install-python).

Our devkit is available and can be installed via [pip](https://pip.pypa.io/en/stable/installing/) :
```
pip install nuscenes-devkit
```
For an advanced installation, see [installation](https://github.com/nutonomy/nuscenes-devkit/blob/master/setup/installation.md) for detailed instructions.

## Getting started
### Getting started with nuScenes
Please follow these steps to make yourself familiar with the nuScenes dataset:
- Read the [dataset description](https://www.nuscenes.org/overview).
- [Explore](https://www.nuscenes.org/explore/scene-0011/0) the lidar viewer and videos.
- Read the [dataset description](https://www.nuscenes.org/nuscenes#overview).
- [Explore](https://www.nuscenes.org/nuscenes#explore) the lidar viewer and videos.
- [Download](https://www.nuscenes.org/download) the dataset.
- Get the [nuscenes-devkit code](https://github.com/nutonomy/nuscenes-devkit).
- Read the [online tutorial](https://www.nuscenes.org/tutorial) or run it yourself using:
- Read the [online tutorial](https://www.nuscenes.org/nuscenes#tutorials) or run it yourself using:
```
jupyter notebook $HOME/nuscenes-devkit/python-sdk/tutorials/nuscenes_basics_tutorial.ipynb
jupyter notebook $HOME/nuscenes-devkit/python-sdk/tutorials/nuscenes_tutorial.ipynb
```
- Read the [nuScenes paper](https://www.nuscenes.org/publications) for a detailed analysis of the dataset.
- Run the [map expansion tutorial](https://github.com/nutonomy/nuscenes-devkit/blob/master/python-sdk/tutorials/map_expansion_tutorial.ipynb).
- Take a look at the [experimental scripts](https://github.com/nutonomy/nuscenes-devkit/tree/master/python-sdk/nuscenes/scripts).
- For instructions related to the object detection task (results format, classes and evaluation metrics), please refer to [this readme](https://github.com/nutonomy/nuscenes-devkit/blob/master/python-sdk/nuscenes/eval/detection/README.md).
- See the [database schema](https://github.com/nutonomy/nuscenes-devkit/blob/master/schema.md) and [annotator instructions](https://github.com/nutonomy/nuscenes-devkit/blob/master/instructions.md).
- See the [FAQs](https://github.com/nutonomy/nuscenes-devkit/blob/master/faqs.md).
- See the [database schema](https://github.com/nutonomy/nuscenes-devkit/blob/master/docs/schema_nuscenes.md) and [annotator instructions](https://github.com/nutonomy/nuscenes-devkit/blob/master/docs/instructions_nuscenes.md).
- See the [FAQs](https://github.com/nutonomy/nuscenes-devkit/blob/master/docs/faqs.md).

## Known issues
Great care has been taken to collate the nuScenes dataset and many users have praised the quality of the data and annotations.
@@ -109,7 +159,7 @@ However, some minor issues remain:
- A small number of 3d bounding boxes is annotated despite the object being temporarily occluded. For this reason we make sure to **filter objects without lidar or radar points** in the nuScenes benchmarks. See [issue 366](https://github.com/nutonomy/nuscenes-devkit/issues/366).

## Citation
Please use the following citation when referencing [nuScenes](https://arxiv.org/abs/1903.11027):
Please use the following citation when referencing [nuScenes or nuImages](https://arxiv.org/abs/1903.11027):
```
@article{nuscenes2019,
title={nuScenes: A multimodal dataset for autonomous driving},
9 changes: 3 additions & 6 deletions faqs.md → docs/faqs.md
@@ -6,17 +6,14 @@ On this page we try to answer questions frequently asked by our users.
- For issues and bugs *with the devkit*, file an issue on [Github](https://github.com/nutonomy/nuscenes-devkit/issues).
- For any other questions, please post in the [nuScenes user forum](https://forum.nuscenes.org/).

- Can I use nuScenes for free?
- For non-commercial use [nuScenes is free](https://www.nuscenes.org/terms-of-use), e.g. for educational use and some research use.
- Can I use nuScenes and nuImages for free?
- For non-commercial use [nuScenes and nuImages are free](https://www.nuscenes.org/terms-of-use), e.g. for educational use and some research use.
- For commercial use please contact [nuScenes@nuTonomy.com](mailto:nuScenes@nuTonomy.com). To allow startups to use our dataset, we adjust the pricing terms to the use case and company size.

- How can I participate in the nuScenes challenges?
- See the overview site for the [object detection challenge](https://www.nuscenes.org/object-detection).
- See the overview site for the [tracking challenge](https://www.nuscenes.org/tracking).

- What's next for nuScenes?
- Raw IMU & GPS data.
- Object detection, tracking and other challenges (see above).
- See the overview site for the [prediction challenge](https://www.nuscenes.org/prediction).

- How can I get more information on the sensors used?
- Read the [Data collection](https://www.nuscenes.org/data-collection) page.
38 changes: 28 additions & 10 deletions setup/installation.md → docs/installation.md
@@ -1,13 +1,14 @@
# Advanced Installation
We provide step-by-step instructions to install our devkit.
We provide step-by-step instructions to install our devkit. These instructions apply to both nuScenes and nuImages.
- [Download](#download)
- [Install Python](#install-python)
- [Setup a Conda environment](#setup-a-conda-environment)
- [Setup a virtualenvwrapper environment](#setup-a-virtualenvwrapper-environment)
- [Setup PYTHONPATH](#setup-pythonpath)
- [Install required packages](#install-required-packages)
- [Setup environment variable](#setup-environment-variable)
- [Setup Matplotlib backend](#setup-matplotlib-backend)
- [Verify install](#verify-install)
- [Setup NUSCENES environment variable](#setup-nuscenes-environment-variable)

## Download

@@ -36,7 +37,7 @@ An alternative to Conda is to use virtualenvwrapper, as described [below](#setup
See the [official Miniconda page](https://conda.io/en/latest/miniconda.html).

#### Setup a Conda environment
We create a new Conda environment named `nuscenes`.
We create a new Conda environment named `nuscenes`. We will use this environment for both nuScenes and nuImages.
```
conda create --name nuscenes python=3.7
```
@@ -103,16 +104,33 @@ To install the required packages, run the following command in your favourite vi
```
pip install -r setup/requirements.txt
```
**Note:** The requirements file is internally divided into base requirements (`base`) and requirements specific to certain products or challenges (`nuimages`, `prediction` and `tracking`). If you only plan to use a subset of the codebase, feel free to comment out the lines that you do not need.

## Verify install
To verify your environment run `python -m unittest` in the `python-sdk` folder.
You can also run `assert_download.py` in the `nuscenes/scripts` folder.

## Setup NUSCENES environment variable
## Setup environment variable
Finally, if you want to run the unit tests you need to point the devkit to the `nuscenes` folder on your disk.
Set the NUSCENES environment variable to point to your data folder, e.g. `/data/sets/nuscenes`:
Set the NUSCENES environment variable to point to your data folder:
```
export NUSCENES="/data/sets/nuscenes"
```
or for NUIMAGES:
```
export NUIMAGES="/data/sets/nuimages"
```

## Setup Matplotlib backend
When using Matplotlib, it is generally recommended to define the backend used for rendering:
1) Under Ubuntu the default backend `Agg` results in any plot not being rendered by default. This does not apply inside Jupyter notebooks.
2) Under MacOSX a call to `plt.plot()` may fail with the following error (see [here](https://github.com/matplotlib/matplotlib/issues/13414) for more details):
```
libc++abi.dylib: terminating with uncaught exception of type NSException
```
To set the backend, add the following to your `~/.matplotlib/matplotlibrc` file, which needs to be created if it does not exist yet:
```
backend: TKAgg
```
## Verify install
To verify your environment run `python -m unittest` in the `python-sdk` folder.
You can also run `assert_download.py` in the `python-sdk/nuscenes/tests` and `python-sdk/nuimages/tests` folders to verify that all files are in the right place.
That's it, you should be good to go!