cd ~/nautilus/third_party/
git clone https://github.com/pointonenav/jetson-inference/
Prerequisites:
sudo apt install libgstreamer-plugins-base1.0-dev
sudo apt install libgstreamer1.0-dev
sudo apt install libglew-dev
Installation for TensorRT on Ubuntu x86_64:
Download the appropriate TensorRT package for your OS (Ubuntu 16.04 or Ubuntu 18.04), then install it with one of the following:
sudo dpkg -i nv-tensorrt-repo-ubuntu1604-cuda10.1-trt5.1.5.0-ga-20190427_1-1_amd64.deb
OR
sudo dpkg -i nv-tensorrt-repo-ubuntu1804-cuda10.1-trt5.1.5.0-ga-20190427_1-1_amd64.deb
sudo apt-key add /var/nv-tensorrt-repo-cuda10.1-trt5.1.5.0-ga-20190427/7fa2af80.pub
sudo apt-get update
sudo apt-get install tensorrt
For more info: https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt-515/tensorrt-install-guide/index.html#installing-debian
If the tensorrt package fails to install, try installing the CUDA 10.1 local repo first:
wget https://developer.nvidia.com/compute/cuda/10.1/Prod/local_installers/cuda-repo-ubuntu1804-10-1-local-10.1.168-418.67_1.0-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1804-10-1-local-10.1.168-418.67_1.0-1_amd64.deb
Next, re-run:
sudo apt-get install tensorrt
sudo apt-get install cuda-10-1
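To confirm that TensorRT installed correctly, you can list the installed packages (a quick check, not part of the original instructions):
dpkg -l | grep TensorRT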
sudo ln -s /usr/lib/x86_64-linux-gnu/glib-2.0/include/glibconfig.h /usr/include/glib-2.0/
cd jetson-inference
git submodule update --init
sudo apt-get install libqt4-dev
cmake .
make -j4
cd data
./download_model.sh
There are two options for setting up the required software on the AGX Xavier for CV:
- The setup script setup_orb.sh, which is located in the tools directory of this repo. The downside of this method is that it can take a very long time when cloning multiple AGXs, as all of the downloading and compilation must be done locally on each AGX.
- The Xavier cloning procedure, which is described below. While it does take significant time to make a backup of the "master" AGX, clones can be flashed very quickly compared with the above method.
The host machine should be running Ubuntu 16.04 64-bit and have at least 56 GB of free disk space.
There must be one "master" AGX Xavier, with any desired configurations and compiled files on it. The "master" will be backed up to a network drive, which will then be used to make clones. The master must be on the same network as the host machine.
Install NFS on the host machine with the following command:
sudo apt-get install nfs-kernel-server
Install the NVIDIA JetPack SDK for AGX Xavier.
Make a directory that NFS will share. We will name the directory nfs-share and place it at the root of the filesystem, but the name and location are arbitrary as long as they remain consistent.
sudo mkdir /nfs-share
Open /etc/exports in a text editor and modify it as follows (here, using vim):
sudo vim /etc/exports
Add the following line to make the directory accessible by all IP addresses on the network:
/nfs-share *(rw,sync,no_root_squash,no_subtree_check)
Alternatively, if security is a concern, /etc/exports can be modified to give read and write privileges only to certain IP addresses. In our case, the IP address of each AGX Xavier must be known, and each should be added on a new line as follows (with the example IP addresses 192.168.1.100 and 192.168.1.200):
/nfs-share 192.168.1.100(rw,sync,no_root_squash,no_subtree_check)
/nfs-share 192.168.1.200(rw,sync,no_root_squash,no_subtree_check)
When all changes to /etc/exports are complete, issue the following command:
sudo systemctl restart nfs-kernel-server
This will implement the new NFS rules.
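You can also verify what the host is currently exporting (an optional check, not part of the original instructions):
sudo exportfs -v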
Make sure there are no firewalls enabled on the host by issuing the following command:
sudo ufw status
If ufw is installed, this command should return Status: inactive, indicating there are no active firewalls. Otherwise, disable ufw:
sudo ufw disable
All of the commands in this section should be executed on the "master" AGX Xavier.
Issue the following to install client-side NFS:
sudo apt-get install nfs-common
Create a mount point on the Xavier:
sudo mkdir /nfs
Mount the NFS share, replacing host_ip with the host's IP address on the network:
sudo mount host_ip:/nfs-share /nfs
Run df -h and there should be a line resembling the following:
Filesystem Size Used Avail Use% Mounted on
host_ip:/nfs-share XXXG XXG XXXG X% /nfs
In addition, you should be able to create a test file in the /nfs directory and have that file appear on the host.
sudo touch /nfs/testfile
If testfile appears on the host in /nfs-share/, then the configuration was successful. Delete this file after a successful configuration is confirmed.
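For example, on the Xavier:
sudo rm /nfs/testfile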
Using rsync, we clone the master AGX Xavier to the network drive (/nfs-share on the host):
sudo rsync -aAXv / --exclude={"/dev/*","/nfs/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} /nfs/
The command-line arguments -aAXv ensure that all symbolic links, devices, permissions, ownerships, modification times, ACLs, and extended attributes are preserved. Importantly, we must exclude the directory /nfs, or else rsync will run in an infinite loop, since this is the directory we are writing the AGX's filesystem to.
This process may take a long time.
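Because rsync copies incrementally, the same command can be re-run later to refresh the backup. Adding -n (dry run) first shows what would change without writing anything (an optional step, not part of the original procedure):
sudo rsync -aAXvn / --exclude={"/dev/*","/nfs/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} /nfs/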
To boot off the NFS, put the master AGX Xavier into recovery mode. See the Jetson AGX Xavier User Guide for instructions on how to do this. Once the AGX is in recovery mode and plugged into the host machine via the front USB-C port on the AGX, issue the following commands on the host machine:
cd $JETPACK_ROOT/JetPack_4.2_Linux_P2888/Linux_for_Tegra/
sudo ./flash.sh -N <ip-addr-of-linux-host>:/nfs-share --rcm-boot jetson-xavier eth0
This will make the master AGX boot off of the network.
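Tip: before running flash.sh, you can confirm that the host detects the Xavier in recovery mode over USB (an optional check, not part of the original procedure):
lsusb | grep -i nvidia
An NVIDIA entry should be listed while the AGX is in recovery mode.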
Now that the AGX is booted off of the NFS and not the eMMC, the rootfs is the NFS. Therefore, the actual eMMC is frozen, so we can run the following commands to generate a backup image:
sudo umount /dev/mmcblk0p1
sudo dd if=/dev/mmcblk0p1 of=/rootfs.img
The dd command may take a long time to execute, since it is creating an image of the entire root filesystem. Since we are booted off the NFS, when the dd command is finished, rootfs.img should appear as /nfs-share/rootfs.img on the host.
Verify that rootfs.img exists on the host, then shutdown the master AGX.
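For example, on the host:
ls -lh /nfs-share/rootfs.img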
On the host, issue the following:
cd $JETPACK_ROOT/JetPack_4.2_Linux_P2888/Linux_for_Tegra/bootloader/
sudo ./mksparse -v --fillpattern=0 /nfs-share/rootfs.img system.img
Place the AGX Xavier you would like to clone into recovery mode, and plug it into the host via the front USB-C port on the AGX. Then, issue the following:
cd $JETPACK_ROOT/JetPack_4.2_Linux_P2888/Linux_for_Tegra/
sudo ./flash.sh -r jetson-xavier mmcblk0p1
This will flash the AGX Xavier, creating a clone of the master AGX.
Repeat the instructions in this section to make as many clones as desired.
Welcome to our training guide for the inference and deep vision runtime library for NVIDIA DIGITS and Jetson Xavier/TX1/TX2.
This repo uses NVIDIA TensorRT for efficiently deploying neural networks onto the embedded platform, improving performance and power efficiency using graph optimizations, kernel fusion, and half-precision FP16 on the Jetson.
Vision primitives, such as imageNet for image recognition, detectNet for object localization, and segNet for semantic segmentation, inherit from the shared tensorNet object. Examples are provided for streaming from a live camera feed and processing images from disk. See the Deep Vision API Reference Specification for accompanying documentation.
There are multiple tracks of the tutorial that you can choose to follow, including Training + Inference or Inference-Only.
> Jetson Nano Developer Kit and JetPack 4.2 are now supported in the repo.
> See our technical blog, including benchmarks: Jetson Nano Brings AI Computing to Everyone.
If you would like to only do the inference portion of the tutorial, which can be run on your Jetson in roughly two hours, these modules are available below:
- Setting up Jetson with JetPack
- Building the Repo from Source
- Classifying Images with ImageNet
- Locating Object Coordinates using DetectNet
The full tutorial includes training and inference, and can take roughly two days or more depending on system setup, downloading the datasets, and the training speed of your GPU.
- DIGITS Workflow
- DIGITS System Setup
- Setting up Jetson with JetPack
- Building the Repo from Source
- Classifying Images with ImageNet
- Using the Console Program on Jetson
- Coding Your Own Image Recognition Program
- Running the Live Camera Recognition Demo
- Re-Training the Network with DIGITS
- Downloading Image Recognition Dataset
- Customizing the Object Classes
- Importing Classification Dataset into DIGITS
- Creating Image Classification Model with DIGITS
- Testing Classification Model in DIGITS
- Downloading Model Snapshot to Jetson
- Loading Custom Models on Jetson
- Locating Object Coordinates using DetectNet
- Detection Data Formatting in DIGITS
- Downloading the Detection Dataset
- Importing the Detection Dataset into DIGITS
- Creating DetectNet Model with DIGITS
- Testing DetectNet Model Inference in DIGITS
- Downloading the Detection Model to Jetson
- DetectNet Patches for TensorRT
- Detecting Objects from the Command Line
- Multi-class Object Detection Models
- Running the Live Camera Detection Demo on Jetson
- Semantic Segmentation with SegNet
Links and resources for deep learning developers are listed below:
- Appendix
- ros_deep_learning - TensorRT inference ROS nodes
- NVIDIA AI IoT - NVIDIA Jetson GitHub repositories
- Jetson eLinux Wiki
Training GPU: Maxwell, Pascal, Volta, or Turing-based GPU (ideally with at least 6GB video memory); optionally, an AWS P2/P3 instance or Microsoft Azure N-series. OS: Ubuntu 14.04 x86_64 or Ubuntu 16.04 x86_64.
Deployment: Jetson Xavier Developer Kit with JetPack 4.0 or newer (Ubuntu 18.04 aarch64).
Jetson TX2 Developer Kit with JetPack 3.0 or newer (Ubuntu 16.04 aarch64).
Jetson TX1 Developer Kit with JetPack 2.3 or newer (Ubuntu 16.04 aarch64).
note: this branch is verified against the following BSP versions for Jetson AGX Xavier and Jetson TX1/TX2:
> Jetson Nano - JetPack 4.2 / L4T R32.1 aarch64 (Ubuntu 18.04 LTS) inc. TensorRT 5.0
> Jetson AGX Xavier - JetPack 4.2 / L4T R32.1 aarch64 (Ubuntu 18.04 LTS) inc. TensorRT 5.0
> Jetson AGX Xavier - JetPack 4.1.1 DP / L4T R31.1 aarch64 (Ubuntu 18.04 LTS) inc. TensorRT 5.0 GA
> Jetson AGX Xavier - JetPack 4.1 DP EA / L4T R31.0.2 aarch64 (Ubuntu 18.04 LTS) inc. TensorRT 5.0 RC
> Jetson AGX Xavier - JetPack 4.0 DP EA / L4T R31.0.1 aarch64 (Ubuntu 18.04 LTS) inc. TensorRT 5.0 RC
> Jetson TX2 - JetPack 4.2 / L4T R32.1 aarch64 (Ubuntu 18.04 LTS) inc. TensorRT 5.0
> Jetson TX2 - JetPack 3.3 / L4T R28.2.1 aarch64 (Ubuntu 16.04 LTS) inc. TensorRT 4.0
> Jetson TX1 - JetPack 3.3 / L4T R28.2 aarch64 (Ubuntu 16.04 LTS) inc. TensorRT 4.0
> Jetson TX2 - JetPack 3.2 / L4T R28.2 aarch64 (Ubuntu 16.04 LTS) inc. TensorRT 3.0
> Jetson TX2 - JetPack 3.1 / L4T R28.1 aarch64 (Ubuntu 16.04 LTS) inc. TensorRT 3.0 RC
> Jetson TX1 - JetPack 3.1 / L4T R28.1 aarch64 (Ubuntu 16.04 LTS) inc. TensorRT 3.0 RC
> Jetson TX2 - JetPack 3.1 / L4T R28.1 aarch64 (Ubuntu 16.04 LTS) inc. TensorRT 2.1
> Jetson TX1 - JetPack 3.1 / L4T R28.1 aarch64 (Ubuntu 16.04 LTS) inc. TensorRT 2.1
> Jetson TX2 - JetPack 3.0 / L4T R27.1 aarch64 (Ubuntu 16.04 LTS) inc. TensorRT 1.0
> Jetson TX1 - JetPack 2.3 / L4T R24.2 aarch64 (Ubuntu 16.04 LTS) inc. TensorRT 1.0
> Jetson TX1 - JetPack 2.3.1 / L4T R24.2.1 aarch64 (Ubuntu 16.04 LTS)
Note that the TensorRT samples from the repo are intended for deployment onboard Jetson; however, when cuDNN and TensorRT have been installed on the host side, the TensorRT samples in the repo can be compiled for PC.