- yaml config file, remove globals
- move all utils to my_utils
- the agent should contain all the conditionals and control the vehicle when in a mission. Should it own the while True loop?
- rename the GitHub project from Deteccion_conos to um_driverless
- find TODOs and fix them
- check the delay between the simulator and the processed image (response time)
- knowing the pickling error, try moving visualization to a thread (see the sketch below)
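For the last item, a minimal sketch of thread-based visualization, assuming OpenCV and a queue of annotated frames (none of these names come from the repo): a thread shares memory with the detection loop, so nothing needs to be pickled, unlike a separate process.

```python
# Minimal sketch (not the project's code): push annotated frames to a visualization
# thread instead of a separate process, so nothing needs to be pickled.
# Note: some OpenCV builds only allow imshow on the main thread; if so, swap the roles.
import threading
import queue
import time

import cv2
import numpy as np

frame_queue = queue.Queue(maxsize=2)   # small buffer so the detector never blocks for long
stop_event = threading.Event()

def visualizer():
    while not stop_event.is_set():
        try:
            frame = frame_queue.get(timeout=0.1)
        except queue.Empty:
            continue
        cv2.imshow("detections", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            stop_event.set()
    cv2.destroyAllWindows()

threading.Thread(target=visualizer, daemon=True).start()

# Stand-in for the detection loop: replace the random image with the annotated frame.
for _ in range(300):
    frame = (np.random.rand(360, 640, 3) * 255).astype(np.uint8)
    if not frame_queue.full():
        frame_queue.put(frame)
    time.sleep(0.03)
stop_event.set()
```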
- TODO
- UM-Driverless Contents
- INSTALL SIMULATOR
- Notes
- NVIDIA JETSON XAVIER NX SETUP
- KVASER Setup in Ubuntu
- conda
- Client for cone detection in the simulator
- Old stuff
We won't use Conda: it isn't necessary, and juggling several Python versions has caused problems. Conda also can't install all the packages we need, so some packages would end up installed with pip and others with conda, and it caused problems with Docker as well.
-
First apt installs
sudo apt update && sudo apt upgrade -y  #; spd-say "I finished the update"
sudo apt install curl nano git pip python3 zstd  # zstd is a ZED dependency
pip install --upgrade pip  #; spd-say "Finished the installs"
-
Clone the GitHub directory:
git clone https://github.com/UM-Driverless/Deteccion_conos.git
-
Install the requirements (for yolo network and for our scripts)
cd ~/Deteccion_conos
pip install -r {requirements_file_name}.txt  # yolo_requirements.txt or requirements.txt
-
[OPTIONAL] If you want to modify the weights, include the weights folder in:
"yolov5/weights/yolov5_models"
-
ZED Camera Installation.
- Download the SDK for your CUDA version and system (Ubuntu, NVIDIA Jetson Xavier JetPack, ...). If the installer doesn't find the CUDA version the SDK needs, it installs it; once CUDA is detected, the installation continues.
- Add execute permissions:
sudo chmod 777 {FILENAME}
- Run it without sudo (you can copy the filename and paste it into the terminal with Ctrl+Shift+V; tab completion doesn't seem to work for the filename):
sh {FILENAME}.run
- Accept the defaults to install CUDA, the static version of the SDK, the AI module, the samples, and the Python API. The diagnostic is not required.
- It should now be installed in the default installation path:
/usr/local/zed
- To get the Python API (otherwise pyzed won't be installed and imports will fail); a quick check follows below:
python3 /usr/local/zed/get_python_api.py
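A quick way to confirm the Python API works is a minimal open/close check like the sketch below (assumes pyzed was installed by the script above and a ZED camera is connected):

```python
# Sanity check: the ZED Python API (pyzed) imports and the camera opens.
import pyzed.sl as sl

zed = sl.Camera()
init_params = sl.InitParameters()      # default resolution/depth settings
status = zed.open(init_params)
if status != sl.ERROR_CODE.SUCCESS:
    print("Failed to open ZED camera:", status)
else:
    print("ZED camera opened correctly")
    zed.close()
```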
-
To make sure you are using the GPU (you should get "IS CUDA AVAILABLE? : True"):
- Check what GPU driver you should install: https://www.nvidia.co.uk/Download/index.aspx?lang=en-uk
- Check which GPU driver you currently have in Software & Updates (e.g., X.Org -> nvidia-driver-515).
- If there are errors, reinstall the driver from scratch:
sudo apt-get remove --purge nvidia-* -y
sudo apt autoremove
sudo ubuntu-drivers autoinstall
sudo service lightdm restart
sudo apt install nvidia-driver-525 nvidia-dkms-525
sudo reboot
-
To check all cuda versions installed
dpkg -l | grep -i cuda
-
You can check the CUDA version compatible with the graphics driver using
nvidia-smi
or the built-in app on Xavier or Orin modules.
-
To check the cuda version of the installed compiler, use
/usr/local/cuda/bin/nvcc --version
-
Now PyTorch should use the same CUDA version as the ZED SDK. Check this: https://www.stereolabs.com/docs/pytorch/ (a quick check follows below)
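A minimal check of the PyTorch/CUDA pairing (this only uses standard torch calls):

```python
# Check that PyTorch was built with CUDA and can see the GPU.
import torch

print("IS CUDA AVAILABLE? :", torch.cuda.is_available())
print("PyTorch CUDA build:", torch.version.cuda)
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```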
-
You should be able to run:
python3 main.py
- To help debug if something fails:
sudo apt-get install python3-tk
- To install cuda manually: https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=22.04&target_type=deb_network
Go to https://github.com/FS-Driverless/Formula-Student-Driverless-Simulator/releases and download the latest version. This is an executable that runs the simulator; it can be stored and run from anywhere. To connect it to the Python code, clone the repo into the same folder as Deteccion_conos. Python examples are available there.
Test program
# This code adds the fsds package to the Python path.
# It assumes the fsds repo is cloned in the home directory.
# Replace fsds_lib_path with the path to wherever the python directory is located.
import sys, os
# fsds_lib_path = os.path.join(os.path.expanduser("~"), "Formula-Student-Driverless-Simulator", "python")
fsds_lib_path = os.path.join(os.getcwd(), "python")
print('FSDS python path:', fsds_lib_path)
sys.path.insert(0, fsds_lib_path)
import time
import fsds
# Connect to the AirSim-based simulator
client = fsds.FSDSClient()
# Check the network connection
client.confirmConnection()
# After enabling API control, only the API can control the car.
# Direct keyboard and joystick input into the simulator is disabled.
# If you still want to be able to drive with the keyboard while also
# controlling the car through the API, call client.enableApiControl(False).
client.enableApiControl(True)
# Instruct the car to go full-speed forward
car_controls = fsds.CarControls()
car_controls.throttle = 1
client.setCarControls(car_controls)
time.sleep(5)
# Place the vehicle back at its original position
client.reset()
To use it, first run the fsds-... executable, click "Run simulation", then run the Python code.
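Beyond full throttle, the same API can steer and brake. The sketch below assumes fsds.CarControls also exposes steering and brake fields (as in AirSim, which FSDS is based on); treat it as an illustration, not verified project code.

```python
# Sketch: drive forward while turning, then brake and reset.
# Assumes the fsds package is on the path as in the test program above.
import time
import fsds

client = fsds.FSDSClient()
client.confirmConnection()
client.enableApiControl(True)

controls = fsds.CarControls()
controls.throttle = 0.3
controls.steering = -0.2      # assumed field; range and sign convention follow AirSim
client.setCarControls(controls)
time.sleep(3)

controls.throttle = 0.0
controls.brake = 1.0          # assumed field
client.setCarControls(controls)
time.sleep(2)

client.reset()
```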
- To use CAN communication with the NVIDIA Jetson Orin, the CAN bus has to be working properly and connected when the Orin powers on, and there has to be at least one other device on the bus to acknowledge messages. A minimal send/receive sketch follows below.
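A minimal send/receive sketch over SocketCAN using the python-can package (an assumption; the repo's own CAN code may use a different library). It expects can0 to be up already and at least one other node on the bus to acknowledge the frame.

```python
# Minimal SocketCAN sketch with python-can (pip install python-can).
# Assumes can0 is already configured and up, and another node is on the bus.
import can

bus = can.interface.Bus(channel="can0", bustype="socketcan")

# Send one frame (ID and payload are placeholders, not the car's real messages)
msg = can.Message(arbitration_id=0x123, data=[0x01, 0x02, 0x03], is_extended_id=False)
bus.send(msg)

# Wait up to 1 s for any frame on the bus
reply = bus.recv(timeout=1.0)
print("Received:", reply)

bus.shutdown()
```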
TODO Testing with Jetpack 5.1
-
Start here to install the OS: https://developer.nvidia.com/embedded/learn/get-started-jetson-xavier-nx-devkit
- Takes about 1h.
- Prepare: an SD card with >=32 GB, a way to connect it to a computer (SD-to-USB adapter), and a fast internet connection.
- First download the Jetson Xavier NX Developer Kit SD Card Image. Older versions here.
- JetPack 5.1 is the latest version. JetPack 5.0.2 is the latest with docker pytorch installation available, and it's the one we've used.
- JetPack 4.5.1 works with Pytorch 1.8 according to https://cognitivexr.at/blog/2021/03/11/installing-pytorch-and-yolov5-on-an-nvidia-jetson-xavier-nx.html
- Then you'll be asked to install "SD Card formatter" and "Etcher"
- Follow the tutorial for the rest
-
Set the power mode (top right in the task bar) to max
-
First apt installs
sudo apt update && sudo apt upgrade -y
sudo apt install curl nano git zstd  # zstd is a ZED dependency
-
Clone the GitHub directory:
git clone https://github.com/UM-Driverless/Deteccion_conos.git; #spd-say "Done cloning the repository"
-
To make Bluetooth work:
- Open the following file: $ sudo vim /lib/systemd/system/bluetooth.service.d/nv-bluetooth-service.conf
- Search for the line: ExecStart=/usr/lib/bluetooth/bluetoothd -d --noplugin=audio,a2dp,avrcp
- Remove all the --noplugin options. It should end up looking like: ExecStart=/usr/lib/bluetooth/bluetoothd -d
-
Follow the tutorial: https://cognitivexr.at/blog/2021/03/11/installing-pytorch-and-yolov5-on-an-nvidia-jetson-xavier-nx.html
- Script that automatically installs everything
curl https://raw.githubusercontent.com/cognitivexr/edge-node/main/scripts/setup-xavier.sh | bash
- It will have Jetpack 4.5.1, Pytorch 1.8, TensorRT 7.1.3, Cuda 10.2
- To solve "Illegal instruction (core dumped)" (an issue with numpy and OpenBLAS):
pip3 install -U "numpy==1.19.4"
-
Install ZED camera drivers
- ZED SDK for L4T 35.1 (Jetpack 5.0)
- (https://www.stereolabs.com/developers/release/)
- Python API
- (Test record example: https://github.com/SusanaPineda/utils_zed/blob/master/capture_loop.py; a minimal capture loop sketch follows below)
- If shared library error:
sudo apt install libturbojpeg0-dev
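In the spirit of the capture_loop.py example linked above, a minimal grab loop (a sketch assuming pyzed is installed and a ZED camera is connected):

```python
# Minimal ZED capture loop: open the camera and grab a few left-eye frames as numpy arrays.
import pyzed.sl as sl

zed = sl.Camera()
init_params = sl.InitParameters()
if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise SystemExit("Could not open the ZED camera")

image = sl.Mat()
runtime = sl.RuntimeParameters()
for i in range(10):
    if zed.grab(runtime) == sl.ERROR_CODE.SUCCESS:
        zed.retrieve_image(image, sl.VIEW.LEFT)   # left RGB image
        frame = image.get_data()                  # numpy array (H, W, 4), BGRA
        print("frame", i, frame.shape)
zed.close()
```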
-
Startup script (sets up all the programs on startup)
- Add to Startup Applications: "python3 startup_script.py" (a rough sketch of such a launcher follows below)
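The actual startup_script.py lives in the repo; purely as an illustration of a startup launcher, here is a hypothetical sketch that starts main.py on boot, logs its output, and restarts it if it exits (paths and log file name are assumptions):

```python
# Hypothetical sketch of a startup launcher (the real startup_script.py may differ):
# launch the detection pipeline on boot and keep a simple log of its output.
import subprocess
import time
from pathlib import Path

REPO = Path.home() / "Deteccion_conos"   # assumed clone location
LOG = Path.home() / "startup_log.txt"    # assumed log file name

with open(LOG, "a") as log:
    log.write(f"--- startup at {time.ctime()} ---\n")
    log.flush()
    # Restart the pipeline if it ever exits
    while True:
        proc = subprocess.run(
            ["python3", "main.py"],
            cwd=REPO,
            stdout=log,
            stderr=subprocess.STDOUT,
        )
        log.write(f"main.py exited with code {proc.returncode}, restarting in 5 s\n")
        log.flush()
        time.sleep(5)
```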
-
(CAN: https://medium.com/@ramin.nabati/enabling-can-on-nvidia-jetson-xavier-developer-kit-aaaa3c4d99c9)
-
To use:
- First plug power, then the HDMI port, because otherwise it doesn't turn on
- Don't use the upper left USB-A port for high speed (ZED camera). It's 2.0 while the others are 3.1
KVASER Setup in Ubuntu
- Reference: https://www.kvaser.com/linux-drivers-and-sdk/
- Video: https://www.youtube.com/watch?v=Gz-lIVIU7ys
- SDK: https://www.kvaser.com/downloads-kvaser/?utm_source=software&utm_ean=7330130980754&utm_status=latest
tar -xvzf linuxcan.tar.gz
sudo apt-get install build-essential
sudo apt-get install linux-headers-`uname -r`
In linuxcan, and linuxcan/canlib, run:
make
sudo make install
In linuxcan/common, run:
make
sudo ./installscript.sh
To get the Python API (a usage sketch follows after the debug note):
pip3 install canlib
To DEBUG:
make KV_Debug_ON=1
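With the canlib Python package installed, a short sketch like the one below exercises the API (channel number, bitrate, ID, and payload are placeholders):

```python
# Minimal Kvaser canlib sketch: open channel 0, send one frame, read one frame.
from canlib import canlib, Frame

ch = canlib.openChannel(0, canlib.Open.ACCEPT_VIRTUAL)
ch.setBusOutputControl(canlib.Driver.NORMAL)
ch.setBusParams(canlib.canBITRATE_500K)
ch.busOn()

# Placeholder ID/data, not the car's real CAN messages
ch.write(Frame(id_=0x123, data=bytearray([0x01, 0x02, 0x03])))

try:
    frame = ch.read(timeout=1000)   # milliseconds
    print("Received:", frame)
except canlib.canNoMsg:
    print("No message received within 1 s")

ch.busOff()
ch.close()
```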
- https://docs.conda.io/projects/conda/en/latest/user-guide/install/linux.html
- Create a conda environment:
conda create -n formula -y # To remove it: # conda env remove -n formula
- Activate the environment
conda activate formula
- Update the compiler
conda install -c conda-forge gcc=12.1.0 # Otherwise zed library throws error: version `GLIBCXX_3.4.30' not found
This client works together with the simulator developed at https://github.com/AlbaranezJavier/UnityTrainerPy. To get it running you only need to follow the instructions in that repository to start the simulator and then run the client found in the file /PyUMotorsport/main_cone_detection.py
The neural network weights for main.py are available at the following link: https://drive.google.com/file/d/1H-KOYKMu6KM3g8ENCnYPSPTvb6zVnnFX/view?usp=sharing Unzip the file into the folder: /PyUMotorsport/cone_detection/saved_models/
The neural network weights for main_2.py are available at the following link: https://drive.google.com/file/d/1NFDBKxpRcfPs8PV3oftLya_M9GxW8O5h/view?usp=sharing Unzip the file into the folder: /PyUMotorsport_v2/ObjectDetectionSegmentation/DetectionData/
Go to canlib/examples
./listChannels
./canmonitor 0
make
sudo ./installscript.sh
Create your virtual environment with Python 3.8 and activate it:
conda create -n formula python=3.8
conda activate formula
#conda install tensorflow-gpu
Next, install the TensorFlow object detection Model Zoo.
If you don't have the models/research/ folder yet:
git clone --depth 1 https://github.com/tensorflow/models
Once you have the models/research/ folder:
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
cp object_detection/packages/tf2/setup.py .
python -m pip install .
Update the Xavier to run YOLOv5 (06/2022)
git clone https://github.com/UM-Driverless/Deteccion_conos.git
cd Deteccion_conos
pip3 install -r yolov5/yolo_requeriments.txt
sh can_scripts/enable_CAN.sh
python3 car_actuator_testing_zed_conect_yolo.py
- Try using a preconfigured JetPack 5.0.2 PyTorch Docker container, with all the dependencies and versions already solved: https://blog.roboflow.com/deploy-yolov5-to-jetson-nx/
- Register on the Docker website
- Login. If it doesn't work, reboot and try again.
docker login
- Take the tag of a container from here: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-pytorch . For example, for JetPack 5.0.2 (L4T R35.1.0) it's
l4t-pytorch:r35.1.0-pth1.13-py3
- Pull container
# For l4t-pytorch:r35.1.0-pth1.13-py3:
sudo docker pull nvcr.io/nvidia/l4t-pytorch:r35.1.0-pth1.13-py3
- Run container
# Will download about 10 GB
sudo docker run -it --rm --runtime nvidia --network host nvcr.io/nvidia/l4t-pytorch:r35.1.0-pth1.13-py3
- TODO FINISH
(Install visual studio, pycharm, telegram, ...)