This source code was developed to let Python and its libraries communicate with the Unity Engine. This use case relies on Ultralytics' YOLOv8 and sends position information to Unity in order to drive interactions and animations.
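To give an idea of the data flow, here is a minimal sketch of what such a position stream from Python to Unity could look like, assuming a UDP socket and a JSON payload (the host, port, and message schema are illustrative assumptions, not necessarily the wrapper's actual protocol):

```python
# Minimal sketch of a Python -> Unity position stream.
# The UDP transport, endpoint, and JSON schema below are assumptions for illustration.
import json
import socket

UNITY_HOST, UNITY_PORT = "127.0.0.1", 5005  # hypothetical Unity listener endpoint

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_position(obj_id: int, x: float, y: float) -> None:
    """Send one detected object's normalized position to Unity."""
    payload = json.dumps({"id": obj_id, "x": x, "y": y})
    sock.sendto(payload.encode("utf-8"), (UNITY_HOST, UNITY_PORT))

send_position(0, 0.5, 0.5)  # e.g. an object detected at the center of the frame
```

On the Unity side, a small script would listen on the same port, parse the JSON, and update a GameObject's transform accordingly.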
- Unity 2022.3.13f1 or later
- Python 3.8 or later
- A webcam or a video file to test the project
- This installation guide has been tested on Linux and macOS, but it should also work on Windows with some modifications
Clone this public repo wherever you want:

```bash
git clone https://github.com/mathis-lambert/python-yolo-wrapper-for-unity-interactions yolo_with_unity
cd yolo_with_unity
```
This is not a Docker project for now, but it may become one later.
Initialize the project with these commands; this will create a virtual environment and install the dependencies:

```bash
cd python
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
make install  # make is not available on Windows; run install.bat instead (not tested)
```
Open the project with Unity 2022.3.13f1 or later. You can find it in the `UnityProject/` folder:
- Click on the ADD button in the Unity Hub and select the `UnityProject/` folder
- Select the project and click on the OPEN button
Open a terminal and run these commands; this will run the script `src/pywui/main.py` with the `--help` option to show you the available options:

```bash
cd python
source venv/bin/activate
pywui --help
```
Use your webcam:

```bash
pywui --source 0
```

Show the webcam output:

```bash
pywui --source 0 --show
```

Use a video file:

```bash
pywui --source path/to/video.mp4
```
NOTE: `pywui` only runs the script `main.py`. If you want to run a script from the `scripts/` folder, use `python` instead of `pywui`:

```bash
python scripts/the_script_you_want.py
```
Open a terminal and run these commands:

```bash
cd python
source venv/bin/activate  # On Windows: venv\Scripts\activate
pywui-test
```
The use case is simple: the Python script sends the position of each detected object to Unity, and Unity moves the corresponding object to that position.
Run the command:

```bash
pywui --model ./models/yolov8s-pose.pt --detect-method track --source 0 --conf 0.7 --filter
```
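Under the hood, `--detect-method track` with a pose model corresponds to Ultralytics' pose tracking. As a rough sketch of the equivalent raw Ultralytics call (an illustration, not the wrapper's actual code), it looks like this:

```python
# Sketch of YOLOv8 pose tracking with Ultralytics; an illustration, not the wrapper's code.
from ultralytics import YOLO

model = YOLO("yolov8s-pose.pt")  # the pose model passed via --model

# stream=True yields results frame by frame instead of buffering the whole source
for result in model.track(source=0, conf=0.7, stream=True):
    if result.keypoints is None:
        continue
    # result.keypoints.xy has shape (num_people, num_keypoints, 2)
    for person_id, keypoints in enumerate(result.keypoints.xy):
        # each row of `keypoints` is an (x, y) articulation position,
        # which is what gets forwarded to Unity to drive the articulation groups
        print(person_id, keypoints.tolist())
```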
- Open the scene `Assets/Scenes/Main.unity`
- Click on the play button
- You should see as many articulation groups as people detected by the Python script, and the articulations should follow the positions of the detected people
- Open the scene `Assets/Scenes/Articulation.unity`
- Click on the play button
- You should see as many articulation groups as people detected by the Python script, and the articulations should follow the positions of the detected people
- `python3` might not work; use `python` instead
- If you get an error with the `pywui` command, run `source venv/bin/activate` again, and then `pywui` should work
- If you get an error with the `pywui-test` command, run `source venv/bin/activate` again, and then `pywui-test` should work
If you want to collaborate on this open source repo, you're free to do so:
- Fork the repo
- Create your own branch
- Develop your feature
- Open a PR describing the purpose of your feature