We are working under the Perception and Robotics Group (PRG) at UMD to use OpenAI's ChatGPT for applications in robotics. We are building a high-level function library that ChatGPT can control and use to carry out complex tasks which would otherwise require human intervention. This is an ongoing project.
- Note: Add Git LFS tracking in the .gitattributes file, or via the terminal, for any Python files larger than 50 MB.
- Note: Use this Google Drive link to download the blender_data folder to avoid LFS.
- 2D bounding box of objects from Blender 2.93
- Integrate IMU with Blender
- Integrate LiDAR/SONAR with Blender
- Train YOLO on the data generated from Blender
- Rover position data with detections on PCL
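The 2D bounding-box feature above boils down to projecting an object's 3D corners into image space and taking the min/max of the results. The sketch below shows that core computation for a hypothetical pinhole camera at the origin looking down -Z; inside Blender you would instead use `bpy_extras.object_utils.world_to_camera_view` on the object's bounding-box corners.

```python
# Hypothetical sketch of the math behind a 2D bounding box of a 3D object.
# Assumes a simple pinhole camera at the origin looking down -Z; in Blender,
# bpy_extras.object_utils.world_to_camera_view does the projection for you.

def project_point(point, focal_length=1.0):
    """Project a 3D camera-space point (x, y, z) onto the image plane."""
    x, y, z = point
    if z >= 0:
        raise ValueError("point is behind the camera")
    return (focal_length * x / -z, focal_length * y / -z)

def bbox_2d(corners_3d, focal_length=1.0):
    """Return (min_x, min_y, max_x, max_y) of the projected 3D corners."""
    projected = [project_point(c, focal_length) for c in corners_3d]
    xs = [p[0] for p in projected]
    ys = [p[1] for p in projected]
    return (min(xs), min(ys), max(xs), max(ys))

# A unit cube centred 5 units in front of the camera.
cube = [(x, y, z) for x in (-0.5, 0.5) for y in (-0.5, 0.5) for z in (-5.5, -4.5)]
print(bbox_2d(cube))
```

The near face of the cube dominates the box, since closer points project farther from the image centre.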
- Colab notebook used to train the YOLOv4-tiny; find it here
- Modified the Colab notebook provided here
- We trained a YOLOv4-tiny on a dataset of around 5000 images
- Download the model's best weights file from here
- Copy the model weights in here
The API library functions are written in the chat_script/func.py file:
- get_bot_position() - Returns the position of the robot as a tuple of x, y, z coordinates, referred to as points.
- get_position(obj_name) - Returns the position (as points) of any object whose name is passed to the function.
- set_bot_motion(points) - Moves the robot to the given points at a certain time in the future.
- set_yaw(angle) - Sets the yaw angle for the bot.
- set_pitch(angle) - Sets the pitch angle for the bot.
- set_roll(angle) - Sets the roll angle for the bot.
- Download Blender and Python. This currently works with Blender 3.4.1 and Python 3.11.4.
- Download this, then replace the blender_data folder in the /data directory with the blender_data folder from Google Drive. Do the same with the .blend file and /code/underwater_scene_for_aerial_image. Note that blender_data is not necessary to run the simulation, but it is necessary when adding models to the simulation.
- Create a Blender terminal to display CLI output and errors, like this. You can also create a desktop shortcut on macOS if you would like.
- Once Blender has started, go to File > Open and open the .blend file in the project.
Each version of Blender supports a corresponding version of Python. Look here for more info.
Go to Edit > Preferences > Paths, then go to the Data subsection, where you will see a Scripts path. Make sure it corresponds to your working directory.
Blender searches through a list of paths when looking for modules to load; you can inspect it with `import sys` followed by `print(sys.path)`. You can either place your modules on those paths or load a module from an explicit file location:

```python
import importlib.util
import sys

spec = importlib.util.spec_from_file_location("Simulate", "/Users/aadipalnitkar/Underwater-share/code/Simulate.py")
simulate = importlib.util.module_from_spec(spec)
sys.modules["Simulate"] = simulate
spec.loader.exec_module(simulate)
```
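The same importlib pattern can be tried outside Blender. This self-contained demonstration writes a throwaway module to a temporary directory instead of using the project's Simulate.py:

```python
# Demonstrates loading a module from an explicit file path with importlib,
# using a temporary file rather than a real project module.
import importlib.util
import os
import sys
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "demo_module.py")
    with open(path, "w") as f:
        f.write("def greet():\n    return 'loaded from file'\n")

    spec = importlib.util.spec_from_file_location("demo_module", path)
    module = importlib.util.module_from_spec(spec)
    sys.modules["demo_module"] = module  # register before executing
    spec.loader.exec_module(module)

    print(module.greet())  # -> loaded from file
```

Registering the module in `sys.modules` before calling `exec_module` matters when the loaded file (directly or indirectly) imports itself.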
You must restart Blender for edits made outside of Blender to take effect.