From 3c270ff43b48212be22d84c55aef261565fc6ba4 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Metin=20=C3=87elik?=
Date: Mon, 27 Nov 2023 12:14:11 +0100
Subject: [PATCH 1/5] Fix some typos

---
 README.md | 27 ++++++++++++++++++---------
 1 file changed, 18 insertions(+), 9 deletions(-)

diff --git a/README.md b/README.md
index fe712354..c4399c7f 100644
--- a/README.md
+++ b/README.md
@@ -31,11 +31,16 @@ pip install godot-rl

2. Download one, or more of [examples](https://github.com/edbeeching/godot_rl_agents_examples), such as BallChase, JumperHard, FlyBy.

```bash
-gdrl.env_from_hub -r edbeeching/godot_rl_JumperHard 
+gdrl.env_from_hub -r edbeeching/godot_rl_JumperHard
```

-You may need to example run permissions on the game executable. `chmod +x examples/godot_rl_JumperHard/bin/JumperHard.x86_64`
-3. Train and visualize
+You may need to add run permissions on the game executable.
+
+```bash
+chmod +x examples/godot_rl_JumperHard/bin/JumperHard.x86_64
+```
+
+3. Train and visualize

```bash
gdrl --env=gdrl --env_path=examples/godot_rl_JumperHard/bin/JumperHard.x86_64 --experiment_name=Experiment_01 --viz
```
@@ -47,23 +52,25 @@ You can also train an agent in the Godot editor, without the need to export the

1. Download the Godot 4 Game Engine from [https://godotengine.org/](https://godotengine.org/)
2. Open the engine and import the JumperHard example in `examples/godot_rl_JumperHard`
-3. Start in editor training with: `gdrl` 
+3. Start in editor training with: `gdrl`

### Creating a custom environment

There is a dedicated tutorial on creating custom environments [here](docs/CUSTOM_ENV.md). We recommend following this tutorial before trying to create your own environment.

-If you face any issues getting started, please reach out on our discord or raise a github issue.
+If you face any issues getting started, please reach out on our discord or raise a GitHub issue.

### Exporting and loading your trained agent in onnx format:
+
The latest version of the library provides experimental support for onnx models with the Stable Baselines 3 and rllib training frameworks.
-1. First run train you agent using the sb3 example on the [github repo](https://github.com/edbeeching/godot_rl_agents/blob/main/examples/stable_baselines3_example.py), enabling the option `--onnx_export_path=GameModel.onnx`
+
+1. First, train your agent using the sb3 example on the [GitHub repo](https://github.com/edbeeching/godot_rl_agents/blob/main/examples/stable_baselines3_example.py), enabling the option `--onnx_export_path=GameModel.onnx` (see the sketch after this list)
2. Then, using the **mono version** of the Godot Editor, add the onnx model path to the sync node. If you do not see this option, you may need to download the plugin from [source](https://github.com/edbeeching/godot_rl_agents_plugin)
3. The game should now load and run using the onnx model. If you are having issues building the project, ensure that the contents of the `.csproj` and `.sln` files in your project match those of the plugin [source](https://github.com/edbeeching/godot_rl_agents_plugin).
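+
+As a rough sketch, step 1 might look like the following (this assumes you have cloned the repository and downloaded the JumperHard example as in the quickstart; check the example script itself for the exact flags it accepts):
+
+```bash
+# Sketch only: --env_path points at the exported game binary from the quickstart,
+# --onnx_export_path is the export option mentioned in step 1 above.
+python examples/stable_baselines3_example.py --env_path=examples/godot_rl_JumperHard/bin/JumperHard.x86_64 --onnx_export_path=GameModel.onnx
+```
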
## Advanced usage

-[https://user-images.githubusercontent.com/7275864/209160117-cd95fa6b-67a0-40af-9d89-ea324b301795.mp4](https://user-images.githubusercontent.com/7275864/209160117-cd95fa6b-67a0-40af-9d89-ea324b301795.mp4) 
+[https://user-images.githubusercontent.com/7275864/209160117-cd95fa6b-67a0-40af-9d89-ea324b301795.mp4](https://user-images.githubusercontent.com/7275864/209160117-cd95fa6b-67a0-40af-9d89-ea324b301795.mp4)

Please ensure you have successfully completed the quickstart guide before following this section.

@@ -79,9 +86,11 @@ Godot RL Agents supports 4 different RL training frameworks, the links below det

### Why have we developed Godot RL Agents?

The objectives of the framework are to:
+
- Provide a free and open source tool for Deep RL research and game development.
- Enable game creators to imbue their non-player characters with unique behaviors.
- Allow for automated gameplay testing through interaction with an RL agent.
+
### How can I contribute to Godot RL Agents?

Please try it out, find bugs, and either raise an issue or, if you fix them yourself, submit a pull request.

This should now be working; let us know if you have any issues.

### Can you help with my game project?

-If the README and docs here not provide enough information, reach out to us on github and we may be able to provide some advice.
+If the README and docs here do not provide enough information, reach out to us on [Discord](https://discord.gg/HMMD2J8SxY) or GitHub and we may be able to provide some advice.

### How similar is this tool to Unity ML agents?

We are inspired by the Unity ML agents toolkit and aim to be a more compact, concise and hackable codebase, with little abstraction.

-# Licence
+# License

Godot RL Agents is MIT licensed. See the [LICENSE file](https://www.notion.so/huggingface2/LICENSE) for details.

From 1a8d3d1d17d5357beb73a54beb2fd963ecc7d994 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Metin=20=C3=87elik?=
Date: Mon, 27 Nov 2023 12:15:19 +0100
Subject: [PATCH 2/5] Mention venv as alternative

---
 README.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index c4399c7f..404f1b09 100644
--- a/README.md
+++ b/README.md
@@ -22,7 +22,9 @@ This quickstart guide will get you up and running using the Godot RL Agents libr

### Installation and first training

-1. Install the Godot RL Agents library: (if you are new to python, pip and conda, read this [guide](https://www.machinelearningplus.com/deployment/conda-create-environment-and-everything-you-need-to-know-to-manage-conda-virtual-environment/))
+1. Install the Godot RL Agents library. If you are new to Python or not using a virtual environment, it's highly recommended to create one using [venv](https://docs.python.org/3/library/venv.html) or [Conda](https://www.machinelearningplus.com/deployment/conda-create-environment-and-everything-you-need-to-know-to-manage-conda-virtual-environment/) to isolate your project dependencies.
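+
+For example, a minimal sketch using venv might look like this (the environment name `venv` is just a placeholder):
+
+```bash
+# Create and activate a virtual environment (on Windows, activate with venv\Scripts\activate)
+python -m venv venv
+source venv/bin/activate
+```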
+
+Once you have set up your virtual environment, proceed with the installation:

```bash
pip install godot-rl

From 14f48c0c0b5add8a7225cceeece0910b80eebefc Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Metin=20=C3=87elik?=
Date: Mon, 27 Nov 2023 12:16:41 +0100
Subject: [PATCH 3/5] Don't use numbered list for installation chapter

---
 README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 404f1b09..54e811c1 100644
--- a/README.md
+++ b/README.md
@@ -22,7 +22,7 @@ This quickstart guide will get you up and running using the Godot RL Agents libr

### Installation and first training

-1. Install the Godot RL Agents library. If you are new to Python or not using a virtual environment, it's highly recommended to create one using [venv](https://docs.python.org/3/library/venv.html) or [Conda](https://www.machinelearningplus.com/deployment/conda-create-environment-and-everything-you-need-to-know-to-manage-conda-virtual-environment/) to isolate your project dependencies.
+Install the Godot RL Agents library. If you are new to Python or not using a virtual environment, it's highly recommended to create one using [venv](https://docs.python.org/3/library/venv.html) or [Conda](https://www.machinelearningplus.com/deployment/conda-create-environment-and-everything-you-need-to-know-to-manage-conda-virtual-environment/) to isolate your project dependencies.

Once you have set up your virtual environment, proceed with the installation:

```bash
pip install godot-rl
```

-2. Download one, or more of [examples](https://github.com/edbeeching/godot_rl_agents_examples), such as BallChase, JumperHard, FlyBy.
+Download one or more of the [examples](https://github.com/edbeeching/godot_rl_agents_examples), such as BallChase, JumperHard, FlyBy.

```bash
gdrl.env_from_hub -r edbeeching/godot_rl_JumperHard
```
@@ -42,7 +42,7 @@ You may need to add run permissions on the game executable.

chmod +x examples/godot_rl_JumperHard/bin/JumperHard.x86_64
```

-3. Train and visualize
+Train and visualize:

```bash
gdrl --env=gdrl --env_path=examples/godot_rl_JumperHard/bin/JumperHard.x86_64 --experiment_name=Experiment_01 --viz

From 5f3235c2599011cfc931c51f1dcbd9239a865aba Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Metin=20=C3=87elik?=
Date: Mon, 27 Nov 2023 12:23:12 +0100
Subject: [PATCH 4/5] Use default port

---
 docs/TRAINING_STATISTICS.md | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/docs/TRAINING_STATISTICS.md b/docs/TRAINING_STATISTICS.md
index 17ecc767..88e2c97a 100644
--- a/docs/TRAINING_STATISTICS.md
+++ b/docs/TRAINING_STATISTICS.md
@@ -3,13 +3,15 @@
Godot RL Agents uses [Tensorboard](https://www.tensorflow.org/tensorboard) to log training statistics. You can start Tensorboard by running the following command:

```bash
-tensorboard --logdir ./logs/[RL_FRAMEWORK] -p 7000
+tensorboard --logdir ./logs/[RL_FRAMEWORK]
```
+
where `[RL_FRAMEWORK]` is one of `sb3`, `sf`, `cleanrl` or `rllib`, depending on which RL framework you are using.

-To view the training statistics visit [http://localhost:7000](http://localhost:7000) in your browser.
+To view the training statistics, visit [http://localhost:6006](http://localhost:6006) in your browser.
+
+You can specify a different log directory and experiment name during training with the `--experiment_dir` and `--experiment_name` options, e.g.
-You can specify a different log directory and experiment name during traing with the `--experiment_dir` and `--experiment_name` option. e.g.
-``` bash
+```bash
gdrl --trainer=sf --env=gdrl --env_path=examples/godot_rl_[ENV_NAME]/bin/[ENV_NAME].x86_64 --experiment_name=MyExperiment_01 --experiment_dir=logs/MyDir
-```
\ No newline at end of file
+```

From f937b9b131f2eb14adfde10714ffbe6c3cfe82bf Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Metin=20=C3=87elik?=
Date: Mon, 27 Nov 2023 12:28:51 +0100
Subject: [PATCH 5/5] Add Discord URL

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 54e811c1..7ecfd696 100644
--- a/README.md
+++ b/README.md
@@ -60,7 +60,7 @@ You can also train an agent in the Godot editor, without the need to export the

There is a dedicated tutorial on creating custom environments [here](docs/CUSTOM_ENV.md). We recommend following this tutorial before trying to create your own environment.

-If you face any issues getting started, please reach out on our discord or raise a GitHub issue.
+If you face any issues getting started, please reach out on our [Discord](https://discord.gg/HMMD2J8SxY) or raise a GitHub issue.

### Exporting and loading your trained agent in onnx format: