From 454e791a0667c3c060e8ec264b180bfd6fb211b1 Mon Sep 17 00:00:00 2001
From: Stefanos Laskaridis
Date: Fri, 19 Jul 2024 17:09:00 +0100
Subject: [PATCH] Update README links

---
 README.md | 36 ++++++++++++++++++------------------
 1 file changed, 18 insertions(+), 18 deletions(-)

diff --git a/README.md b/README.md
index 45f7a97..b53a9f9 100644
--- a/README.md
+++ b/README.md
@@ -36,39 +36,39 @@ This command will checkout the latest working version for each component, recurs
 
 The general workflow for running experiment goes as follows:
 
-1. Go to `frameworks/MLC/mlc-llm` or `frameworks/llama.cpp/llama.cpp` and compile each framework. Please see the documentation ([#1](frameworks/llama.cpp/llama.cpp/build_scripts/README.md),[#2](frameworks/MLC/mlc-llm/build_scripts/README.md)) for more.
-2. Go to `src/models` and download, convert models. Please see [this](src/models/README.md) for more.
-3. After you build the models, you need to build the apps, that are going to be installed to the phones. To do so, please follow the rest of the documentation in ([#1](frameworks/llama.cpp/llama.cpp/build_scripts/README.md),[#2](frameworks/MLC/mlc-llm/build_scripts/README.md)).
-4. Go to `blade/experiments/` and follow the [documentation](blade/experiments/README.md) there. You need to install the applications, transfer models on the local directories and then run the automated scripts.
+1. Go to `frameworks/MLC/mlc-llm` or `frameworks/llama.cpp/llama.cpp` and compile each framework. Please see the documentation ([#1](https://github.com/brave-experiments/llama.cpp-public/blob/main/build_scripts/README.md), [#2](https://github.com/brave-experiments/mlc-llm-public/blob/main/build_scripts/README.md)) for more.
+2. Go to `src/models` and download and convert the models. Please see [this](https://github.com/brave-experiments/MELT-public/blob/main/src/models/README.md) for more.
+3. After you build the models, you need to build the apps that will be installed on the phones. To do so, please follow the rest of the documentation in ([#1](https://github.com/brave-experiments/llama.cpp-public/blob/main/build_scripts/README.md), [#2](https://github.com/brave-experiments/mlc-llm-public/blob/main/build_scripts/README.md)).
+4. Go to `blade/experiments/` and follow the [documentation](https://github.com/brave-experiments/blade-public/blob/main/README.md) there. You need to install the applications, transfer the models to the local directories, and then run the automated scripts.
 5. If the experiment has successfully run, you'll have `blade/experiment_outputs/` directory populated. You can run the `blade/experiments/notebooks` for analysis of the results.
 
-For running on jetson platform, you need to build each framework with the appropriate script (see ([#1](frameworks/llama.cpp/llama.cpp/build_scripts/README.md),[#2](frameworks/MLC/mlc-llm/build_scripts/README.md)). See also this [documentation](jetsonlab/README.md) for more.
+For running on the Jetson platform, you need to build each framework with the appropriate script (see [#1](https://github.com/brave-experiments/llama.cpp-public/blob/main/build_scripts/README.md), [#2](https://github.com/brave-experiments/mlc-llm-public/blob/main/build_scripts/README.md)). See also this [documentation](https://github.com/brave-experiments/jetsonlab-public/blob/main/README.md) for more.
 
 ### Further documentation
 
 Additional documentation on how to run is provided in each of the subdirectories, as separate README files.
 
-* PhoneLab [README](blade/experiments/README.md)
-* JetsonLab [README](jetsonlab/README.md)
+* PhoneLab [README](https://github.com/brave-experiments/blade-public/blob/main/README.md)
+* JetsonLab [README](https://github.com/brave-experiments/jetsonlab-public/blob/main/README.md)
 * llama.cpp:
-  * building [README](frameworks/llama.cpp/llama.cpp/build_scripts/README.md)
-  * running [README](frameworks/llama.cpp/llama.cpp/run_scripts/README.md)
+  * building [README](https://github.com/brave-experiments/llama.cpp-public/blob/main/build_scripts/README.md)
+  * running [README](https://github.com/brave-experiments/llama.cpp-public/blob/main/run_scripts/README.md)
 * MLC-LLM:
-  * building [README](frameworks/MLC/mlc-llm/build_scripts/README.md)
-  * running [README](frameworks/MLC/mlc-llm/run_scripts/README.md)
-* LLMFarm [README](frameworks/llama.cpp/LLMFarmEval/README.md)
+  * building [README](https://github.com/brave-experiments/mlc-llm-public/blob/main/build_scripts/README.md)
+  * running [README](https://github.com/brave-experiments/mlc-llm-public/blob/main/run_scripts/README.md)
+* LLMFarm [README](https://github.com/brave-experiments/LLMFarmEval-public/blob/main/README.md)
 
 ## Supported frameworks
 
-* MLC-LLM [submodule](https://github.com/brave-experiments/mlc-llm), [upstream repo](https://github.com/mlc-ai/mlc-llm)
-  * TVM-Unity [submodule](https://github.com/brave-experiments/mlc-llm), [upstream repo](https://github.com/mlc-ai/relax.git)
-* llama.cpp [submodule](https://github.com/brave-experiments/llama.cpp), [upstream](https://github.com/ggerganov/llama.cpp)
-  * LLMFarm [submodule](https://github.com/brave-experiments/llmfarmeval), [upstream](https://github.com/guinmoon/LLMFarm)
+* MLC-LLM [submodule](https://github.com/brave-experiments/mlc-llm-public), [upstream repo](https://github.com/mlc-ai/mlc-llm)
+  * TVM-Unity [submodule](https://github.com/brave-experiments/tvm-public), [upstream repo](https://github.com/mlc-ai/relax.git)
+* llama.cpp [submodule](https://github.com/brave-experiments/llama.cpp-public), [upstream](https://github.com/ggerganov/llama.cpp)
+  * LLMFarm [submodule](https://github.com/brave-experiments/llmfarmeval-public), [upstream](https://github.com/guinmoon/LLMFarm)
 
 ## Supported infrastructure backends
 
-* [JetsonLab](https://github.com/brave-experiments/jetsonlab)
-* [PhoneLab](https://github.com/brave-experiments/blade)
+* [JetsonLab](https://github.com/brave-experiments/jetsonlab-public)
+* [PhoneLab](https://github.com/brave-experiments/blade-public)
 
 ## Authors/Maintainers
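
Note (not part of the patch): below is a minimal sketch of the workflow the updated README links describe. It assumes the top-level repository is `brave-experiments/MELT-public` (as referenced in the links above) and that it is cloned recursively so the framework submodules are checked out; the actual build commands are platform-specific and documented in the linked build_scripts READMEs.

```sh
# Illustrative sketch only; repository URL and paths are taken from the links above.
git clone --recurse-submodules https://github.com/brave-experiments/MELT-public.git
cd MELT-public

# Step 1: each framework ships its own build_scripts/README.md with the build steps.
ls frameworks/llama.cpp/llama.cpp/build_scripts frameworks/MLC/mlc-llm/build_scripts

# Step 2: model download/conversion helpers (see src/models/README.md).
ls src/models

# Steps 3-4: build the apps, then drive the on-device runs from blade/experiments/.
ls blade/experiments
```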