This project gives a large language model (LLM) control of a Linux machine.
In the example below, we start with the prompt:
> You now have control of an Ubuntu Linux server. Your goal is to run a Minecraft server. Do not respond with any judgement, questions or explanations. You will give commands and I will respond with current terminal output.
>
> Respond with a linux command to give to the server.
The AI first runs sudo apt-get update, then installs openjdk-8-jre-headless. Each time it runs a command, we send the result back to OpenAI, ask for a summary of what happened, and use that summary as part of the next prompt.
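The loop described above can be sketched as follows. This is an illustrative outline, not the project's actual code; the callback names (askAI, execute, summarize) are invented for the example, and stubs stand in for the OpenAI API and the container.

```go
package main

import "fmt"

// runLoop sketches the control loop: ask the model for a command,
// execute it, summarize the output, and feed the summary into the
// history used for the next prompt.
func runLoop(
	askAI func(history string) string, // model picks the next command
	execute func(cmd string) string, // run it in the container
	summarize func(output string) string, // condense the terminal output
	limit int, // -limit: maximum number of commands
) []string {
	history := ""
	var commands []string
	for i := 0; i < limit; i++ {
		cmd := askAI(history)
		commands = append(commands, cmd)
		out := execute(cmd)
		history += fmt.Sprintf("Ran %q: %s\n", cmd, summarize(out))
	}
	return commands
}

func main() {
	// Stub callbacks in place of the real API and Docker container.
	cmds := runLoop(
		func(h string) string { return "sudo apt-get update" },
		func(c string) string { return "Reading package lists... Done" },
		func(o string) string { return "package lists were updated" },
		2,
	)
	fmt.Println(len(cmds), cmds[0])
}
```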
Inspired by xkcd.com/350 and "Optimality is the tiger, agents are its teeth".
docker network create aquarium
docker build -t aquarium .
go build
Pass your prompt in the form of a goal. For example, --goal "Your goal is to run a Minecraft server."
Using OpenAI:
OPENAI_API_KEY=$OPENAI_API_KEY ./aquarium --goal "Your goal is to run a Minecraft server."
Using a local model provided by llama-cpp-python:
./aquarium --goal "Your goal is to run a Minecraft server." --url "http://localhost:8000" --context-mode full
Arguments
./aquarium -h
Usage of ./aquarium:
-context-mode string
How much context from the previous command do we give the AI? This is used by the AI to determine what to run next.
- partial: We send the last 10 lines of the terminal output to the AI. (cheap, accurate)
- full: We send the entire terminal output to the AI. (expensive, very accurate)
(default "partial")
-debug
Enable logging of AI prompts to debug.log
-goal string
Goal to give the AI. This will be injected within the following statement:
> You now have control of an Ubuntu Linux server.
> [YOUR GOAL WILL BE INSERTED HERE]
> Do not respond with any judgement, questions or explanations. You will give commands and I will respond with current terminal output.
>
> Respond with a linux command to give to the server.
(default "Your goal is to run a Minecraft server.")
-limit int
Maximum number of commands the AI should run. (default 30)
-model string
OpenAI model to use. Ignored if --url is provided. See https://platform.openai.com/docs/models (default "gpt-3.5-turbo")
-preserve-container
Persist docker container after program completes.
-url string
URL to locally hosted endpoint. If provided, this supersedes the --model flag.
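The "partial" context mode described above keeps only the tail of the terminal output. A minimal sketch of that idea, assuming a trailing-lines helper (the function name lastLines is invented for this example, not the project's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// lastLines returns at most n trailing lines of terminal output,
// approximating what --context-mode partial is described to do
// (send the last 10 lines to the AI).
func lastLines(output string, n int) string {
	lines := strings.Split(strings.TrimRight(output, "\n"), "\n")
	if len(lines) > n {
		lines = lines[len(lines)-n:]
	}
	return strings.Join(lines, "\n")
}

func main() {
	out := "line1\nline2\nline3\nline4\n"
	fmt.Println(lastLines(out, 2)) // keep only the last two lines
}
```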
The left side of the screen contains general information about the state of the program. The right side contains the terminal, as seen by the AI.
These are written to aquarium.log and terminal.log.
Calls to the AI are not logged unless you add the --debug flag. API requests and responses will be appended to debug.log.
- Send the OpenAI api the list of commands (and their outcomes) executed so far, asking it what command should run next
- Execute the command in the Docker container
- Read the output of the previous command, send it to OpenAI, and ask gpt-3.5-turbo for a summary of what happened
- If the output was too long, OpenAI api will return a 400
- Recursively break down the output into chunks, ask it for a summary of each chunk
- Ask OpenAI for a summary-of-summaries to get a final answer about what this command did
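The recursive summary-of-summaries step can be sketched like this. It is an illustrative outline under stated assumptions, not the project's actual code: maxLen stands in for whatever limit triggers the 400 error, and the summarize callback stands in for an API call.

```go
package main

import "fmt"

// summarizeLong sketches the recursive strategy: if the output fits
// within maxLen, summarize it directly; otherwise split it into
// chunks, summarize each, then summarize the concatenated summaries
// (recursing again if even the summaries are too long).
func summarizeLong(output string, maxLen int, summarize func(string) string) string {
	if len(output) <= maxLen {
		return summarize(output)
	}
	var parts string
	for start := 0; start < len(output); start += maxLen {
		end := start + maxLen
		if end > len(output) {
			end = len(output)
		}
		parts += summarize(output[start:end]) + "\n"
	}
	// Summary-of-summaries to get a final answer.
	return summarizeLong(parts, maxLen, summarize)
}

func main() {
	// Stub summarizer counts how many "API calls" were made.
	n := 0
	summarize := func(s string) string { n++; return "S" }
	// 16 chars with an 8-char limit: two chunk summaries, then one final pass.
	fmt.Println(summarizeLong("0123456789abcdef", 8, summarize), n)
}
```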
Prompt: Your goal is to execute a verbose port scan of amazon.com.
The bot replies with nmap -v amazon.com. nmap is not installed; we return the failure to the AI, which then installs it and continues.
portscan.mp4
Prompt: Your goal is to install a ngircd server.
(an IRC server software)
The AI installs the software, helpfully allows port 6667 through the firewall, then tries to run sudo -i and gets stuck.
- There are no success criteria, so the program doesn't know when to stop. The -limit flag controls how many commands are run (default 30).
- The AI cannot give input to running programs. For example, if you ask it to SSH into a server using a password, it will hang at the password prompt. For apt-get, I've hacked around this issue by injecting -y to prevent it from asking the user for input.
- The terminal output handling is imperfect. Some commands, like wget, use \r to write the progress bar; I rewrite that as a \n instead. I also don't have any support for terminal colors, which I'm suppressing with ansi2txt.
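The \r rewriting can be approximated in a few lines of Go. This is a sketch of the idea only; the project pipes output through ansi2txt for color stripping rather than using the invented sanitize helper below.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ANSI SGR (color) escape sequences, e.g. "\x1b[32m".
var ansiEscape = regexp.MustCompile("\x1b\\[[0-9;]*m")

// sanitize turns carriage returns (used by progress bars like wget's)
// into newlines and strips ANSI color codes, similar in effect to
// piping through ansi2txt.
func sanitize(raw string) string {
	s := strings.ReplaceAll(raw, "\r\n", "\n") // don't double up CRLF
	s = strings.ReplaceAll(s, "\r", "\n")      // progress-bar rewrites
	return ansiEscape.ReplaceAllString(s, "")
}

func main() {
	raw := "10%\r50%\r\x1b[32mdone\x1b[0m\n"
	fmt.Printf("%q\n", sanitize(raw))
}
```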