SPEAR is an advanced AI Agent platform designed to support multiple runtime environments. It provides flexibility and scalability for running AI agent workloads in various configurations. SPEAR is currently in development, with ongoing features and improvements.
| Features | Support | Status |
| --- | --- | --- |
| Runtime Support | Process | ✅ Supported |
| | Docker Container | ✅ Supported |
| | WebAssembly | ⏳ Work in Progress |
| | Kubernetes | ⏳ Work in Progress |
| Operating Modes | Local Mode | ✅ Supported |
| | Cluster Mode | ⏳ Work in Progress |
| Deployment | Auto Deployment | ⏳ Work in Progress |
| Agent Service | Planning | ⏳ Work in Progress |
| | Memory | ⏳ Work in Progress |
| | Tools | ⏳ Work in Progress |
- Runtime Support:
  - Process
  - Docker Container
  - Future Support: WebAssembly and Kubernetes (K8s)
- Operating Modes:
  - Local Mode: Run a single AI agent workload on a local machine.
  - Cluster Mode: Designed to run AI agent workloads across multiple clusters. (Not yet implemented)
- Deployment:
  - Auto Deployment: Automatically generate deployment configuration files from application code.
- Agent Service:
  - Planning: Offer agent planning techniques that enhance agent capabilities.
  - Memory: Provide memory services to manage the agent's knowledge.
  - Tools: Provide built-in tools and let users define custom tools of their own.
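SPEAR's tool API is still in development, so the registry and decorator below are purely illustrative assumptions, not the real interface. The sketch only shows the general shape a custom tool might take: a named function registered so the agent can look it up and call it.

```python
# Hypothetical sketch of a custom tool: SPEAR's actual tool API is not yet
# finalized, so TOOLS and the @tool decorator are illustrative assumptions.
from typing import Callable, Dict

TOOLS: Dict[str, Callable[[str], str]] = {}  # hypothetical tool registry

def tool(name: str):
    """Register a function as an agent tool under the given name."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        TOOLS[name] = fn
        return fn
    return register

@tool("echo")
def echo(query: str) -> str:
    """A trivial built-in-style tool that returns its input."""
    return f"echo: {query}"

print(TOOLS["echo"]("hello"))  # -> echo: hello
```

In this shape, a user-defined tool is just a plain function; registration is what makes it discoverable to the agent at runtime.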
SPEAR depends on several third-party packages. To install these dependencies on Linux, run the following commands:
python -m pip install --upgrade pip
pip install build
sudo apt install portaudio19-dev libx11-dev libxtst-dev
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
To build SPEAR and its related components, run the following command:
make
This command will:
- Compile all required binaries.
- Build Docker images for the related AI Agent workloads.
To run SPEAR in local mode, use the following commands:
export OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
export HUGGINGFACEHUB_API_TOKEN=<YOUR_HUGGINGFACEHUB_API_TOKEN>
export SPEAR_RPC_ADDR=<YOUR_LOCAL_SPEAR_RPC_ADDR>
bin/worker exec -n pyconversation
This command will:
- Start the SPEAR worker process in local mode.
- Run the AI agent workload named pyconversation (pyconversation-local).
Note that OPENAI_API_KEY must be set to a valid OpenAI API key, as shown above. Support for other LLM providers is planned.
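A small pre-flight check can catch missing credentials before the worker starts. This helper is hypothetical (not part of SPEAR); it simply reports which of the environment variables shown above are unset or empty.

```python
# Hypothetical pre-flight helper: report which of the variables SPEAR's
# local mode expects are missing from an environment mapping.
import os
from typing import List, Mapping

REQUIRED = ["OPENAI_API_KEY", "HUGGINGFACEHUB_API_TOKEN", "SPEAR_RPC_ADDR"]

def missing_vars(env: Mapping[str, str] = os.environ) -> List[str]:
    """Return the required variable names that are unset or empty."""
    return [name for name in REQUIRED if not env.get(name)]

if __name__ == "__main__":
    missing = missing_vars()
    if missing:
        raise SystemExit("missing environment variables: " + ", ".join(missing))
    print("environment OK")
```

Running it before `bin/worker exec` gives an immediate, readable error instead of a failure deep inside the workload.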
PortAudio is required for the audio processing component. To install PortAudio on macOS, use the following command:
brew install portaudio
To build SPEAR and its related components, run the following command:
make
This command will:
- Compile all required binaries.
- Build Docker images for the related AI Agent workloads.
To run SPEAR in local mode, use the following commands:
export OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
bin/worker exec -n pyconversation
This command will:
- Start the SPEAR worker process in local mode.
- Run the AI agent workload named pyconversation (pyconversation-local).
Note that OPENAI_API_KEY must be set to a valid OpenAI API key, as shown above. Support for other LLM providers is planned.
Supported Runtimes:
- Process
- Docker Container
Planned Runtimes:
- WebAssembly
- Kubernetes
- Implementation of cluster mode to enable distributed AI agent workloads across multiple clusters.
- Expansion of runtime support to include WebAssembly and Kubernetes.
Contributions are welcome! Please open an issue or submit a pull request to discuss new features, bug fixes, or enhancements.
This project is licensed under the Apache License 2.0.