update readme with python version
cornelcroi committed Aug 14, 2024
1 parent b2acec6 commit 00e74bc
Showing 3 changed files with 160 additions and 6 deletions.
91 changes: 85 additions & 6 deletions README.md
@@ -9,12 +9,12 @@
## 🔖 Features

- 🧠 **Intelligent Intent Classification** — Dynamically route queries to the most suitable agent based on context and content.
- 🔤 **Dual Language Support** — Fully implemented in both **Python** and **TypeScript**, allowing developers to choose their preferred language.
- 🌊 **Flexible Agent Responses** — Support for both streaming and non-streaming responses from different agents.
- 📚 **Context Management** — Maintain and utilize conversation context across multiple agents for coherent interactions.
- 🔧 **Extensible Architecture** — Easily integrate new agents or customize existing ones to fit your specific needs.
- 🌐 **Universal Deployment** — Run anywhere - from AWS Lambda to your local environment or any cloud platform.
- 📦 **Pre-built Agents and Classifiers** — A variety of ready-to-use agents and multiple classifier implementations available.
- 🔤 **TypeScript Support** — Native TypeScript implementation available.

## What's the Multi-Agent Orchestrator ❓

@@ -24,7 +24,6 @@ The system offers pre-built components for quick deployment, while also allowing

This adaptability makes it suitable for a wide range of applications, from simple chatbots to sophisticated AI systems, accommodating diverse requirements and scaling efficiently.


## 🏗️ High-level architecture flow diagram

<br /><br />
@@ -33,13 +32,11 @@ This adaptability makes it suitable for a wide range of applications, from simpl

<br /><br />


1. The process begins with user input, which is analyzed by a Classifier.
2. The Classifier leverages both Agents' Characteristics and Agents' Conversation history to select the most appropriate agent for the task.
3. Once an agent is selected, it processes the user input.
4. The orchestrator then saves the conversation, updating the Agents' Conversation history, before delivering the response back to the user.
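
The four steps above can be sketched in a few lines of Python. This is a simplified, hypothetical illustration of the flow rather than the library's internal implementation: the `Agent` and `Orchestrator` classes and the keyword-based `classify` method below are stand-ins for the real classifier and agents.

```python
# Hypothetical sketch of the orchestration flow described above --
# not the library's actual implementation.
from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    description: str

    def process(self, user_input: str, history: list) -> str:
        # A real agent would call an LLM, a Lex bot, etc.,
        # using the conversation history for context.
        return f"[{self.name}] response to: {user_input}"


@dataclass
class Orchestrator:
    agents: list
    history: dict = field(default_factory=dict)

    def classify(self, user_input: str) -> Agent:
        # Steps 1-2: pick the most suitable agent based on its characteristics;
        # a naive keyword match stands in for the real classifier here.
        keywords = [w for w in user_input.lower().split() if len(w) > 3]
        for agent in self.agents:
            if any(w in agent.description.lower() for w in keywords):
                return agent
        return self.agents[0]

    def route_request(self, user_input: str, session_id: str) -> str:
        agent = self.classify(user_input)              # steps 1-2
        history = self.history.setdefault(session_id, [])
        response = agent.process(user_input, history)  # step 3
        history.append((user_input, response))         # step 4: save the turn
        return response


if __name__ == "__main__":
    orchestrator = Orchestrator(agents=[
        Agent("Tech Agent", "software, hardware, AI, cloud computing"),
        Agent("Travel Agent", "flights, hotels, reservations"),
    ])
    print(orchestrator.route_request("I want to book a flight", "session456"))
```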


## 💬 Demo App

To quickly get a feel for the Multi-Agent Orchestrator, we've provided a Demo App with a few basic agents. This interactive demo showcases the orchestrator's capabilities in a user-friendly interface. To learn more about setting up and running the demo app, please refer to our [Demo App](https://awslabs.github.io/multi-agent-orchestrator/deployment/demo-web-app/) section.
@@ -150,13 +147,95 @@ if (response.streaming == true) {
}
```

This example showcases:
### Python Version

#### Installation

```bash
pip install multi-agent-orchestrator
```

#### Usage

Here's an equivalent Python example demonstrating the use of the Multi-Agent Orchestrator with a Bedrock LLM Agent and a Lex Bot Agent:

```python
import os
import sys
import asyncio
from multi_agent_orchestrator import MultiAgentOrchestrator
from multi_agent_orchestrator.agents import BedrockLLMAgent, LexBotAgent
from multi_agent_orchestrator.agents import BedrockLLMAgentOptions, LexBotAgentOptions

orchestrator = MultiAgentOrchestrator()

# Add a Bedrock LLM Agent with Converse API support
orchestrator.add_agent(
    BedrockLLMAgent(BedrockLLMAgentOptions(
        name="Tech Agent",
        description="Specializes in technology areas including software development, hardware, AI, cybersecurity, blockchain, cloud computing, emerging tech innovations, and pricing/costs related to technology products and services.",
        streaming=True
    ))
)

# Add a Lex Bot Agent for handling travel-related queries
orchestrator.add_agent(
    LexBotAgent(LexBotAgentOptions(
        name="Travel Agent",
        description="Helps users book and manage their flight reservations",
        bot_id=os.environ.get('LEX_BOT_ID'),
        bot_alias_id=os.environ.get('LEX_BOT_ALIAS_ID'),
        locale_id="en_US",
    ))
)

async def main():
    # Example usage
    response = await orchestrator.route_request(
        "I want to book a flight",
        'user123',
        'session456'
    )

    # Handle the response (streaming or non-streaming)
    if response.streaming:
        print("\n** RESPONSE STREAMING ** \n")
        # Print the routing metadata first
        print(f"> Agent ID: {response.metadata.agent_id}")
        print(f"> Agent Name: {response.metadata.agent_name}")
        print(f"> User Input: {response.metadata.user_input}")
        print(f"> User ID: {response.metadata.user_id}")
        print(f"> Session ID: {response.metadata.session_id}")
        print(f"> Additional Parameters: {response.metadata.additional_params}")
        print("\n> Response: ")

        # Stream the content as it arrives
        async for chunk in response.output:
            if isinstance(chunk, str):
                print(chunk, end='', flush=True)
            else:
                print(f"Received unexpected chunk type: {type(chunk)}", file=sys.stderr)

    else:
        # Handle non-streaming response (AgentProcessingResult)
        print("\n** RESPONSE ** \n")
        print(f"> Agent ID: {response.metadata.agent_id}")
        print(f"> Agent Name: {response.metadata.agent_name}")
        print(f"> User Input: {response.metadata.user_input}")
        print(f"> User ID: {response.metadata.user_id}")
        print(f"> Session ID: {response.metadata.session_id}")
        print(f"> Additional Parameters: {response.metadata.additional_params}")
        print(f"\n> Response: {response.output}")

if __name__ == "__main__":
    asyncio.run(main())
```

These examples showcase:
1. The use of a Bedrock LLM Agent with Converse API support, allowing for multi-turn conversations.
2. Integration of a Lex Bot Agent for specialized tasks (in this case, travel-related queries).
3. The orchestrator's ability to route requests to the most appropriate agent based on the input.
4. Handling of both streaming and non-streaming responses from different types of agents.
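
If you need the final text rather than printing chunks as they arrive, a small helper can collect a streamed response into a single string. The sketch below only reuses the calls shown in the examples above (`route_request`, `response.streaming`, `response.output`); how a non-streaming `response.output` converts to text depends on the agent, so `str()` is used here as a placeholder.

```python
async def get_response_text(orchestrator, user_input: str, user_id: str, session_id: str) -> str:
    """Route a request and return the complete response as one string."""
    response = await orchestrator.route_request(user_input, user_id, session_id)
    if response.streaming:
        # Accumulate the streamed chunks instead of printing them.
        chunks = []
        async for chunk in response.output:
            if isinstance(chunk, str):
                chunks.append(chunk)
        return "".join(chunks)
    # Non-streaming: str() stands in for agent-specific handling.
    return str(response.output)
```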


## 🤝 Contributing

We welcome contributions! Please see our [Contributing Guide](https://raw.githubusercontent.com/awslabs/multi-agent-orchestrator/main/CONTRIBUTING.md) for more details.
1 change: 1 addition & 0 deletions docs/src/content/docs/index.mdx
@@ -26,6 +26,7 @@ import { Card, CardGrid } from '@astrojs/starlight/components';
## Key Features

- **Multi-Agent Orchestration**: Seamlessly coordinate and leverage multiple AI agents in a single system
- **Dual Language Support**: Fully implemented in both **Python** and **TypeScript**, allowing developers to choose their preferred language
- **Intelligent intent classification**: Dynamically route queries to the most suitable agent based on context and content
- **Flexible agent responses**: Support for both streaming and non-streaming responses from different agents
- **Context management**: Maintain and utilize conversation context across multiple agents for coherent interactions
74 changes: 74 additions & 0 deletions python/README.md
@@ -71,6 +71,7 @@ Click on the image below to see a screen recording of the demo app on the GitHub
Check out our [documentation](https://awslabs.github.io/multi-agent-orchestrator/) for comprehensive guides on setting up and using the Multi-Agent Orchestrator!



### Installation

```bash
pip install multi-agent-orchestrator
```

### Usage

Here's a Python example demonstrating the use of the Multi-Agent Orchestrator with a Bedrock LLM Agent and a Lex Bot Agent:

```python
import os
import sys
import asyncio
from multi_agent_orchestrator import MultiAgentOrchestrator
from multi_agent_orchestrator.agents import BedrockLLMAgent, LexBotAgent
from multi_agent_orchestrator.agents import BedrockLLMAgentOptions, LexBotAgentOptions

orchestrator = MultiAgentOrchestrator()

# Add a Bedrock LLM Agent with Converse API support
orchestrator.add_agent(
    BedrockLLMAgent(BedrockLLMAgentOptions(
        name="Tech Agent",
        description="Specializes in technology areas including software development, hardware, AI, cybersecurity, blockchain, cloud computing, emerging tech innovations, and pricing/costs related to technology products and services.",
        streaming=True
    ))
)

# Add a Lex Bot Agent for handling travel-related queries
orchestrator.add_agent(
    LexBotAgent(LexBotAgentOptions(
        name="Travel Agent",
        description="Helps users book and manage their flight reservations",
        bot_id=os.environ.get('LEX_BOT_ID'),
        bot_alias_id=os.environ.get('LEX_BOT_ALIAS_ID'),
        locale_id="en_US",
    ))
)

async def main():
    # Example usage
    response = await orchestrator.route_request(
        "I want to book a flight",
        'user123',
        'session456'
    )

    # Handle the response (streaming or non-streaming)
    if response.streaming:
        print("\n** RESPONSE STREAMING ** \n")
        # Print the routing metadata first
        print(f"> Agent ID: {response.metadata.agent_id}")
        print(f"> Agent Name: {response.metadata.agent_name}")
        print(f"> User Input: {response.metadata.user_input}")
        print(f"> User ID: {response.metadata.user_id}")
        print(f"> Session ID: {response.metadata.session_id}")
        print(f"> Additional Parameters: {response.metadata.additional_params}")
        print("\n> Response: ")

        # Stream the content as it arrives
        async for chunk in response.output:
            if isinstance(chunk, str):
                print(chunk, end='', flush=True)
            else:
                print(f"Received unexpected chunk type: {type(chunk)}", file=sys.stderr)

    else:
        # Handle non-streaming response (AgentProcessingResult)
        print("\n** RESPONSE ** \n")
        print(f"> Agent ID: {response.metadata.agent_id}")
        print(f"> Agent Name: {response.metadata.agent_name}")
        print(f"> User Input: {response.metadata.user_input}")
        print(f"> User ID: {response.metadata.user_id}")
        print(f"> Session ID: {response.metadata.session_id}")
        print(f"> Additional Parameters: {response.metadata.additional_params}")
        print(f"\n> Response: {response.output}")

if __name__ == "__main__":
    asyncio.run(main())
```
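
Because the orchestrator keeps conversation history per user and session, reusing the same `user_id` and `session_id` across calls yields multi-turn conversations. The loop below is a minimal sketch built on the `route_request` call from the example above; the IDs and prompt strings are arbitrary.

```python
import asyncio

async def chat_loop(orchestrator, user_id: str = "user123", session_id: str = "session456"):
    """Simple interactive loop; context is preserved across turns via the session ID."""
    while True:
        user_input = input("\nYou: ").strip()
        if user_input.lower() in ("quit", "exit"):
            break
        response = await orchestrator.route_request(user_input, user_id, session_id)
        if response.streaming:
            async for chunk in response.output:
                if isinstance(chunk, str):
                    print(chunk, end="", flush=True)
            print()
        else:
            print(response.output)

# Run it with: asyncio.run(chat_loop(orchestrator))
```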

The following example demonstrates how to use the Multi-Agent Orchestrator with two different types of agents: a Bedrock LLM Agent with Converse API support and a Lex Bot Agent. This showcases the flexibility of the system in integrating various AI services.

