Commit

update readme
bhancockio committed Oct 28, 2024
1 parent 71f0f20 commit 86bc90b
Showing 1 changed file with 46 additions and 22 deletions.
68 changes: 46 additions & 22 deletions self_evalulation_loop_flow/README.md
# Self Evaluation Loop Flow

Welcome to the Self Evaluation Loop Flow project, powered by [crewAI](https://crewai.com). This project showcases a powerful pattern in AI workflows: automatic self-evaluation. By leveraging crewAI's multi-agent system, this flow demonstrates how to set up a Crew that evaluates the responses of other Crews, iterating with feedback to improve results.

## Overview

This flow guides you through setting up an automated self-evaluation system using two main Crews: the `ShakespeareanXPostCrew` and the `XPostReviewCrew`. The process involves the following steps:

1. **Generate Initial Output**: The `ShakespeareanXPostCrew` generates an initial Shakespearean-style post (X post) on a given topic, such as "Flying cars". This post is crafted to be humorous and playful, adhering to specific character limits and style guidelines.

2. **Evaluate Output**: The `XPostReviewCrew` evaluates the generated post to ensure it meets the required criteria, such as character count and absence of emojis. The crew provides feedback on the post's validity and quality.

3. **Iterate with Feedback**: If the post does not meet the criteria, the flow iterates by regenerating the post with the feedback provided. This iterative process continues until the post is valid or a maximum retry limit is reached.

4. **Finalize and Save**: Once the post is validated, it is finalized and saved for further use. If the maximum retry count is exceeded without achieving a valid post, the flow exits with the last generated post and feedback.
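
The generate, evaluate, and retry steps above can be sketched in plain Python. This is a minimal sketch only: `generate_post`, `evaluate_post`, and `MAX_RETRIES` are hypothetical stand-ins for the two Crews and the flow's retry limit, not the crewAI API.

```python
from dataclasses import dataclass

MAX_RETRIES = 3  # assumed retry cap; the real flow defines its own limit


@dataclass
class Review:
    valid: bool
    feedback: str


def generate_post(topic: str, feedback: str = "") -> str:
    # Stand-in for ShakespeareanXPostCrew: regenerate using feedback on retries.
    post = f"Hark! {topic} doth soar above the cobbled streets!"
    if feedback:
        post = post[:270]  # pretend we acted on the feedback
    return post


def evaluate_post(post: str) -> Review:
    # Stand-in for XPostReviewCrew: enforce the character limit.
    if len(post) > 280:
        return Review(False, "Post exceeds 280 characters; shorten it.")
    return Review(True, "")


def run_flow(topic: str) -> tuple[str, Review]:
    feedback = ""
    post = ""
    for _ in range(MAX_RETRIES):
        post = generate_post(topic, feedback)
        review = evaluate_post(post)
        if review.valid:
            break  # finalize and save
        feedback = review.feedback  # iterate with feedback
    return post, review


post, review = run_flow("Flying cars")
```

If the loop exhausts `MAX_RETRIES`, the last post and its feedback are returned, mirroring the exit behavior described in step 4.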

This pattern of automatic self-evaluation is crucial for developing robust AI systems that can adapt and improve over time, ensuring high-quality outputs through iterative refinement.

## Installation

Ensure you have Python >=3.10 <=3.13 installed on your system.

To install CrewAI, run the following command:

```bash
pip install crewai
```

This command will install CrewAI and its necessary dependencies, allowing you to start building and managing AI agents efficiently.

### Customizing

**Add your `OPENAI_API_KEY` into the `.env` file**

- Modify `src/flow_self_evalulation_loop/config/agents.yaml` to define your agents.
- Modify `src/flow_self_evalulation_loop/config/tasks.yaml` to define your tasks.
- Modify `src/flow_self_evalulation_loop/crew.py` to add your own logic, tools, and specific arguments.
- Modify `src/flow_self_evalulation_loop/main.py` to add custom inputs for your agents and tasks.
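
As a rough illustration, an agent entry in `agents.yaml` follows crewAI's role/goal/backstory convention. The agent name and wording below are hypothetical, not taken from this project's files:

```yaml
shakespearean_bard:
  role: >
    Shakespearean X Post Writer
  goal: >
    Write a humorous, playful X post about {topic} in Shakespearean style,
    staying within the character limit and using no emojis
  backstory: >
    A witty bard of the digital age, fluent in iambic quips and
    well versed in the customs of X.
```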

## Running the Project

To kickstart your crew of AI agents and begin task execution, run this from the root folder of your project:

```bash
crewai flow kickoff
```

This command initializes the self-evaluation loop flow, assembling the agents and assigning them tasks as defined in your configuration.

The unmodified example will generate a `report.md` file in the root folder with the output of research on LLMs.

## Understanding Your Flow

The self-evaluation loop flow is composed of two Crews. Their agents collaborate on a series of tasks, defined in `config/tasks.yaml`, leveraging their collective skills to achieve complex objectives. The `config/agents.yaml` file outlines the capabilities and configuration of each agent in your flow.

This flow is centered around two major Crews: the `ShakespeareanXPostCrew` and the `XPostReviewCrew`. The `ShakespeareanXPostCrew` is responsible for generating a Shakespearean-style post (X post) on a given topic, while the `XPostReviewCrew` evaluates the generated post to ensure it meets specific criteria. The process is iterative, using feedback from the review to refine the post until it is valid or a maximum retry limit is reached.

### Flow Structure

1. **Generate Initial Output**: A Crew generates the initial output based on predefined criteria.

2. **Evaluate Output**: Another Crew evaluates the output, providing feedback on its validity and quality.

3. **Iterate with Feedback**: If necessary, the initial Crew is re-run with feedback to improve the output.

4. **Finalize and Save**: Once validated, the output is saved for further use.

By understanding the flow structure, you can see how multiple Crews are orchestrated to work together, each handling a specific part of the self-evaluation process. This modular approach allows for efficient and scalable automation.

## Support

For support, questions, or feedback regarding the Self Evaluation Loop Flow or crewAI:

- Visit our [documentation](https://docs.crewai.com)
- Reach out to us through our [GitHub repository](https://github.com/joaomdmoura/crewai)