jw-source/LlamaSim
Simulate human behavior with mass LLMs

🔗 Demo Video   •   🐦 Twitter   •   Cerebras Blog

LlamaSim:

LlamaSim is a multi-LLM framework that aims to simulate human behavior at scale. Given a target population (e.g., voters in Pennsylvania, students at CMU), it replicates that group with LLM agents and aims to provide actionable insights on important questions and events.
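The agent-generation internals aren't shown in this README, but the core idea — instantiating a panel of persona-conditioned LLM agents for a target population — can be sketched as follows. This is an illustrative toy, not LlamaSim's API; names like `make_system_prompt` and the trait lists are assumptions.

```python
import random

# Hypothetical demographic traits; LlamaSim's actual persona fields may differ.
AGES = ["22", "38", "55", "70"]
OCCUPATIONS = ["teacher", "steelworker", "nurse", "retiree"]

def make_system_prompt(population: str, seed: int) -> str:
    """Build a persona-conditioning system prompt for one simulated agent."""
    rng = random.Random(seed)
    age = rng.choice(AGES)
    job = rng.choice(OCCUPATIONS)
    return (
        f"You are a {age}-year-old {job} from the population: {population}. "
        "Answer questions in character, reflecting this background."
    )

# One system prompt per agent in the panel
personas = [make_system_prompt("Pennsylvania Voters", seed=i) for i in range(5)]
```

Each persona string would then be sent as the system message for one agent, so the same user prompt elicits demographically varied answers.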

More to come...

Roadmap

  • Gradio Frontend (Local Demo)
  • Support for mem0 memory (the alt_... files); stability improvements in progress
  • Rewrite agent generation using Cerebras instead of OpenAI
  • Demographically Aligned Agents on-the-fly
  • Live Data Feeds for Agents
  • Async Communication for Agents
  • Live Demo

Usage:

# Clone the repository
git clone https://github.com/jw-source/LlamaSim

# Add API keys to .env
mv env.txt .env

# Create a venv
python3 -m venv .venv

# Activate the venv
source .venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Run
cd src
python run.py

NOTE: files that start with "alt..." are mem0 implementations of the original code (currently improving stability)
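The exact contents of env.txt aren't shown here; given that the roadmap mentions both OpenAI and Cerebras, the renamed .env file presumably holds API keys along these lines (the variable names below are guesses, not confirmed by the repo):

```
OPENAI_API_KEY=sk-...
CEREBRAS_API_KEY=csk-...
```

Check env.txt itself for the authoritative variable names before filling in your keys.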

Real Example

from network import Network

# Run from src/ so the local network module resolves
agent_network = Network(population="Pennsylvania Voters", num_agents=5, max_context_size=4000)
prompt = "Gas prices are at an all-time high."
question = "Are you voting for Kamala Harris?"
agent_network.group_chat(prompt, "random", max_rounds=1)
agent_network.predict(prompt, question)
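The return format of predict isn't documented above, but conceptually, turning a panel of agent answers into a prediction reduces to aggregating per-agent responses. A toy majority-vote aggregator (an illustration of the idea, not LlamaSim's implementation) looks like:

```python
from collections import Counter

def aggregate_votes(responses: list[str]) -> tuple[str, float]:
    """Return the majority answer and its share of the panel."""
    counts = Counter(r.strip().lower() for r in responses)
    answer, n = counts.most_common(1)[0]
    return answer, n / len(responses)

# Example: five simulated agents answering the yes/no question
answer, share = aggregate_votes(["Yes", "no", "yes", "Yes", "no"])
# "yes" carries the panel with a 0.6 share
```

Richer aggregations (weighting by demographic representativeness, or keeping the full answer distribution) follow the same pattern.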