Welcome to the Scott Logic prompt injection open source project! As generative AI and LLMs become more prevalent, it becomes more important to learn about the dangers of prompt injection. This project aims to teach people about prompt injection attacks that can be used on generative AI, and how to defend against such attacks.
This project is presented in two modes:
Go undercover and use prompt injection attacks on ScottBrewBot, a clever but flawed generative AI bot. Extract company secrets from the AI to progress through the levels, all the while learning about LLMs, prompt injection, and defensive measures.
Activate and configure a number of different prompt injection defence measures to create your own security system. Then talk to the AI and try to crack it!
Ensure you have Node v18+ installed. Clone this repo and run
npm ci
- Copy the example environment file .env.example in the backend directory and rename it to
.env
. - Replace the
OPENAI_API_KEY
value in the.env
file with your OpenAI API key. - Replace the
SESSION_SECRET
value with a random UUID.
- Copy the example environment file .env.example in the frontend directory and rename it to
.env
. - If you've changed the default port for running the API, adjust the
VITE_BACKEND_URL
value accordingly.
npm run start:api
npm run start:ui
Note that this project also includes a VS Code launch file, to allow running API and UI directly from the IDE.
For development instructions, see the frontend and backend READMEs.
Thank you for considering contributing to this open source project!
Please read the our contributing guide and our code of conduct before contributing.