prompt-injection

Introduction

Welcome to the Scott Logic prompt injection open source project! As generative AI and LLMs become more prevalent, it is increasingly important to understand the dangers of prompt injection. This project aims to teach people about prompt injection attacks on generative AI, and how to defend against such attacks.

This project is presented in two modes:

Story mode

Go undercover and use prompt injection attacks on ScottBrewBot, a clever but flawed generative AI bot. Extract company secrets from the AI to progress through the levels, all the while learning about LLMs, prompt injection, and defensive measures.

Sandbox mode

Activate and configure a number of different prompt injection defence measures to create your own security system. Then talk to the AI and try to crack it!

Installation

Ensure you have Node v18+ installed. Clone this repo and run

npm ci
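
A minimal sketch of the full sequence, assuming the clone directory keeps the default name prompt-injection (the clone URL is a placeholder; substitute the URL of this repository or your fork):

# confirm Node is v18 or later
node --version

# clone the repository and install dependencies
git clone <repository-url>
cd prompt-injection
npm ci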

Setup

Environment variables

Backend

  1. Copy the example environment file .env.example in the backend directory and rename it to .env.
  2. Replace the OPENAI_API_KEY value in the .env file with your OpenAI API key.
  3. Replace the SESSION_SECRET value with a random UUID (an example .env is sketched below this list).
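
A minimal sketch of the resulting backend .env, showing only the two variables named above (the values are placeholders, and .env.example may list further settings):

OPENAI_API_KEY=sk-your-openai-api-key
SESSION_SECRET=123e4567-e89b-12d3-a456-426614174000

On Node v18+, a random UUID for SESSION_SECRET can be generated with, for example:

node -e "console.log(require('crypto').randomUUID())"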

Frontend

  1. Copy the example environment file .env.example in the frontend directory and rename it to .env.
  2. If you've changed the default port for running the API, adjust the VITE_BACKEND_URL value accordingly (see the sketch below).
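
A minimal sketch of the resulting frontend .env (the URL is a placeholder; use the host and port your API actually runs on, as given in .env.example):

VITE_BACKEND_URL=http://localhost:<api-port>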

Run

Backend

npm run start:api

Frontend

npm run start:ui

Note that this project also includes a VS Code launch file, which allows running the API and UI directly from the IDE.

Development

For development instructions, see the frontend and backend READMEs.

Contributing

Contributor Covenant

Thank you for considering contributing to this open source project!

Please read our contributing guide and our code of conduct before contributing.

License

MIT
