Skip to content
Peter Marsh edited this page Feb 16, 2024 · 53 revisions

Prompt-injection, lovingly called Spy Logic, is a learning game that teaches users prompt injection attacks and defences, through a series of increasingly difficult challenges.

This wiki is intended to be very high level, and act more as a signposting system to specific parts of code. Read the code alongside the wiki to get the fullest understanding, or just read the docs to get a broad-strokes overview of how things work.

To get acquainted with the project, the best way is to run and play the game. Failing that you can have a read of intro to Spy Logic.

Nomenclature

In these docs I will use the term LLM to describe a generic Large Language Model, usually one created and hosted by OpenAI. In contrast, I will use bot to describe ScottBrewBot, which is our fictional integrated chatbot. It describes the system as a whole, which includes an LLM at its core, but also includes the mechanisms we've placed around it (defences, error handling, etc).

How we Use LLMs

Large Language Models lie at the core of the game. We use OpenAI's service to access models hosted on their servers. We use LLMs to

  • respond to user messages
  • query documents
  • evaluate whether a prompt is malicious

See OpenAI to understand how we get responses. See Langchain to see how we query documents and evaluate prompts.

image

Levels and Game Flow

  • What changes between them?
  • Winning a level?

Defences

Session and Memory