Skip to content

Latest commit

 

History

History
8 lines (5 loc) · 720 Bytes

cooking-ai.md

File metadata and controls

8 lines (5 loc) · 720 Bytes

Cooking AI

I recently came across this app: Cooking AI, I had previously made a similar app FeedMe. I was curious how this would perform.

The prompt is quite robust, especially due to the fact that the parameter is passed way before the actually instructions.

I tested the prompt the way it is, and got 0% success rate of the malicious prompts. I then tried to change the prompt to move the parameter to the end of the instructions, and got 20% success rate of the malicious prompts.

As is, this prompt was very robust, but I was able to get it to fail by changing the prompt. I think this is a good example of how to make a robust prompt.