Prompt Sequencing

Sometimes it's just necessary to run a sequence of prompts. "Why? Isn't that the same thing as chatting?" Yes, it is; but the difference is that it's being done within a batch environment. You may use a chat session to engineer each of the prompts you intend to use in your multi-step process, which you can then repeat to receive similar results.

One use case might be to have an audio file transcribed into a transcript. The next prompt would take the transcript and produce a summary of the conversation.
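As a sketch, that two-step sequence might look like this, assuming you have written two hypothetical prompt files named transcribe.txt and summarize.txt, and that your backend model can accept an audio file as context:

aia transcribe meeting.mp3 -o transcript.md
aia summarize transcript.md -o summary.md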

Another use case may be that your prompts or context files are busting the token limits of the backend models, either on input or on output. By using a multi-step prompt sequence, each prompt/response cycle stays within the token limits of the models you are using.
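For example, you might split a large context file into chunks with the standard split utility and summarize each chunk separately. This is only a sketch: summarize.txt and combine.txt are hypothetical prompt files, and it assumes aia accepts multiple context files on the command line.

split -l 500 big_context.txt chunk_
for f in chunk_*; do
  aia summarize "$f" -o "$f.summary.md"
done
aia combine chunk_*.summary.md -o final_summary.md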

Knowing that you can get lies from an LLM, maybe you send the same prompt to two different models in two different prompt files. Then you use a third prompt to compare the responses of the first two, to note the differences or to find authoritative references which can be checked. After all, as a lawyer you never want to give the judge a brief that has made-up references for cases that are fake. Right? Of course right!
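A sketch of that workflow, assuming hypothetical prompt files ask_gpt.txt, ask_claude.txt, and compare.txt, each selecting its own model (for example with a //config model directive; adjust for your own configuration):

aia ask_gpt -o gpt.md
aia ask_claude -o claude.md
aia compare gpt.md claude.md -o comparison.md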

It is trivial to write a shell script that executes a sequence of prompts like this:

aia one -o temp.md
aia two temp.md -o temp.md
aia three temp.md -o temp.md
aia four temp.md -o temp.md
#
echo "The final response is:"
cat temp.md

It's even simpler when you realize that temp.md is the default out_file, so you do not have to use "-o temp.md" on each step.
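Dropping the explicit -o options, the same sequence becomes:

aia one
aia two temp.md
aia three temp.md
aia four temp.md
#
echo "The final response is:"
cat temp.md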

To keep the different prompt responses for troubleshooting or other purposes, you could of course give each step in the sequence its own out_file, making sure that the out_file from one prompt is fed into the next prompt as part of its context.

aia one -o one.md
aia two one.md -o two.md
aia three two.md -o three.md
aia four three.md -o four.md
#
echo "The final response is:"
cat four.md

We can do better using the --next and --pipeline command line options. Since temp.md is the default out_file, you can accomplish exactly the same prompt sequencing as above with just this:

aia one --pipeline two,three,four
echo "... and the answer is:"
cat temp.md

To override the default out_file you can use a directive within each prompt file. For example, in the prompt file one.txt you would use the directive //config out_file one.md, doing the same kind of thing in each of the prompt files used in the sequence.
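As a sketch (the prompt text here is just a placeholder), one.txt might start like this:

//config out_file one.md
Transcribe the attached audio file into a clean transcript.

and two.txt like this:

//config out_file two.md
Summarize the transcript and list any action items.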

If you only have a sequence of two prompts, use the --next command line option.

aia one --next two

You cannot use both --next and --pipeline at the same time.

As command line options, both --next and --pipeline have the same capabilities as all other command line options. They can be set with system environment variables, for example. Use $AIA_NEXT and $AIA_PIPELINE to set their values so that you do not have to remember to use the options on the command line.
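For example, in your shell profile or session you might set:

export AIA_NEXT=two

or

export AIA_PIPELINE=two,three,four

and then simply run aia one to kick off the whole sequence.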

They can also be used with the //config directive inside prompt files. For example, in the prompt file one.txt you could use the directive //config next two to specify that the next prompt in the sequence after one is two. Or use the directive //config pipeline two,three,four to lay out the entire sequence in the first prompt file.

Since these directives are inside the prompt file, you can take advantage of parameterization, shell scripting, environment variables, and ERB to generate the actual text of the directive. For shell integration or ERB embedding you will have to use the --shell or --erb command line options, or both. This allows you to make the prompt sequence conditional, or even change the order of steps within the sequence based upon your needs.
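A minimal sketch of the shell-integration approach, assuming --shell expands environment variables inside the prompt file: put this directive in one.txt

//config pipeline $STEPS

and invoke the sequence with

STEPS=two,three,four aia one --shell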

A use case for this kind of capability may be that you are using a model that is updated daily. You have a sequence of prompts that you want to fire when the response to the first prompt contains some magic words.

Your first prompt, let's call it start.txt, gets things started:

//config next analyze
//config out_file current_info.md
What are the current xyzzy?

Then in the analyze.txt prompt file you might have something like this:

<%
  # Read the response produced by the previous prompt in the sequence
  current_info = File.read('current_info.md')
  if current_info.include?('something')
%>
//config next report_one
<% elsif current_info.include?('something else') %>
//config pipeline report_two,report_three
<% else %>
//include some_file.txt
<% end %>

Format your response as a PowerPoint presentation.
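Since analyze.txt embeds Ruby via ERB, remember to kick off the sequence with the --erb option. Using the start.txt name from above:

aia start --erb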