[Release] version 0.1.2
What's changed:

- Refactored the `chatbot/chatui.py` to consolidate methods into `ask_model()`.
- Updated the element names in the `result.html` template.
- Cleaned up the code related to reading and replacing custom error messages.
- Added more comments describing each prompt in the `ask_model()` method.
- Moved the role of `condition.txt` to `config.yaml`.
- Updated the Docs Agent module to read custom prompt text from `config.yaml`.
- Removed the `condition.txt` file.
- Re-generated the `poetry.lock` file to be up to date.
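The consolidation described in the first bullet, with both chat routes funneling into a single `ask_model()` helper, can be sketched roughly as follows (illustrative Flask code, not the actual Docs Agent module):

```python
# Minimal sketch of the refactor: both routes delegate to one helper.
import urllib.parse
from flask import Flask, request

app = Flask(__name__)

def ask_model(question: str) -> str:
    # Stand-in for the real pipeline, which retrieves context, queries the
    # PaLM model, fact-checks the response, and renders the result page.
    return f"answer for: {question}"

@app.route("/result", methods=["POST"])
def result():
    # Input text box: the question arrives as form data.
    return ask_model(request.form["question"])

@app.route("/question/<ask>", methods=["GET"])
def question(ask):
    # Related-questions link: the question arrives URL-encoded in the path.
    return ask_model(urllib.parse.unquote_plus(ask))
```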

kyolee415 committed Oct 6, 2023
1 parent bec6573 commit 7ea6bb6
Showing 8 changed files with 926 additions and 824 deletions.
32 changes: 16 additions & 16 deletions demos/palm/python/docs-agent/README.md
@@ -57,7 +57,7 @@ content from the source documents given user questions.
Once the most relevant content is returned, the Docs Agent server uses the prompt structure
shown in Figure 3 to augment the user question with a preset **condition** and a list of
**context**. (When the Docs Agent server starts, the condition value is read from the
[`condition.txt`][condition-txt] file.) Then the Docs Agent server sends this prompt to a
[`config.yaml`][condition-txt] file.) Then the Docs Agent server sends this prompt to a
PaLM 2 model using the PaLM API and receives a response generated by the model.

![Docs Agent prompt structure](docs/images/docs-agent-prompt-structure-01.png)
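The augmentation step described above amounts to simple string assembly; `build_prompt` and its arguments are illustrative names here, not the actual Docs Agent API:

```python
def build_prompt(condition: str, contexts: list[str], question: str) -> str:
    # Condition first, then the retrieved context blocks, then the user
    # question, mirroring the prompt structure shown in Figure 3.
    context_block = "\n\n".join(contexts)
    return f"{condition}\n\n{context_block}\n\nQuestion: {question}"

prompt = build_prompt(
    "Read the following context first and answer the question at the end:",
    ["The PaLM API gives apps access to PaLM 2 models."],
    "What does the PaLM API provide?",
)
```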
@@ -202,16 +202,16 @@ by the PaLM model:
- Additional condition (for fact-checking):

```
Compare the following body of text to the context provided in this prompt and write
a short message that warns the readers about which part of the text below they
should consider fact-checking for themselves? (please keep your response concise and
mention only one important point):
Can you compare the following text to the context provided in this prompt and write
a short message that warns the readers about which part of the text they should
consider fact-checking? (Please keep your response concise and focus on only
one important item.)
```

- Previously generated response

```
<RESPONSE_RETURNED_FROM_THE_PREVIOUS_PROMPT>
Text: <RESPONSE_RETURNED_FROM_THE_PREVIOUS_PROMPT>
```

This "fact-checking" prompt returns a response similar to the following example:
@@ -542,15 +542,7 @@ allowing you to easily bring up and destroy the Flask app instance.

To customize settings in the Docs Agent chat app, do the following:

1. (**Optional**) Update the `condition.txt` file to provide a more specific prompt condition
for your custom dataset, for example:

```
You are a helpful chatbot answering questions from developers working on Flutter apps.
Read the following context first and answer the question at the end:
```

2. Edit the `config.yaml` file to update the following field:
1. Edit the `config.yaml` file to update the following field:

```
product_name: "My product"
@@ -563,6 +555,14 @@ To customize settings in the Docs Agent chat app, do the following:
product_name: "Flutter"
```

2. (**Optional**) Edit the `config.yaml` file to provide a more specific prompt
condition for your custom dataset, for example:

```
condition_text: "You are a helpful chatbot answering questions from developers working on
Flutter apps. Read the following context first and answer the question at the end:"
```

### 2. Launch the Docs Agent chat app

To launch the Docs Agent chat app, do the following:
@@ -636,7 +636,7 @@ Meggin Kearney (`@Meggin`), and Kyo Lee (`@kyolee415`).
[set-up-docs-agent]: #set-up-docs-agent
[markdown-to-plain-text]: ./scripts/markdown_to_plain_text.py
[populate-vector-database]: ./scripts/populate_vector_database.py
[condition-txt]: ./condition.txt
[condition-txt]: ./config.yaml
[context-source-01]: http://eventhorizontelescope.org
[fact-check-section]: #using-a-palm-2-model-to-fact-check-its-own-response
[related-questions-section]: #using-a-palm-2-model-to-suggest-related-questions
227 changes: 95 additions & 132 deletions demos/palm/python/docs-agent/chatbot/chatui.py
@@ -112,152 +112,115 @@ def rewrite():
return redirect(url_for("chatui.index"))


# Render a response page when the user asks a question
# using input text box.
@bp.route("/result", methods=["GET", "POST"])
def result():
if request.method == "POST":
uuid_value = uuid.uuid1()
question_captured = request.form["question"]
query_result = docs_agent.query_vector_store(question_captured)
context = markdown.markdown(query_result.fetch_formatted(Format.CONTEXT))
context_with_prefix = docs_agent.add_instruction_to_context(context)
response_in_markdown = docs_agent.ask_text_model_with_context(
context_with_prefix, question_captured
)
if response_in_markdown is None:
response_in_markdown = (
"The PaLM API is not able to answer this question at the moment. "
"Try to rephrase the question and ask again."
)
response_in_html = markdown.markdown(response_in_markdown)
metadatas = markdown.markdown(
query_result.fetch_formatted(Format.CLICKABLE_URL)
)
fact_checked_answer_in_markdown = docs_agent.ask_text_model_to_fact_check(
context_with_prefix, response_in_markdown
)
if fact_checked_answer_in_markdown is None:
fact_checked_answer_in_markdown = (
"The PaLM API is not able to answer this question at the moment. "
"Try to rephrase the question and ask again."
)
fact_checked_answer_in_html = markdown.markdown(fact_checked_answer_in_markdown)
new_question = (
"What are 5 questions developers might ask after reading the context?"
)
related_questions = markdown.markdown(
docs_agent.ask_text_model_with_context(response_in_markdown, new_question)
)
soup = BeautifulSoup(related_questions, "html.parser")
for item in soup.find_all("li"):
if item.string is not None:
link = soup.new_tag(
"a",
href=url_for(
"chatui.question", ask=urllib.parse.quote_plus(item.string)
),
)
link.string = item.string
item.string = ""
item.append(link)
related_questions = soup
fact_link = markdown.markdown(
query_result.fetch_nearest_formatted(Format.CLICKABLE_URL)
)
server_url = request.url_root.replace("http", "https")
# Log the question and response to the log file.
log_question(uuid_value, question_captured, response_in_markdown)
return render_template(
"chatui/index.html",
question=question_captured,
context=context,
context_with_prefix=context_with_prefix,
response_in_markdown=response_in_markdown,
response_in_html=response_in_html,
product=product,
metadatas=metadatas,
fact_checked_answer=fact_checked_answer_in_html,
fact_link=fact_link,
related_questions=related_questions,
server_url=server_url,
uuid=uuid_value,
)
question = request.form["question"]
return ask_model(question)
else:
return redirect(url_for("chatui.index"))


# Render a response page when the user clicks a question
# from the related questions list.
@bp.route("/question/<ask>", methods=["GET", "POST"])
def question(ask):
if request.method == "GET":
uuid_value = uuid.uuid1()
question_captured = urllib.parse.unquote_plus(ask)
query_result = docs_agent.query_vector_store(question_captured)
context = markdown.markdown(query_result.fetch_formatted(Format.CONTEXT))
context_with_prefix = docs_agent.add_instruction_to_context(context)
response_in_markdown = docs_agent.ask_text_model_with_context(
context_with_prefix, question_captured
)
if response_in_markdown is None:
response_in_markdown = (
"The PaLM API is not able to answer this question at the moment. "
"Try to rephrase the question and ask again."
)
response_in_html = markdown.markdown(response_in_markdown)
metadatas = markdown.markdown(
query_result.fetch_formatted(Format.CLICKABLE_URL)
)
fact_checked_answer_in_markdown = docs_agent.ask_text_model_to_fact_check(
context_with_prefix, response_in_markdown
)
if fact_checked_answer_in_markdown is None:
fact_checked_answer_in_markdown = (
"The PaLM API is not able to answer this question at the moment. "
"Try to rephrase the question and ask again."
)
fact_checked_answer_in_html = markdown.markdown(fact_checked_answer_in_markdown)
new_question = (
"What are 5 questions developers might ask after reading the context?"
)
related_questions = markdown.markdown(
docs_agent.ask_text_model_with_context(response_in_markdown, new_question)
)
soup = BeautifulSoup(related_questions, "html.parser")
for item in soup.find_all("li"):
if item.string is not None:
link = soup.new_tag(
"a",
href=url_for(
"chatui.question", ask=urllib.parse.quote_plus(item.string)
),
)
link.string = item.string
item.string = ""
item.append(link)
related_questions = soup
fact_link = markdown.markdown(
query_result.fetch_nearest_formatted(Format.CLICKABLE_URL)
)
server_url = request.url_root.replace("http", "https")
# Log the question and response to the log file.
log_question(uuid_value, question_captured, response_in_markdown)
return render_template(
"chatui/index.html",
question=question_captured,
context=context,
context_with_prefix=context_with_prefix,
response_in_markdown=response_in_markdown,
response_in_html=response_in_html,
product=product,
metadatas=metadatas,
fact_checked_answer=fact_checked_answer_in_html,
fact_link=fact_link,
related_questions=related_questions,
server_url=server_url,
uuid=uuid_value,
)
question = urllib.parse.unquote_plus(ask)
return ask_model(question)
else:
return redirect(url_for("chatui.index"))


# Construct a set of prompts using the user question, send the prompts to
# the language model, receive responses, and present them on a page.
def ask_model(question):
### PROMPT 1: AUGMENT THE USER QUESTION WITH CONTEXT.
# 1. Use the question to retrieve a list of related contents from the database.
# 2. Convert the list of related contents into plain Markdown text (context).
# 3. Add the custom condition text to the context.
# 4. Send the prompt (condition + context + question) to the language model.
query_result = docs_agent.query_vector_store(question)
context = markdown.markdown(query_result.fetch_formatted(Format.CONTEXT))
context_with_prefix = docs_agent.add_instruction_to_context(context)
response = docs_agent.ask_text_model_with_context(context_with_prefix, question)

### PROMPT 2: FACT-CHECK THE PREVIOUS RESPONSE.
fact_checked_response = docs_agent.ask_text_model_to_fact_check(
context_with_prefix, response
)

### PROMPT 3: GET 5 RELATED QUESTIONS.
# 1. Prepare a new question asking the model to come up with 5 related questions.
# 2. Ask the language model with the new question.
# 3. Parse the model's response into a list in HTML format.
new_question = (
"What are 5 questions developers might ask after reading the context?"
)
new_response = markdown.markdown(
docs_agent.ask_text_model_with_context(response, new_question)
)
related_questions = parse_related_questions_response_to_html_list(new_response)

### RETRIEVE SOURCE URLS.
# - Construct clickable URLs using the returned related contents above.
# - Extract the URL of the top related content for the fact-check message.
clickable_urls = markdown.markdown(
query_result.fetch_formatted(Format.CLICKABLE_URL)
)
fact_check_url = markdown.markdown(
query_result.fetch_nearest_formatted(Format.CLICKABLE_URL)
)

### PREPARE OTHER ELEMENTS NEEDED BY UI.
# - Create a uuid for this request.
# - Convert the first response from the model into HTML for rendering.
# - Convert the fact-check response from the model into HTML for rendering.
# - A workaround to get the server's URL to work with the rewrite and like features.
new_uuid = uuid.uuid1()
response_in_html = markdown.markdown(response)
fact_checked_response_in_html = markdown.markdown(fact_checked_response)
server_url = request.url_root.replace("http", "https")

### LOG THIS REQUEST.
log_question(new_uuid, question, response)

return render_template(
"chatui/index.html",
question=question,
context=context,
response=response,
response_in_html=response_in_html,
clickable_urls=clickable_urls,
fact_checked_response_in_html=fact_checked_response_in_html,
fact_check_url=fact_check_url,
related_questions=related_questions,
product=product,
server_url=server_url,
uuid=new_uuid,
)


# Parse a response containing a list of related questions from the language model
# and convert it into an HTML-based list.
def parse_related_questions_response_to_html_list(response):
soup = BeautifulSoup(response, "html.parser")
for item in soup.find_all("li"):
if item.string is not None:
link = soup.new_tag(
"a",
href=url_for(
"chatui.question", ask=urllib.parse.quote_plus(item.string)
),
)
link.string = item.string
item.string = ""
item.append(link)
return soup


# Log the question and response to the server's log file.
def log_question(uid, user_question, response):
date_format = "%m/%d/%Y %H:%M:%S %Z"
@@ -8,9 +8,9 @@ <h2>PaLM's answer</h2>
{{ response_in_html | safe }}
</span>
<h4>Important:</h4>
{{ fact_checked_answer | safe }}
{{ fact_checked_response_in_html | safe }}
<p id="fact-check-url">To verify this information, please check out:</p>
{{ fact_link | safe }}
{{ fact_check_url | safe }}
</div>
<div class="related-questions">
<h3>Related questions</h3>
@@ -25,7 +25,7 @@ <h2 class="handle">
{{ context | safe }}
<span class="reference-content">
<h4>Reference:</h4>
{{ metadatas | safe }}
{{ clickable_urls | safe }}
</span>
</div>
</section>
@@ -43,7 +43,7 @@ <h4 id="rewrite-response-header">PaLM's response:</h4>
{{ response_in_html | safe }}
</span>
<h4>Rewrite:</h4>
<textarea id="edit-text-area">{{ response_in_markdown | safe }}</textarea>
<textarea id="edit-text-area">{{ response | safe }}</textarea>
<label for="user-id">User ID:</label>
<input type="text" id="user-id" name="user-id" placeholder="Optional">
<br>
2 changes: 0 additions & 2 deletions demos/palm/python/docs-agent/condition.txt

This file was deleted.

