[Release] Docs Agent version 0.1.6
What's changed:

- Update the prompt condition to be more specific and follow
  best practices in prompting.
- Enable the chatbot server to provide a custom condition string to
  the DocsAgent class.
- Bug fix: Provide a custom condition when asking the PaLM model for 5
  related questions.
- Add a new config variable, `log_level`, to specify the log level
  ("NORMAL" or "VERBOSE").
- Improve the rendering of code text and code blocks on the chat app UI.
- Rephrase the sentence that describes the page, section, and subsection
  structure of Markdown pages in the `markdown_to_plain_text.py` script.
- Update the pre-processing diagram to fix a typo (`appscripts` to `apps_script`).
kyolee415 committed Dec 12, 2023
1 parent edcfefa commit 14b57cf
Showing 11 changed files with 143 additions and 58 deletions.
19 changes: 10 additions & 9 deletions demos/palm/python/docs-agent/README.md
@@ -24,7 +24,7 @@ and is required that you have access to Google’s [PaLM API][genai-doc-site].
Keep in mind that this approach does not involve “fine-tuning” an LLM (large language model).
Instead, the Docs Agent sample app uses a mixture of prompt engineering and embedding techniques,
also known as Retrieval Augmented Generation (RAG), on top of a publicly available LLM model
like PaLM 2.

![Docs Agent architecture](docs/images/docs-agent-architecture-01.png)
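
For orientation, here is a minimal sketch of that RAG flow, assembled from the
helper calls that appear in this commit's `chatui.py` diff (the import lines are
assumptions; they are not part of this diff):

```python
# Minimal sketch of the RAG flow; error handling omitted.
from docs_agent import DocsAgent
from chroma import Format  # assumption: this import is not shown in the diff

docs_agent = DocsAgent()
question = "How do I update the vector database?"  # example question

# 1. Retrieval: fetch the most relevant chunks from the Chroma vector store.
query_result = docs_agent.query_vector_store(question)
context = query_result.fetch_formatted(Format.CONTEXT)

# 2. Augmentation: prepend the prompt condition to the retrieved context.
context_with_instruction = docs_agent.add_instruction_to_context(context)

# 3. Generation: ask the PaLM model with the augmented prompt.
response = docs_agent.ask_text_model_with_context(context_with_instruction, question)
```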

@@ -210,10 +210,10 @@ by the PaLM model:
- Additional condition (for fact-checking):

```
Can you compare the text below to the context provided
in this prompt above and write a short message that warns the readers about
which part of the text they should consider fact-checking? (Please keep your
response concise and focus on only one important item.)"
Can you compare the text below to the information provided in this prompt above
and write a short message that warns the readers about which part of the text they
should consider fact-checking? (Please keep your response concise and focus on only
one important item.)"
```

- Previously generated response
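
The method that assembles and sends this prompt, `ask_text_model_to_fact_check()`,
is called from `chatui.py` in this commit, but its body is not part of the diff.
A plausible sketch, assuming it reuses the generic text-model call shown in
`docs_agent.py` below:

```python
# Hypothetical sketch of ask_text_model_to_fact_check(); the real body is not
# shown in this diff. Assumes the fact-check question plus the previously
# generated response become the new "question" for the text model.
def ask_text_model_to_fact_check(self, context_with_instruction, response):
    question = self.fact_check_question + "\n\nText: " + response
    return self.ask_text_model_with_context(context_with_instruction, question)
```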
@@ -266,8 +266,7 @@ The following is the exact structure of this prompt:
- Condition:

```
You are a helpful chatbot answering questions from users. Read the following context first
and answer the question at the end:
Read the context below and answer the question at the end:
```

- Context:
@@ -578,8 +577,10 @@ To customize settings in the Docs Agent chat app, do the following:
condition for your custom dataset, for example:

```
condition_text: "You are a helpful chatbot answering questions from developers working on
Flutter apps. Read the following context first and answer the question at the end:"
condition_text: "You are a helpful chatbot answering questions from **Flutter app developers**.
Read the context below first and answer the user's question at the end.
In your answer, provide a summary in three or five sentences. (BUT DO NOT USE
ANY INFORMATION YOU KNOW ABOUT THE WORLD.)"
```

### 2. Launch the Docs Agent chat app
24 changes: 17 additions & 7 deletions demos/palm/python/docs-agent/chatbot/chatui.py
@@ -25,6 +25,7 @@
json,
)
import markdown
import markdown.extensions.fenced_code
from bs4 import BeautifulSoup
import urllib
import os
@@ -145,22 +146,31 @@ def ask_model(question):
query_result = docs_agent.query_vector_store(question)
context = query_result.fetch_formatted(Format.CONTEXT)
context_with_instruction = docs_agent.add_instruction_to_context(context)
response = docs_agent.ask_text_model_with_context(context_with_instruction, question)
response = docs_agent.ask_text_model_with_context(
context_with_instruction, question
)

### PROMPT 2: FACT-CHECK THE PREVIOUS RESPONSE.
fact_checked_response = docs_agent.ask_text_model_to_fact_check(
context_with_instruction, response
)

### PROMPT 3: GET 5 RELATED QUESTIONS.
# 1. Prepare a new question asking the model to come up with 5 related questions.
# 2. Ask the language model with the new question.
# 3. Parse the model's response into a list in HTML format.
# 1. Use the response from Prompt 1 as context and add a custom condition.
# 2. Prepare a new question asking the model to come up with 5 related questions.
# 3. Ask the language model with the new question.
# 4. Parse the model's response into a list in HTML format.
new_condition = "Read the context below and answer the user's question at the end."
new_context_with_instruction = docs_agent.add_custom_instruction_to_context(
new_condition, response
)
new_question = (
"What are 5 questions developers might ask after reading the context?"
)
new_response = markdown.markdown(
docs_agent.ask_text_model_with_context(response, new_question)
docs_agent.ask_text_model_with_context(
new_context_with_instruction, new_question
)
)
related_questions = parse_related_questions_response_to_html_list(new_response)
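
The parser called on the last line above, `parse_related_questions_response_to_html_list()`,
is defined elsewhere in `chatui.py`. A minimal sketch of such a parser, assuming
the model's answer has already been converted to HTML by `markdown.markdown()`
(`BeautifulSoup` is already imported at the top of this file):

```python
from bs4 import BeautifulSoup

# Hypothetical sketch; the real parser in chatui.py is not shown in this diff.
# Extracts the <li> items from the HTML-converted model response and rebuilds
# them as a clean <ul> list for the chat UI.
def parse_related_questions_response_to_html_list(new_response: str) -> str:
    soup = BeautifulSoup(new_response, "html.parser")
    items = [li.get_text(strip=True) for li in soup.find_all("li")]
    return "<ul>" + "".join(f"<li>{item}</li>" for item in items) + "</ul>"
```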

@@ -181,8 +191,8 @@ def ask_model(question):
# - Convert the fact-check response from the model into HTML for rendering.
# - A workaround to get the server's URL to work with the rewrite and like features.
new_uuid = uuid.uuid1()
context_in_html = markdown.markdown(context)
response_in_html = markdown.markdown(response)
context_in_html = markdown.markdown(context, extensions=["fenced_code"])
response_in_html = markdown.markdown(response, extensions=["fenced_code"])
fact_checked_response_in_html = markdown.markdown(fact_checked_response)
server_url = request.url_root.replace("http", "https")

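The `fenced_code` extension is what makes the code-block rendering improvement in
this commit work: without it, Python-Markdown parses a triple-backtick fence as an
inline code span rather than a code block. A small illustration (output shown
approximately in the comments):

```python
import markdown

text = "```\nprint('hello')\n```"

# Without the extension, the backticks become an inline code span:
#   <p><code>print('hello')</code></p>
print(markdown.markdown(text))

# With fenced_code, the fence becomes a real code block that the chat app's
# CSS can style:
#   <pre><code>print('hello')
#   </code></pre>
print(markdown.markdown(text, extensions=["fenced_code"]))
```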
5 changes: 5 additions & 0 deletions demos/palm/python/docs-agent/chatbot/static/css/style.css
@@ -67,6 +67,11 @@ li {
margin: 0 0 0.3em;
}

code {
font-family: math;
color: darkgreen;
}

/* ======= Style layout by ID ======= */

#callout-box {
43 changes: 23 additions & 20 deletions demos/palm/python/docs-agent/chroma.py
@@ -47,17 +47,25 @@ def __init__(self, chroma_dir) -> None:
def list_collections(self):
return self.client.list_collections()

def get_collection(self, name, embedding_function=None):
def get_collection(self, name, embedding_function=None, embedding_model=None):
if embedding_function is not None:
return ChromaCollection(
self.client.get_collection(name, embedding_function=embedding_function),
embedding_function,
)
# Read embedding meta information from the collection
collection = self.client.get_collection(name, lambda x: None)
embedding_model = None
if collection.metadata:
if embedding_model is None and collection.metadata:
embedding_model = collection.metadata.get("embedding_model", None)
if embedding_model is None:
# If embedding_model is not found in the metadata,
# use `models/embedding-gecko-001` by default.
logging.info(
"Embedding model is not specified in the metadata of "
"the collection %s. Using the default PaLM embedding model.",
name,
)
embedding_model = "models/embedding-gecko-001"

if embedding_model == "local/all-mpnet-base-v2":
base_dir = os.path.dirname(os.path.abspath(__file__))
@@ -67,24 +75,19 @@ def get_collection(self, name, embedding_function=None):
model_name=local_model_dir
)
)
elif embedding_model is None or embedding_model == "palm/embedding-gecko-001":
if embedding_model is None:
logging.info(
"Embedding model is not specified in the metadata of "
"the collection %s. Using the default PaLM embedding model.",
name,
)
palm = PaLM(embed_model="models/embedding-gecko-001", find_models=False)
# We can not redefine embedding_function with def and
# have to assign a lambda to it
# pylint: disable-next=unnecessary-lambda-assignment
embedding_function = lambda texts: [palm.embed(text) for text in texts]

else:
raise ChromaEmbeddingModelNotSupportedError(
f"Embedding model {embedding_model} specified by collection {name} "
"is not supported."
)
print("Embedding model: " + str(embedding_model))
try:
palm = PaLM(embed_model=embedding_model, find_models=False)
# We cannot redefine embedding_function with def and
# have to assign a lambda to it
# pylint: disable-next=unnecessary-lambda-assignment
embedding_function = lambda texts: [palm.embed(text) for text in texts]
except:
raise ChromaEmbeddingModelNotSupportedError(
f"Embedding model {embedding_model} specified by collection {name} "
"is not supported."
)

return ChromaCollection(
self.client.get_collection(name, embedding_function=embedding_function),
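With the new `embedding_model` keyword, callers can pin the embedding model
explicitly instead of relying on collection metadata. A usage sketch, with the
directory and collection names taken from this commit's defaults:

```python
from chroma import Chroma

chroma = Chroma("vector_stores/chroma")

# Explicit model: skips the metadata lookup.
collection = chroma.get_collection(
    "docs_collection", embedding_model="models/embedding-gecko-001"
)

# No model given: falls back to the collection's metadata, then to
# models/embedding-gecko-001 if no embedding_model entry is found there.
collection = chroma.get_collection("docs_collection")
```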
33 changes: 25 additions & 8 deletions demos/palm/python/docs-agent/config.yaml
@@ -16,6 +16,16 @@

### Configuration for Docs Agent ###

### PaLM environment
#
# api_endpoint: The PaLM API endpoint used by Docs Agent.
#
# embedding_model: The PaLM embedding model used to generate embeddings.
#
api_endpoint: "generativelanguage.googleapis.com"
embedding_model: "models/embedding-gecko-001"


### Docs Agent environment
#
# product_name: The name of your product that appears on the chatbot UI.
@@ -31,10 +41,14 @@
# collection_name: The name used to identify a dataset collection by
# the Chroma vector database.
#
# log_level: The verbosity level of logs printed on the terminal
# by the chatbot app: NORMAL or VERBOSE
#
product_name: "My product"
output_path: "data/plain_docs"
vector_db_dir: "vector_stores/chroma"
collection_name: "docs_collection"
log_level: "NORMAL"


### Documentation sources
@@ -70,14 +84,17 @@ input:
# model_error_message: The error message returned to the user when language
# models are unable to provide responses.
#
condition_text: "You are a helpful chatbot answering questions from users. Read
the following context first and answer the question at the end:"
condition_text: "You are a helpful chatbot answering questions from users.
Read the context below first and answer the user's question at the end.
In your answer, provide a summary in three or five sentences. (BUT DO NOT USE
ANY INFORMATION YOU KNOW ABOUT THE WORLD.)"

fact_check_question: "Can you compare the text below to the context provided
in this prompt above and write a short message that warns the readers about
which part of the text they should consider fact-checking? (Please keep your
response concise and focus on only one important item.)"
fact_check_question: "Can you compare the text below to the information
provided in this prompt above and write a short message that warns the readers
about which part of the text they should consider fact-checking? (Please keep
your response concise, focus on only one important item, but DO NOT USE BOLD
TEXT IN YOUR RESPONSE.)"

model_error_message: "PaLM is not able to answer this question at the
moment. Rephrase the question and try asking again."
model_error_message: "PaLM is not able to answer this question at the moment.
Rephrase the question and try asking again."

(Two changed files in this commit cannot be displayed in the diff view.)
43 changes: 38 additions & 5 deletions demos/palm/python/docs-agent/docs_agent.py
@@ -34,15 +34,16 @@

# Select your PaLM API endpoint.
PALM_API_ENDPOINT = "generativelanguage.googleapis.com"

palm = PaLM(api_key=API_KEY, api_endpoint=PALM_API_ENDPOINT)

BASE_DIR = os.path.dirname(os.path.abspath(__file__))
EMBEDDING_MODEL = None

# Set up the path to the chroma vector database.
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
LOCAL_VECTOR_DB_DIR = os.path.join(BASE_DIR, "vector_stores/chroma")
COLLECTION_NAME = "docs_collection"

# Set the log level for the DocsAgent class: NORMAL or VERBOSE
LOG_LEVEL = "NORMAL"

IS_CONFIG_FILE = True
if IS_CONFIG_FILE:
config_values = read_config.ReadConfig()
@@ -51,10 +52,16 @@
CONDITION_TEXT = config_values.returnConfigValue("condition_text")
FACT_CHECK_QUESTION = config_values.returnConfigValue("fact_check_question")
MODEL_ERROR_MESSAGE = config_values.returnConfigValue("model_error_message")
LOG_LEVEL = config_values.returnConfigValue("log_level")
PALM_API_ENDPOINT = config_values.returnConfigValue("api_endpoint")
EMBEDDING_MODEL = config_values.returnConfigValue("embedding_model")

# Select the number of contents to be used for providing context.
NUM_RETURNS = 5

# Initialize the PaLM instance.
palm = PaLM(api_key=API_KEY, api_endpoint=PALM_API_ENDPOINT)


class DocsAgent:
"""DocsAgent class"""
@@ -65,7 +72,9 @@ def __init__(self):
"Using the local vector database created at %s", LOCAL_VECTOR_DB_DIR
)
self.chroma = Chroma(LOCAL_VECTOR_DB_DIR)
self.collection = self.chroma.get_collection(COLLECTION_NAME)
self.collection = self.chroma.get_collection(
COLLECTION_NAME, embedding_model=EMBEDDING_MODEL
)
# Update PaLM's custom prompt strings
self.prompt_condition = CONDITION_TEXT
self.fact_check_question = FACT_CHECK_QUESTION
@@ -74,6 +83,9 @@ def __init__(self):
# Use this method for talking to PaLM (Text)
def ask_text_model_with_context(self, context, question):
new_prompt = f"{context}\n\nQuestion: {question}"
# Print the prompt for debugging if the log level is VERBOSE.
if LOG_LEVEL == "VERBOSE":
self.print_the_prompt(new_prompt)
try:
response = palm.generate_text(
prompt=new_prompt,
@@ -119,3 +131,24 @@ def add_instruction_to_context(self, context):
new_context = ""
new_context += self.prompt_condition + "\n\n" + context
return new_context

# Add custom instruction as a prefix to the context
def add_custom_instruction_to_context(self, condition, context):
new_context = ""
new_context += condition + "\n\n" + context
return new_context

# Generate an embedding given text input
def generate_embedding(self, text):
return palm.embed(text)

# Print the prompt on the terminal for debugging
def print_the_prompt(self, prompt):
print("#########################################")
print("# PROMPT #")
print("#########################################")
print(prompt)
print("#########################################")
print("# END OF PROMPT #")
print("#########################################")
print("\n")
2 changes: 1 addition & 1 deletion demos/palm/python/docs-agent/pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "docs-agent"
version = "0.1.5"
version = "0.1.6"
description = ""
authors = ["Docs Agent contributors"]
readme = "README.md"
markdown_to_plain_text.py
@@ -192,7 +192,7 @@ def process_page_and_section_titles(markdown_text):
new_line = (
'# The "'
+ page_title
+ '" page contains the following content:\n\n'
+ '" page includes the following information:\n'
)

if section_title:
@@ -201,7 +201,7 @@
+ page_title
+ '" page has the "'
+ section_title
+ '" section that contains the following content:\n'
+ '" section that includes the following information:\n'
)

if subsection_title:
@@ -212,7 +212,7 @@
+ section_title
+ '" section has the "'
+ subsection_title
+ '" subsection that contains the following content:\n'
+ '" subsection that includes the following information:\n'
)

if skip_this_line is False:
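Taken together, the reworded templates produce plain-text headings like the
following for a page with one section (titles are made up for illustration, and
the exact `##` prefix for the section heading sits outside this hunk, so it is
assumed):

```
# The "Get started" page includes the following information:

## The "Get started" page has the "Setup" section that includes the following information:
```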