Re-enable documentation builds
slundberg committed Dec 19, 2023
1 parent 7f2b1b2 commit 8418ea6
Showing 40 changed files with 2,667 additions and 2,984 deletions.
41 changes: 31 additions & 10 deletions docs/api.rst
Original file line number Diff line number Diff line change
@@ -5,25 +5,46 @@ API Reference
This page contains the API reference for public objects and functions in Guidance.


.. _library_api:
.. _functions_api:

library
----------
functions
---------
.. autosummary::
:toctree: generated/

guidance.user
guidance.gen
guidance.select


.. _contexts_api:

context blocks
--------------
.. autosummary::
:toctree: generated/

guidance.instruction
guidance.system
guidance.user
guidance.assistant
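The context blocks listed above are used as Python `with` statements to open and close chat roles around generated text. A toy sketch of that pattern (standalone code, not guidance's actual implementation; the tag format and names are illustrative):

```python
from contextlib import contextmanager

class ToyState:
    """Illustrative stand-in for a model state: a plain text buffer."""
    def __init__(self, text=""):
        self.text = text

@contextmanager
def role(state, name):
    """Append an opening role tag on entry and a closing tag on exit."""
    state.text += f"<|{name}|>"
    try:
        yield state
    finally:
        state.text += "<|end|>"

lm = ToyState()
with role(lm, "system"):
    lm.text += "You are helpful."
with role(lm, "user"):
    lm.text += "Hi!"
```

The `try/finally` ensures the closing tag is emitted even if code inside the block raises, which is why a context manager fits role blocks well.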


.. _models_api:

models
-----
------
.. autosummary::
:toctree: generated/

guidance.Model
guidance.LlamaCpp
guidance.Transformers
guidance.VertexAI
guidance.OpenAI
guidance.models.Model
guidance.models.Instruct
guidance.models.Chat
guidance.models.LlamaCpp
guidance.models.Transformers
guidance.models.Remote
guidance.models.VertexAI
guidance.models.GoogleAI
guidance.models.OpenAI
guidance.models.LiteLLM
guidance.models.Cohere
guidance.models.Anthropic
22 changes: 11 additions & 11 deletions docs/api_examples.rst
@@ -3,18 +3,18 @@
.. _api_examples:

API Examples
----------------
------------

These examples parallel the namespace structure of Guidance. Each object or function in Guidance has a
corresponding example notebook here that demonstrates its API usage. The source notebooks
are `available on GitHub <https://github.com/microsoft/guidance/tree/master/notebooks/api_examples>`_.
are `available on GitHub <https://github.com/guidance-ai/guidance/tree/master/notebooks/api_examples>`_.


.. _library_examples:
.. _functions_examples:

library
==========
.. Examples for members of :ref:`guidance.library <library_api>`.
functions
=========
.. Examples for built-in guidance functions.
.. toctree::
:glob:
@@ -23,14 +23,14 @@ library
example_notebooks/api_examples/library/*


.. _llms_examples:
.. _models_examples:

llms
=======
.. Examples for members of :ref:`guidance.llms <llms_api>`.
models
======
.. Examples for members of :ref:`guidance.models <models_api>`.
.. toctree::
:glob:
:maxdepth: 1

example_notebooks/api_examples/llms/*
example_notebooks/api_examples/models/*
20 changes: 20 additions & 0 deletions docs/art_of_prompt_design.rst
@@ -0,0 +1,20 @@
.. currentmodule:: guidance

.. _art_of_prompt_design:

The Art of Prompt Design
------------------------

These notebooks demonstrate how to design effective prompts and guidance programs; they also cover commonly useful
design patterns. The source notebooks are `available on GitHub <https://github.com/guidance-ai/guidance/tree/master/notebooks/art_of_prompt_design>`_.


.. toctree::
:glob:
:maxdepth: 1

example_notebooks/art_of_prompt_design/use_clear_syntax.ipynb
example_notebooks/art_of_prompt_design/prompt_boundaries_and_token_healing.ipynb
example_notebooks/art_of_prompt_design/tool_use.ipynb
example_notebooks/art_of_prompt_design/react.ipynb
example_notebooks/art_of_prompt_design/rag.ipynb
14 changes: 8 additions & 6 deletions docs/index.rst
@@ -14,11 +14,13 @@ Guidance can be installed from `PyPI <https://pypi.org/project/guidance>`_::
pip install guidance


.. Contents
.. ========
Contents
========

.. .. toctree::
.. :maxdepth: 2
.. toctree::
:maxdepth: 2

.. API reference <api>
.. API examples <api_examples>
Tutorials <tutorials>
API reference <api>
API examples <api_examples>
The Art of Prompt Design <art_of_prompt_design>
21 changes: 21 additions & 0 deletions docs/tutorials.rst
@@ -0,0 +1,21 @@
.. currentmodule:: guidance

.. _tutorials:

Tutorials
---------

These notebooks demonstrate various features of ``guidance``. The source notebooks
are `available on GitHub <https://github.com/guidance-ai/guidance/tree/master/notebooks/tutorials>`_.


.. toctree::
:glob:
:maxdepth: 1

example_notebooks/tutorials/intro_to_guidance.ipynb
example_notebooks/tutorials/token_healing.ipynb
example_notebooks/tutorials/regex_constraints.ipynb
example_notebooks/tutorials/guidance_acceleration.ipynb
example_notebooks/tutorials/code_generation.ipynb
example_notebooks/tutorials/chat.ipynb
4 changes: 2 additions & 2 deletions guidance/__init__.py
@@ -81,8 +81,8 @@ def wrapped(*args, **kwargs):
return StatefulFunction(f, args, kwargs)

# attach this as a method of the model class (if given)
if model is not None:
setattr(model, f.__name__, f)
# if model is not None:
# setattr(model, f.__name__, f)

return wrapped

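The decorator in the hunk above returns a deferred `StatefulFunction` instead of calling `f` immediately. That deferral pattern can be sketched standalone (class and names here are hypothetical, not guidance's API):

```python
import functools

class DeferredCall:
    """Capture a function and its arguments without executing them yet."""
    def __init__(self, f, args, kwargs):
        self.f, self.args, self.kwargs = f, args, kwargs

    def run(self):
        # Execute the captured call on demand.
        return self.f(*self.args, **self.kwargs)

def deferred(f):
    @functools.wraps(f)
    def wrapped(*args, **kwargs):
        # Calling the decorated function builds a DeferredCall
        # rather than running the body.
        return DeferredCall(f, args, kwargs)
    return wrapped

@deferred
def add(a, b):
    return a + b

call = add(2, 3)     # nothing executed yet
result = call.run()  # executes now
```

Deferring execution lets a framework compose or replay calls against different model states instead of running them eagerly.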
20 changes: 12 additions & 8 deletions guidance/models/__init__.py
@@ -1,12 +1,16 @@
from ._model import Model, Chat
from .vertexai._vertexai import VertexAI, VertexAIChat, VertexAICompletion, VertexAIInstruct
from ._azure_openai import AzureOpenAI, AzureOpenAIChat, AzureOpenAICompletion, AzureOpenAIInstruct
from ._openai import OpenAI, OpenAIChat, OpenAIInstruct, OpenAICompletion
from ._model import Model, Instruct, Chat

# local models
from .transformers._transformers import Transformers, TransformersChat
from .llama_cpp import LlamaCpp, LlamaCppChat, MistralInstruct, MistralChat
from ._mock import Mock, MockChat
from ._lite_llm import LiteLLMChat, LiteLLMInstruct, LiteLLMCompletion
from ._cohere import CohereCompletion, CohereInstruct
from . import transformers
from ._anthropic import AnthropicChat

# remote models
from ._remote import Remote
from .vertexai._vertexai import VertexAI, VertexAIChat, VertexAICompletion, VertexAIInstruct
from ._azure_openai import AzureOpenAI, AzureOpenAIChat, AzureOpenAICompletion, AzureOpenAIInstruct
from ._openai import OpenAI, OpenAIChat, OpenAIInstruct, OpenAICompletion
from ._lite_llm import LiteLLM, LiteLLMChat, LiteLLMInstruct, LiteLLMCompletion
from ._cohere import Cohere, CohereCompletion, CohereInstruct
from ._anthropic import Anthropic, AnthropicChat
from ._googleai import GoogleAI, GoogleAIChat
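The import layout above suggests each backend combines a base model class with `Chat`/`Instruct` variants (e.g. `TransformersChat`, `LlamaCppChat`). A toy sketch of that composition pattern, with simplified hypothetical classes:

```python
class Model:
    """Base class for all backends."""
    def kind(self):
        return "base"

class Chat(Model):
    """Mixin marking chat-format models."""
    def kind(self):
        return "chat"

class LlamaCpp(Model):
    """A hypothetical local backend."""
    backend = "llama.cpp"

class LlamaCppChat(LlamaCpp, Chat):
    """Chat variant of the backend; kind() resolves to Chat via the MRO."""
    pass

m = LlamaCppChat()
```

Because `LlamaCpp` does not override `kind()`, Python's method resolution order falls through to `Chat`, so a single mixin can flip any backend into chat mode.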
8 changes: 8 additions & 0 deletions guidance/models/_anthropic.py
@@ -14,7 +14,15 @@
from ._remote import Remote

class Anthropic(Remote):
'''Represents an Anthropic model as exposed through their remote API.
Note that because this uses a remote API endpoint without built-in guidance support
there are some things we cannot do, like force the model to follow a pattern inside
a chat role block.
'''
def __init__(self, model, tokenizer=None, echo=True, caching=True, api_base=None, api_key=None, custom_llm_provider=None, temperature=0.0, max_streaming_tokens=1000, **kwargs):
'''Build a new Anthropic model object that represents a model in a given state.'''

try:
from anthropic import Anthropic
except ImportError:
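As the new docstring notes, a remote endpoint cannot be forced token-by-token to follow a pattern, so constraints on such backends effectively reduce to checking the reply after it arrives. A minimal sketch of that post-hoc fallback (illustrative code, not guidance's implementation):

```python
import re

def matches_pattern(text, pattern):
    """Post-hoc check: does the remote model's full reply match the pattern?"""
    return re.fullmatch(pattern, text) is not None

reply = "42"  # pretend this came back from a remote API
ok = matches_pattern(reply, r"\d+")
bad = matches_pattern("forty-two", r"\d+")
```

Local backends can instead mask invalid tokens during decoding, which is why the docstring singles out remote APIs as the limited case.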
1 change: 1 addition & 0 deletions guidance/models/_azure_openai.py
@@ -38,6 +38,7 @@ def __init__(
version="2023-10-01-preview",
**kwargs,
):
'''Build a new AzureOpenAI model object that represents a model in a given state.'''
if not is_openai or not hasattr(openai_package, "OpenAI"):
raise Exception(
"Please install the openai package version >= 1 using `pip install openai -U` "
1 change: 1 addition & 0 deletions guidance/models/_cohere.py
@@ -2,6 +2,7 @@

class Cohere(LiteLLM):
def __init__(self, model, tokenizer=None, echo=True, caching=True, api_base=None, api_key=None, custom_llm_provider=None, temperature=0.0, max_streaming_tokens=1000, **kwargs):
'''Build a new Cohere model object that represents a model in a given state.'''
try:
import tokenizers
except ImportError:
1 change: 1 addition & 0 deletions guidance/models/_googleai.py
@@ -13,6 +13,7 @@

class GoogleAI(Remote):
def __init__(self, model, tokenizer=None, echo=True, caching=True, api_key=None, organization=None, base_url=None, temperature=0.0, top_p=1.0, max_streaming_tokens=1000, **kwargs):
'''Build a new GoogleAI model object that represents a model in a given state.'''
try:
import google.generativeai as genai
except ImportError:
1 change: 1 addition & 0 deletions guidance/models/_lite_llm.py
@@ -16,6 +16,7 @@

class LiteLLM(Remote):
def __init__(self, model, tokenizer=None, echo=True, caching=True, api_base=None, api_key=None, custom_llm_provider=None, temperature=0.0, max_streaming_tokens=1000, **kwargs):
'''Build a new LiteLLM model object that represents a model in a given state.'''
try:
import litellm
except ImportError:
2 changes: 1 addition & 1 deletion guidance/models/_mock.py
@@ -5,7 +5,7 @@

class Mock(Model):
def __init__(self, byte_patterns=[], echo=True):

'''Build a new Mock model object that represents a model in a given state.'''
super().__init__(
# our tokens are all bytes and all lowercase letter pairs
[b"<s>"] + [bytes([i,j]) for i in range(ord('a'), ord('z')) for j in range(ord('a'), ord('z'))] + [bytes([i]) for i in range(256)],
90 changes: 46 additions & 44 deletions guidance/models/_model.py
@@ -34,11 +34,13 @@
image_pattern = re.compile(r"<\|_image:(.*?)\|>")

class Model:
'''A guidance model object, which represents a sequence model in a given state.
'''The base guidance model object, which represents a sequence model in a given state.
Model objects are immutable representations of model state, so whenever you change
them you get a new model object. However, these copies share the "expensive"
parts of the model like the parameters and KV-cache, so making copies is cheap.
.. automethod:: __add__
'''

open_blocks = {} # track what context blocks are open
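The copy-on-write design the docstring describes can be sketched standalone: each `__add__` returns a new lightweight state object while the expensive parts are shared by reference (toy code with hypothetical names, not guidance's implementation):

```python
class ToyModel:
    """Immutable-style state; `weights` stands in for parameters/KV-cache."""
    def __init__(self, weights, text=""):
        self.weights = weights  # shared across copies, never duplicated
        self.text = text

    def __add__(self, s):
        # Return a fresh state object; only the small text buffer grows,
        # the heavy `weights` object is passed along by reference.
        return ToyModel(self.weights, self.text + s)

base = ToyModel(weights={"params": [0.0] * 1000})
lm = base + "Hello, " + "world"
```

Because `base` is never mutated, earlier states stay valid and can be branched from freely, which is what makes copies cheap.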
@@ -310,9 +312,9 @@ def __add__(self, value):

return out

def endswith(self, s):
'''Checks if the current model state ends with the given value.'''
return self._current_prompt().endswith(s)
# def endswith(self, s):
# '''Checks if the current model state ends with the given value.'''
# return self._current_prompt().endswith(s)

def __len__(self):
'''The string length of the current state.
@@ -393,49 +395,49 @@ def log_prob(self, key, default=None):
# TODO: support calling without a key to get the log prob of the whole model
return self._variables_log_probs.get(key, default)

def get_cache(self):
return self.engine.cache
# def get_cache(self):
# return self.engine.cache

def tool_def(self, functions):

self += """
# Tools
"""
if len(functions) > 0:
self += '''## functions
namespace functions {
'''
for function in functions:
self += f"""// {function['description']}
type {function['name']} = (_: {{"""
for prop_name,prop_data in function["parameters"]["properties"].items():
if "description" in prop_data:
self += f"\n// {prop_data['description']}\n"
self += prop_name
if prop_name not in function["parameters"]["required"]:
self += "?"
self += ": "
if "enum" in prop_data:
for enum in prop_data["enum"]:
self += f'"{enum}"'
if enum != prop_data["enum"][-1]:
self += " | "
else:
self += prop_data["type"]
# def tool_def(self, functions):

# self += """
# # Tools

# """
# if len(functions) > 0:
# self += '''## functions

# namespace functions {

# '''
# for function in functions:
# self += f"""// {function['description']}
# type {function['name']} = (_: {{"""
# for prop_name,prop_data in function["parameters"]["properties"].items():
# if "description" in prop_data:
# self += f"\n// {prop_data['description']}\n"
# self += prop_name
# if prop_name not in function["parameters"]["required"]:
# self += "?"
# self += ": "
# if "enum" in prop_data:
# for enum in prop_data["enum"]:
# self += f'"{enum}"'
# if enum != prop_data["enum"][-1]:
# self += " | "
# else:
# self += prop_data["type"]

if prop_name != list(function["parameters"]["properties"].keys())[-1]:
self += ",\n"
self += """
}) => any;
"""
self[function['name']] = function
self += "} // namespace functions\n"
# if prop_name != list(function["parameters"]["properties"].keys())[-1]:
# self += ",\n"
# self += """
# }) => any;

# """
# self[function['name']] = function
# self += "} // namespace functions\n"

return self
# return self

def _run_stateless(lm, stateless_function, temperature=0.0, top_p=1.0, n=1):
assert Model._grammar_only == 0, "We can't run grammar parsing while in context free mode! (for example inside a block closer)"