
Commit

Merge branch 'openai:main' into main
phalgunagopal authored Feb 27, 2024
2 parents c9490d3 + 82ec660 commit f56d298
Showing 78 changed files with 3,551 additions and 872 deletions.
3 changes: 3 additions & 0 deletions .github/workflows/run_tests.yaml
@@ -4,6 +4,9 @@ on:
  pull_request:
    branches:
      - main
+  push:
+    branches:
+      - main

jobs:
  check_files:
2 changes: 2 additions & 0 deletions .github/workflows/test_eval.yaml
@@ -43,6 +43,8 @@ jobs:
          echo "new_files=$(cat new_files)" >> $GITHUB_ENV
      - name: Run oaieval command for each new YAML file
+        env:
+          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        run: |
          files="${{ env.new_files }}"
          if [ -n "$files" ]; then
18 changes: 9 additions & 9 deletions README.md
@@ -2,7 +2,7 @@

Evals provide a framework for evaluating large language models (LLMs) or systems built using LLMs. We offer an existing registry of evals to test different dimensions of OpenAI models and the ability to write your own custom evals for use cases you care about. You can also use your data to build private evals which represent the common LLM patterns in your workflow without exposing any of that data publicly.

-If you are building with LLMs, creating high quality evals is one of the most impactful things you can do. Without evals, it can be very difficult and time intensive to understand how different model versions might effect your use case. In the words of [OpenAI's President Greg Brockman](https://twitter.com/gdb/status/1733553161884127435):
+If you are building with LLMs, creating high quality evals is one of the most impactful things you can do. Without evals, it can be very difficult and time intensive to understand how different model versions might affect your use case. In the words of [OpenAI's President Greg Brockman](https://twitter.com/gdb/status/1733553161884127435):

<img width="596" alt="https://x.com/gdb/status/1733553161884127435?s=20" src="https://github.com/openai/evals/assets/35577566/ce7840ff-43a8-4d88-bb2f-6b207410333b">

@@ -14,7 +14,7 @@ To run evals, you will need to set up and specify your [OpenAI API key](https://

### Downloading evals

-Our Evals registry is stored using [Git-LFS](https://git-lfs.com/). Once you have downloaded and installed LFS, you can fetch the evals (from within your local copy of the evals repo) with:
+Our evals registry is stored using [Git-LFS](https://git-lfs.com/). Once you have downloaded and installed LFS, you can fetch the evals (from within your local copy of the evals repo) with:
```sh
cd evals
git lfs fetch --all
@@ -57,27 +57,27 @@ If you don't want to contribute new evals, but simply want to run them locally,
pip install evals
```

-You can find the full instructions to run existing evals in: [run-evals.md](docs/run-evals.md) and our existing eval templates in: [eval-templates.md](docs/eval-templates.md). For more advanced use cases like prompt chains or tool-using agents, you can use our: [Completion Function Protocol](docs/completion-fns.md).
+You can find the full instructions to run existing evals in [`run-evals.md`](docs/run-evals.md) and our existing eval templates in [`eval-templates.md`](docs/eval-templates.md). For more advanced use cases like prompt chains or tool-using agents, you can use our [Completion Function Protocol](docs/completion-fns.md).

We provide the option for you to log your eval results to a Snowflake database, if you have one or wish to set one up. For this option, you will further have to specify the `SNOWFLAKE_ACCOUNT`, `SNOWFLAKE_DATABASE`, `SNOWFLAKE_USERNAME`, and `SNOWFLAKE_PASSWORD` environment variables.

## Writing evals

We suggest getting started by:

-- Walking through the process for building an eval: [build-eval.md](docs/build-eval.md)
-- Exploring an example of implementing custom eval logic: [custom-eval.md](docs/custom-eval.md).
-- Writing your own completion functions: [completion-fns.md](docs/completion-fns.md)
+- Walking through the process for building an eval: [`build-eval.md`](docs/build-eval.md)
+- Exploring an example of implementing custom eval logic: [`custom-eval.md`](docs/custom-eval.md)
+- Writing your own completion functions: [`completion-fns.md`](docs/completion-fns.md)

-Please note that we are currently not accepting Evals with custom code! While we ask you to not submit such evals at the moment, you can still submit modelgraded evals with custom modelgraded YAML files.
+Please note that we are currently not accepting evals with custom code! While we ask you to not submit such evals at the moment, you can still submit model-graded evals with custom model-graded YAML files.

If you think you have an interesting eval, please open a pull request with your contribution. OpenAI staff actively review these evals when considering improvements to upcoming models.

## FAQ

Do you have any examples of how to build an eval from start to finish?

-- Yes! These are in the `examples` folder. We recommend that you also read through [build-eval.md](docs/build-eval.md) in order to gain a deeper understanding of what is happening in these examples.
+- Yes! These are in the `examples` folder. We recommend that you also read through [`build-eval.md`](docs/build-eval.md) in order to gain a deeper understanding of what is happening in these examples.

Do you have any examples of evals implemented in multiple different ways?

@@ -95,4 +95,4 @@ I am a world-class prompt engineer. I choose not to code. How can I contribute m

## Disclaimer

-By contributing to Evals, you are agreeing to make your evaluation logic and data under the same MIT license as this repository. You must have adequate rights to upload any data used in an Eval. OpenAI reserves the right to use this data in future service improvements to our product. Contributions to OpenAI Evals will be subject to our usual Usage Policies: https://platform.openai.com/docs/usage-policies.
+By contributing to evals, you are agreeing to make your evaluation logic and data under the same MIT license as this repository. You must have adequate rights to upload any data used in an eval. OpenAI reserves the right to use this data in future service improvements to our product. Contributions to OpenAI evals will be subject to our usual Usage Policies: https://platform.openai.com/docs/usage-policies.
53 changes: 26 additions & 27 deletions evals/cli/oaieval.py
@@ -7,8 +7,6 @@
import sys
from typing import Any, Mapping, Optional, Union, cast

-import openai
-
import evals
import evals.api
import evals.base
@@ -135,13 +133,37 @@ def run(args: OaiEvalArguments, registry: Optional[Registry] = None) -> str:
        eval_spec is not None
    ), f"Eval {args.eval} not found. Available: {list(sorted(registry._evals.keys()))}"

+    def parse_extra_eval_params(
+        param_str: Optional[str],
+    ) -> Mapping[str, Union[str, int, float]]:
+        """Parse a string of the form "key1=value1,key2=value2" into a dict."""
+        if not param_str:
+            return {}
+
+        def to_number(x: str) -> Union[int, float, str]:
+            try:
+                return int(x)
+            except (ValueError, TypeError):
+                pass
+            try:
+                return float(x)
+            except (ValueError, TypeError):
+                pass
+            return x
+
+        str_dict = dict(kv.split("=") for kv in param_str.split(","))
+        return {k: to_number(v) for k, v in str_dict.items()}
+
+    extra_eval_params = parse_extra_eval_params(args.extra_eval_params)
+    eval_spec.args.update(extra_eval_params)
+
    # If the user provided an argument to --completion_args, parse it into a dict here, to be passed to the completion_fn creation **kwargs
    completion_args = args.completion_args.split(",")
-    additonal_completion_args = {k: v for k, v in (kv.split("=") for kv in completion_args if kv)}
+    additional_completion_args = {k: v for k, v in (kv.split("=") for kv in completion_args if kv)}

    completion_fns = args.completion_fn.split(",")
    completion_fn_instances = [
-        registry.make_completion_fn(url, **additonal_completion_args) for url in completion_fns
+        registry.make_completion_fn(url, **additional_completion_args) for url in completion_fns
    ]

    run_config = {
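As a minimal sketch of what the relocated parser and the `--completion_args` handling above do (the input strings below are hypothetical, not part of this commit): extra eval params are coerced to int or float where possible, while completion args are kept as strings.

```python
# Hypothetical inputs; mirrors the parsing logic in the hunk above.
def to_number(x: str):
    for cast in (int, float):
        try:
            return cast(x)
        except (ValueError, TypeError):
            pass
    return x

raw = "temperature=0.7,max_tokens=256,mode=fast"
extra_eval_params = {k: to_number(v) for k, v in (kv.split("=") for kv in raw.split(","))}
assert extra_eval_params == {"temperature": 0.7, "max_tokens": 256, "mode": "fast"}

# --completion_args values are left as strings (no numeric coercion):
raw_completion_args = "api_base=http://localhost:8000,timeout=30"
parsed = {k: v for k, v in (kv.split("=") for kv in raw_completion_args.split(",") if kv)}
assert parsed == {"api_base": "http://localhost:8000", "timeout": "30"}
```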
@@ -188,29 +210,6 @@ def run(args: OaiEvalArguments, registry: Optional[Registry] = None) -> str:
        run_url = f"{run_spec.run_id}"
    logger.info(_purple(f"Run started: {run_url}"))

-    def parse_extra_eval_params(
-        param_str: Optional[str],
-    ) -> Mapping[str, Union[str, int, float]]:
-        """Parse a string of the form "key1=value1,key2=value2" into a dict."""
-        if not param_str:
-            return {}
-
-        def to_number(x: str) -> Union[int, float, str]:
-            try:
-                return int(x)
-            except (ValueError, TypeError):
-                pass
-            try:
-                return float(x)
-            except (ValueError, TypeError):
-                pass
-            return x
-
-        str_dict = dict(kv.split("=") for kv in param_str.split(","))
-        return {k: to_number(v) for k, v in str_dict.items()}
-
-    extra_eval_params = parse_extra_eval_params(args.extra_eval_params)
-
    eval_class = registry.get_class(eval_spec)
    eval: Eval = eval_class(
        completion_fns=completion_fn_instances,
5 changes: 3 additions & 2 deletions evals/cli/oaievalset.py
@@ -73,7 +73,7 @@ def get_parser() -> argparse.ArgumentParser:
class OaiEvalSetArguments(argparse.Namespace):
    model: str
    eval_set: str
-    registry_path: Optional[str]
+    registry_path: Optional[list[str]]
    resume: bool
    exit_on_error: bool
@@ -94,8 +94,9 @@ def run(
    for index, eval in enumerate(registry.get_evals(eval_set.evals)):
        if not eval or not eval.key:
            logger.debug("The eval #%d in eval_set is not valid", index)
+            continue

-        command = [run_command, args.model, eval.key] + unknown_args
+        command: list[str] = [run_command, args.model, eval.key] + unknown_args
        if args.registry_path:
            command.append("--registry_path")
            command = command + args.registry_path
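A short sketch of why the `registry_path` annotation widens to `Optional[list[str]]`: the value is concatenated onto the subprocess command as a list (the values below are illustrative only, not from the commit):

```python
# Illustrative values; mirrors the command assembly in the hunk above.
run_command = "oaieval"
model, eval_key = "gpt-3.5-turbo", "test-match"
registry_path = ["./registry", "./private_registry"]  # now typed Optional[list[str]]

command: list[str] = [run_command, model, eval_key]
if registry_path:
    command.append("--registry_path")
    command = command + registry_path  # list concatenation requires a list, not a str

print(command)
# ['oaieval', 'gpt-3.5-turbo', 'test-match', '--registry_path', './registry', './private_registry']
```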
8 changes: 4 additions & 4 deletions evals/completion_fns/langchain_llm.py
@@ -66,16 +66,16 @@ def _convert_dict_to_langchain_message(_dict) -> BaseMessage:


class LangChainChatModelCompletionFn(CompletionFn):
-    def __init__(self, llm: str, llm_kwargs: Optional[dict] = None, **kwargs) -> None:
+    def __init__(self, llm: str, chat_model_kwargs: Optional[dict] = None, **kwargs) -> None:
        # Import and resolve self.llm to an instance of llm argument here,
        # assuming it's always a subclass of BaseLLM
-        if llm_kwargs is None:
-            llm_kwargs = {}
+        if chat_model_kwargs is None:
+            chat_model_kwargs = {}
        module = importlib.import_module("langchain.chat_models")
        LLMClass = getattr(module, llm)

        if issubclass(LLMClass, BaseChatModel):
-            self.llm = LLMClass(**llm_kwargs)
+            self.llm = LLMClass(**chat_model_kwargs)
        else:
            raise ValueError(f"{llm} is not a subclass of BaseChatModel")
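The rename is mechanical; the underlying dynamic-resolution pattern is unchanged. A standalone sketch of that pattern, assuming `langchain` is installed (the class name and kwargs are examples, not part of the diff):

```python
import importlib

def resolve_chat_model(class_name: str, **chat_model_kwargs):
    """Instantiate a chat model class by name from langchain.chat_models."""
    module = importlib.import_module("langchain.chat_models")
    cls = getattr(module, class_name)  # raises AttributeError for unknown names
    return cls(**chat_model_kwargs)

# e.g. chat_model = resolve_chat_model("ChatOpenAI", temperature=0)
```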

14 changes: 10 additions & 4 deletions evals/completion_fns/retrieval.py
@@ -6,16 +6,16 @@
from typing import Any, Optional, Union

import numpy as np
-from openai import OpenAI
-
-client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
import pandas as pd
+from openai import OpenAI

from evals.api import CompletionFn, CompletionResult
from evals.prompt.base import ChatCompletionPrompt, CompletionPrompt
from evals.record import record_sampling
from evals.registry import Registry

+client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
+

def load_embeddings(embeddings_and_text_path: str):
    df = pd.read_csv(embeddings_and_text_path, converters={"embedding": literal_eval})
@@ -95,7 +95,13 @@ def __call__(self, prompt: Union[str, list[dict]], **kwargs: Any) -> RetrievalCo
            kwargs: Additional arguments to pass to the completion function call method.
        """
        # Embed the prompt
-        embedded_prompt = client.embeddings.create(model=self.embedding_model, input=CompletionPrompt(prompt).to_formatted_prompt()).data[0].embedding
+        embedded_prompt = (
+            client.embeddings.create(
+                model=self.embedding_model, input=CompletionPrompt(prompt).to_formatted_prompt()
+            )
+            .data[0]
+            .embedding
+        )

        embs = self.embeddings_df["embedding"].to_list()
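The reflowed call is behavior-preserving. For reference, a rough standalone equivalent using the openai v1 client (the model name and input are examples, not from the commit):

```python
from openai import OpenAI

client = OpenAI()  # falls back to the OPENAI_API_KEY environment variable

resp = client.embeddings.create(model="text-embedding-ada-002", input="hello world")
embedded_prompt = resp.data[0].embedding  # a list[float] embedding vector
```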

8 changes: 6 additions & 2 deletions evals/data.py
@@ -44,12 +44,16 @@ def zstd_open(filename: str, mode: str = "rb", openhook: Any = open) -> pyzstd.Z
    return pyzstd.ZstdFile(openhook(filename, mode), mode=mode)


-def open_by_file_pattern(filename: str, mode: str = "r", **kwargs: Any) -> Any:
+def open_by_file_pattern(filename: Union[str, Path], mode: str = "r", **kwargs: Any) -> Any:
    """Can read/write to files on gcs/local with or without gzipping. If file
    is stored on gcs, streams with blobfile. Otherwise use vanilla python open. If
    filename endswith gz, then zip/unzip contents on the fly (note that gcs paths and
    gzip are compatible)"""
    open_fn = partial(bf.BlobFile, **kwargs)

+    if isinstance(filename, Path):
+        filename = filename.as_posix()
+
    try:
        if filename.endswith(".gz"):
            return gzip_open(filename, openhook=open_fn, mode=mode)
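With this change, `str` and `pathlib.Path` arguments should behave identically; a quick sketch (the file name is hypothetical):

```python
from pathlib import Path

from evals.data import open_by_file_pattern

# Both forms now work; Path objects are normalized via as_posix() first.
with open_by_file_pattern("samples.jsonl") as f:
    first_line = f.readline()
with open_by_file_pattern(Path("samples.jsonl")) as f:
    assert f.readline() == first_line
```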
@@ -188,7 +192,7 @@ def _to_py_types(o: Any, exclude_keys: List[Text]) -> Any:
    if isinstance(o, pydantic.BaseModel):
        return {
            k: _to_py_types(v, exclude_keys=exclude_keys)
-            for k, v in json.loads(o.json()).items()
+            for k, v in json.loads(o.model_dump_json()).items()
            if k not in exclude_keys
        }
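The `o.json()` to `o.model_dump_json()` switch tracks pydantic's v1-to-v2 API rename; a minimal illustration:

```python
import json

import pydantic

class User(pydantic.BaseModel):
    name: str

user = User(name="a")
assert json.loads(user.model_dump_json()) == {"name": "a"}  # v2; user.json() was the v1 spelling
```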

34 changes: 25 additions & 9 deletions evals/data_test.py
@@ -1,8 +1,8 @@
import ast
import dataclasses
+from typing import Optional, Text

from pydantic import BaseModel
-from typing import Text, Optional

from evals.data import jsondumps


@@ -17,11 +17,27 @@ class MyDataClass:
    last_name: Text
    sub_class: Optional[MyPydanticClass] = None

+
def test_jsondumps():
-    assert "{\"first_name\": \"a\", \"last_name\": \"b\", \"sub_class\": null}" == jsondumps(MyDataClass(first_name="a", last_name="b"))
-    assert "{\"first_name\": \"a\", \"sub_class\": null}" == jsondumps(MyDataClass(first_name="a", last_name="b"), exclude_keys=["last_name"])
-    assert "{\"first_name\": \"a\", \"last_name\": \"b\"}" == jsondumps(MyPydanticClass(first_name="a", last_name="b"))
-    assert "{\"first_name\": \"a\"}" == jsondumps(MyPydanticClass(first_name="a", last_name="b"), exclude_keys=["last_name"])
-    assert "{\"first_name\": \"a\", \"last_name\": \"b\"}" == jsondumps({"first_name": "a", "last_name": "b"})
-    assert "{\"first_name\": \"a\"}" == jsondumps({"first_name": "a", "last_name": "b"}, exclude_keys=["last_name"])
-    assert "{\"first_name\": \"a\", \"sub_class\": {\"first_name\": \"a\"}}" == jsondumps(MyDataClass("a", "b", MyPydanticClass(first_name="a", last_name="b")), exclude_keys=["last_name"])
+    assert '{"first_name": "a", "last_name": "b", "sub_class": null}' == jsondumps(
+        MyDataClass(first_name="a", last_name="b")
+    )
+    assert '{"first_name": "a", "sub_class": null}' == jsondumps(
+        MyDataClass(first_name="a", last_name="b"), exclude_keys=["last_name"]
+    )
+    assert '{"first_name": "a", "last_name": "b"}' == jsondumps(
+        MyPydanticClass(first_name="a", last_name="b")
+    )
+    assert '{"first_name": "a"}' == jsondumps(
+        MyPydanticClass(first_name="a", last_name="b"), exclude_keys=["last_name"]
+    )
+    assert '{"first_name": "a", "last_name": "b"}' == jsondumps(
+        {"first_name": "a", "last_name": "b"}
+    )
+    assert '{"first_name": "a"}' == jsondumps(
+        {"first_name": "a", "last_name": "b"}, exclude_keys=["last_name"]
+    )
+    assert '{"first_name": "a", "sub_class": {"first_name": "a"}}' == jsondumps(
+        MyDataClass("a", "b", MyPydanticClass(first_name="a", last_name="b")),
+        exclude_keys=["last_name"],
+    )
2 changes: 1 addition & 1 deletion evals/elsuite/ballots/eval.py
@@ -117,7 +117,7 @@ def query(
            messages.append({"role": "user", "content": response})
            response = query(influencer_prompt, fn=self.influencer_fn)
            messages.append({"role": "assistant", "content": response})
-        messages.append({"role": "assistant", "content": make_decision_prompt})
+        messages.append({"role": "system", "content": make_decision_prompt})
        response = query(
            voter_prompt,
            reversed_roles=True,
Expand Down

