0.8.0
What's Changed
New
- [feature] add CLI to rerun and compare a logged completion with other models by @wenzhe-log10 in #141

  ```
  log10 completions benchmark_models --help
  Usage: log10 completions benchmark_models [OPTIONS]

  Options:
    --ids TEXT            Completion ID
    --tags TEXT           Filter completions by specific tags. Separate
                          multiple tags with commas.
    --limit TEXT          Specify the maximum number of completions to
                          retrieve.
    --offset TEXT         Set the starting point (offset) from where to begin
                          fetching completions.
    --models TEXT         Comma separated list of models to compare
    --temperature FLOAT   Temperature
    --max_tokens INTEGER  Max tokens
    --top_p FLOAT         Top p
    --analyze_prompt      Run prompt analyzer on the messages.
    -f, --file TEXT       Specify the filename for the report in markdown
                          format.
    --help                Show this message and exit.
  ```
  Examples:

  - Compare a completion (by its id) against other models:

    ```
    log10 completions benchmark_models --ids 25572f3c-c2f1-45b0-9de8-d96be4c4e544 --models=gpt-3.5-turbo,mistral-small-latest,claude-3-haiku-20240307
    ```

  - Compare completions filtered by the tag `summ_test`, using 2 completions with the model claude-3-haiku. This also calls analyze_prompt to get suggestions on the prompt, and saves everything into a report.md file:

    ```
    log10 completions benchmark_models --tags summ_test --limit 2 --models=claude-3-haiku-20240307 --analyze_prompt -f report.md
    ```
- add load.log10(lamini) to support the Lamini SDK, with an example, by @wenzhe-log10 in #143

  ```python
  import lamini

  from log10.load import log10

  log10(lamini)

  llm = lamini.Lamini("meta-llama/Llama-2-7b-chat-hf")
  response = llm.generate("What's 2 + 9 * 3?")
  print(response)
  ```
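Integrations in this style typically wrap the client's generate call so each request/response pair can be captured for logging. A minimal sketch of that wrapping pattern, for illustration only — `patch_generate` and `FakeLLM` are hypothetical names, not the actual log10.load internals:

```python
import functools

def patch_generate(client_cls, log_fn=print):
    """Wrap client_cls.generate so every call and response is logged.

    Illustrative sketch only; the real log10.load integration differs.
    """
    original = client_cls.generate

    @functools.wraps(original)
    def wrapped(self, *args, **kwargs):
        response = original(self, *args, **kwargs)
        log_fn({"args": args, "kwargs": kwargs, "response": response})
        return response

    client_cls.generate = wrapped
    return client_cls

# Hypothetical stand-in client used to demonstrate the patch.
class FakeLLM:
    def generate(self, prompt):
        return f"echo: {prompt}"

events = []
patch_generate(FakeLLM, log_fn=events.append)
print(FakeLLM().generate("hi"))  # the wrapped call logs, then returns normally
```

The patched class keeps its original behavior; the wrapper only adds a side channel for the logger.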
- update the `make` logging tests by @wenzhe-log10 in #139
Fixes
- avoid calling async callback in litellm.completion call by @wenzhe-log10 in #135
- fix cli import issue when magentic is not installed by @wenzhe-log10 in #140
- fix prompt analyzer _suggest by @wenzhe-log10 in #142
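Optional-import fixes like the magentic one above are commonly implemented with a guarded import, so the CLI still loads when the extra package is absent. A sketch of that general pattern, assuming nothing about the exact change in #140 — `magentic_available` is a hypothetical helper:

```python
# Guarded import for an optional dependency: the CLI can still start when
# magentic is not installed, and magentic-backed commands are simply skipped.
try:
    import magentic  # optional extra
    HAS_MAGENTIC = True
except ImportError:
    magentic = None
    HAS_MAGENTIC = False

def magentic_available() -> bool:
    """Report whether magentic-backed CLI commands can be registered."""
    return HAS_MAGENTIC

print(magentic_available())
```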
Full Changelog: 0.7.5...0.8.0