Releases: log10-io/log10
0.9.2
What's Changed
- ENG-851 Update google-generativeai version, example and test by @kxtran in #185
- Update openai version to 1.33.0 by @kxtran in #186
- Disable warning for async openai embedding calls by @wenzhe-log10 in #189
- [ENG-856] Remove large file from repo by @nqn in #187
- [ENG-857] Filter large image messages by @nqn in #190
Full Changelog: 0.9.1...0.9.2
0.9.1
0.9.0
What's Changed
New
- Add fetching autofeedback by completion id to cli by @kxtran in #175
  To get auto-generated feedback for a completion, use:
  ```
  log10 feedback autofeedback get
  ```
- Use non blocking async for AsyncOpenAI and AsyncAnthropic by @wenzhe-log10 in #179
Release 0.9.0 includes significant improvements in how we handle concurrency when using LLMs in asynchronous streaming mode. This update ensures that logging at steady state incurs no overhead (previously up to 1-2 seconds), providing a smoother and more efficient experience in latency-critical settings.

Important Considerations for Short-Lived Scripts:

💡 For short-lived scripts using asynchronous streaming, you may need to wait until all logging requests have completed before terminating your script. We provide a convenient method called finalize() to handle this. Here's how you can implement it in your code:

```python
from log10._httpx_utils import finalize

...

await finalize()
```

Ensure finalize() is called once, at the very end of your event loop, to guarantee that all pending logging requests are processed before the script exits.
For more details, check the async logging examples.
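As a fuller illustration, here is a minimal sketch of a short-lived async streaming script that flushes pending logs before exiting. It assumes the module-level log10(openai) patch (the same pattern shown for genai and lamini in the releases below) also instruments AsyncOpenAI; the model name and prompt are illustrative.

```python
# Minimal sketch, assuming log10(openai) instruments AsyncOpenAI;
# the model name and prompt are illustrative.
import asyncio

import openai
from openai import AsyncOpenAI

from log10._httpx_utils import finalize
from log10.load import log10

log10(openai)


async def main():
    client = AsyncOpenAI()
    stream = await client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Count to five."}],
        stream=True,
    )
    async for chunk in stream:
        if chunk.choices:
            print(chunk.choices[0].delta.content or "", end="")
    # Called once, at the very end of the event loop, so all pending
    # log10 logging requests finish before the script exits.
    await finalize()


asyncio.run(main())
```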
Chores
- Add dependabot workflow by @kxtran in #169
- Remove setup.py file by @kxtran in #174
- Verify generated completions submitted to the platform by @kxtran in #172
Full Changelog: 0.8.6...0.9.0
0.8.6
What's Changed
Bug fixes
- Update parsing async openai streaming response logic by @kxtran in #167
  Installing the latest magentic version pulls in openai >= 1.26.0, whose streaming responses include a usage block and an empty choices array, which caused the parsing logic to raise an exception. The fix handles the new streaming responses and makes the parsing code more robust.
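  As a rough illustration of the kind of guard this implies (not log10's actual implementation), a parser can skip the usage-only final chunk before indexing into choices:
  ```python
  # Illustrative sketch, not log10's internal code: with openai >= 1.26.0,
  # the final streaming chunk can carry a `usage` block and an empty
  # `choices` list, so guard before indexing.
  def extract_delta_text(chunk) -> str | None:
      """Return the text delta from a streaming chunk, or None for the
      usage-only final chunk."""
      if not chunk.choices:  # usage-only final chunk: choices == []
          return None
      return chunk.choices[0].delta.content
  ```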
Full Changelog: 0.8.5...0.8.6
0.8.5
What's Changed
New
- Add gpt-4o in cli benchmark_models by @wenzhe-log10 in #159
- ENG-724: Add a function in load.py to return the last_completion_id by @nqn in #165
- ENG-784 Add anthropic async and tools stream api support by @kxtran in #162
Chores
- Update examples and dependencies by @wenzhe-log10 in #157
- Add cronjob tests via github actions by @kxtran in #158
- Update langchain test assertion by @kxtran in #160
Full Changelog: 0.8.4...0.8.5
0.8.4
0.8.3
What's Changed
Bug fixes
- fix condition when finish_reason is stop for tool_calls by @wenzhe-log10 in #152
- strip not given kwargs for openai sync calls if openai.NOT_GIVEN is assigned to kwargs by @wenzhe-log10 in #154
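  As a rough sketch of the idea (illustrative, not log10's internal code), stripping NOT_GIVEN values before forwarding kwargs can look like this:
  ```python
  import openai

  # Illustrative sketch, not log10's internal code: drop any kwargs that
  # were explicitly assigned openai.NOT_GIVEN before forwarding them.
  def strip_not_given(kwargs: dict) -> dict:
      return {k: v for k, v in kwargs.items() if v is not openai.NOT_GIVEN}
  ```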
Chores
- minor update in makefile for logging test by @wenzhe-log10 in #155
Full Changelog: 0.8.2...0.8.3
0.8.2
What's Changed
New
- support google.generativeai sdk ChatSession.send_message and add examples by @wenzhe-log10 in #148
  ```python
  import google.generativeai as genai

  from log10.load import log10

  log10(genai)

  model = genai.GenerativeModel(
      "gemini-1.5-pro-latest",
      system_instruction="You are a cat. Your name is Neko.",
  )
  chat = model.start_chat(
      history=[
          {"role": "user", "parts": [{"text": "please say yes."}]},
          {"role": "model", "parts": [{"text": "Yes yes yes?"}]},
      ]
  )
  prompt = "please say no."
  response = chat.send_message(prompt)
  print(response.text)
  ```
- README update - add model comparison using CLI by @wenzhe-log10 in #149
Fix
- Move cli utils func in its own file by @kxtran in #150
- fix sync stream for both openai tool_calls and magentic function calls by @wenzhe-log10 in #136
Full Changelog: 0.8.1...0.8.2
Patch release for streaming bug
Full Changelog: 0.8.0...0.8.1
0.8.0
What's Changed
New
- [feature] add cli to rerun and compare a logged completion with other models by @wenzhe-log10 in #141
  ```
  log10 completions benchmark_models --help
  Usage: log10 completions benchmark_models [OPTIONS]

  Options:
    --ids TEXT            Completion ID
    --tags TEXT           Filter completions by specific tags. Separate
                          multiple tags with commas.
    --limit TEXT          Specify the maximum number of completions to
                          retrieve.
    --offset TEXT         Set the starting point (offset) from where to
                          begin fetching completions.
    --models TEXT         Comma separated list of models to compare
    --temperature FLOAT   Temperature
    --max_tokens INTEGER  Max tokens
    --top_p FLOAT         Top p
    --analyze_prompt      Run prompt analyzer on the messages.
    -f, --file TEXT       Specify the filename for the report in markdown
                          format.
    --help                Show this message and exit.
  ```
  Examples:
  - Compare a logged completion id against other models:
    ```
    log10 completions benchmark_models --ids 25572f3c-c2f1-45b0-9de8-d96be4c4e544 --models=gpt-3.5-turbo,mistral-small-latest,claude-3-haiku-20240307
    ```
  - Compare by tag summ_test, using 2 completions with model claude-3-haiku. This also runs --analyze_prompt to get suggestions on the prompt and saves everything to a report.md file:
    ```
    log10 completions benchmark_models --tags summ_test --limit 2 --models=claude-3-haiku-20240307 --analyze_prompt -f report.md
    ```
- add load.log10(lamini) to support lamini sdk and add example by @wenzhe-log10 in #143
  ```python
  import lamini

  from log10.load import log10

  log10(lamini)

  llm = lamini.Lamini("meta-llama/Llama-2-7b-chat-hf")
  response = llm.generate("What's 2 + 9 * 3?")
  print(response)
  ```
- update make logging tests by @wenzhe-log10 in #139
Fixes
- avoid calling async callback in litellm.completion call by @wenzhe-log10 in #135
- fix cli import issue when magentic is not installed by @wenzhe-log10 in #140
- fix prompt analyzer _suggest by @wenzhe-log10 in #142
Full Changelog: 0.7.5...0.8.0