
Releases: log10-io/log10

0.9.2

17 Jun 22:48
4c19150

What's Changed

  • ENG-851 Update google-generativeai version, example and test by @kxtran in #185
  • Update openai version to 1.33.0 by @kxtran in #186
  • Disable warning for async openai embedding calls by @wenzhe-log10 in #189
  • [ENG-856] Remove large file from repo by @nqn in #187
  • [ENG-857] Filter large image messages by @nqn in #190

Full Changelog: 0.9.1...0.9.2

0.9.1

11 Jun 19:25

What's Changed

  • [ENG-850] Large files break logger by @nqn in #184

Full Changelog: 0.9.0...0.9.1

0.9.0

07 Jun 17:04
7b56d11

What's Changed

New

  • Add fetching autofeedback by completion id to cli by @kxtran in #175

    To fetch auto-generated feedback for a completion, use log10 feedback autofeedback get.
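
    For example (the --completion_id option name here is an assumption, not confirmed by these notes; run the command with --help for the exact flags):

    log10 feedback autofeedback get --completion_id <completion-id>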

  • Use non blocking async for AsyncOpenAI and AsyncAnthropic by @wenzhe-log10 in #179

    Release 0.9.0 includes significant improvements in how we handle concurrency when calling LLMs in asynchronous streaming mode.
    This update ensures that logging at steady state incurs no overhead (previously up to 1-2 seconds), providing a smoother, more efficient experience in latency-critical settings.

    Important Considerations for Short-Lived Scripts:

    💡 For short-lived scripts using asynchronous streaming, note that you may need to wait until all logging requests have completed before terminating your script.
    We provide a convenient finalize() method to handle this.
    Here's how to use it in your code:

    from log10._httpx_utils import finalize
    
    ...
    
    await finalize()

    Call finalize() once, at the very end of your event loop, to guarantee that all pending logging requests are processed before the script exits.
    For more details, see the async logging examples.
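
    A fuller sketch of where finalize() belongs (a minimal example; the model name and prompt are placeholders, and it assumes the openai package is installed with OPENAI_API_KEY set):

    import asyncio

    import openai

    from log10._httpx_utils import finalize
    from log10.load import log10

    log10(openai)  # patch the SDK so completions are logged to log10


    async def main():
        client = openai.AsyncOpenAI()
        stream = await client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "Count to 5."}],
            stream=True,
        )
        async for chunk in stream:
            if chunk.choices and chunk.choices[0].delta.content:
                print(chunk.choices[0].delta.content, end="", flush=True)
        # flush any pending non-blocking logging requests before exiting
        await finalize()


    asyncio.run(main())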


Full Changelog: 0.8.6...0.9.0

0.8.6

29 May 17:54
5e43358

What's Changed

Bug fixes

  • Update parsing async openai streaming response logic by @kxtran in #167
    • Installing the latest magentic version pulls in openai >= 1.26.0, whose streaming responses can include a usage block and an empty choices list; the old parsing logic raised an exception on such chunks. The fix handles the new streaming responses and makes the parsing code more robust (see the sketch below).
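
    An illustrative sketch of the guard (not log10's internal code): with openai >= 1.26.0, requesting usage in streaming mode makes the final chunk carry usage with an empty choices list, so unconditional chunk.choices[0] access raises.

    from openai import OpenAI

    client = OpenAI()
    stream = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello"}],
        stream=True,
        stream_options={"include_usage": True},  # final chunk: usage set, choices == []
    )
    for chunk in stream:
        if not chunk.choices:  # usage-only chunk; nothing to parse
            continue
        delta = chunk.choices[0].delta
        if delta.content:
            print(delta.content, end="")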


Full Changelog: 0.8.5...0.8.6

0.8.5

28 May 20:33
bc72c3e

What's Changed

New

  • Add gpt-4o in cli benchmark_models by @wenzhe-log10 in #159
  • ENG-724: Add a function in load.py to return the last_completion_id by @nqn in #165 (see the sketch after this list)
  • ENG-784 Add anthropic async and tools stream api support by @kxtran in #162
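
A hedged sketch of the new helper from #165 (assuming it is exported as last_completion_id from log10.load, per the PR title; the exact name may differ):

    import openai

    from log10.load import log10, last_completion_id

    log10(openai)

    client = openai.OpenAI()
    client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "hi"}],
    )
    print(last_completion_id())  # id of the most recently logged completion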

Bug fixes

  • Session bug (fix with context variables) by @nqn in #161


Full Changelog: 0.8.4...0.8.5

0.8.4

29 Apr 19:07
4ef086a

What's Changed

  • ENG-615: Make feedback task name required & add optional completion tag selectors to feedback task creation by @nullfox in #153


Full Changelog: 0.8.3...0.8.4

0.8.3

26 Apr 00:11
901925c

What's Changed

Bug fixes

  • fix condition when finish_reason is stop for tool_calls by @wenzhe-log10 in #152
  • strip not given kwargs for openai sync calls if openai.NOT_GIVEN is assigned to kwargs by @wenzhe-log10 in #154
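
To illustrate the second fix (openai.NOT_GIVEN is the openai SDK's real sentinel; the model and prompt here are placeholders):

    import openai

    from log10.load import log10

    log10(openai)

    client = openai.OpenAI()
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "hi"}],
        temperature=openai.NOT_GIVEN,  # previously broke sync logging; now stripped
    )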


Full Changelog: 0.8.2...0.8.3

0.8.2

24 Apr 22:32
61e4561

What's Changed

New

  • support google.generativeai sdk ChatSession.send_message and add examples by @wenzhe-log10 in #148
    import google.generativeai as genai

    from log10.load import log10

    # patch the Gemini SDK so chat calls are logged to log10
    log10(genai)

    # assumes Gemini credentials are configured, e.g. genai.configure(api_key=...)
    model = genai.GenerativeModel("gemini-1.5-pro-latest", system_instruction="You are a cat. Your name is Neko.")
    chat = model.start_chat(
        history=[
            {"role": "user", "parts": [{"text": "please say yes."}]},
            {"role": "model", "parts": [{"text": "Yes yes yes?"}]},
        ]
    )

    prompt = "please say no."
    response = chat.send_message(prompt)

    print(response.text)
    
  • README update - add model comparison using CLI by @wenzhe-log10 in #149

Fixes

  • Move CLI utils functions into their own file by @kxtran in #150
  • fix sync stream for both openai tool_calls and magentic function calls by @wenzhe-log10 in #136

Full Changelog: 0.8.1...0.8.2

0.8.1: Patch release for streaming bug

19 Apr 15:31
7324868

What's Changed

  • ENG-605: Content type bug by @nqn in #146
  • Patch release for streaming bug by @nqn in #147

Full Changelog: 0.8.0...0.8.1

0.8.0

18 Apr 22:14
f97d912

What's Changed

New

  • Additional documentation for Feedback by @delip in #129

  • [feature] add cli to rerun and compare a logged completion with other models by @wenzhe-log10 in #141

    log10 completions benchmark_models --help
    Usage: log10 completions benchmark_models [OPTIONS]
    
    Options:
      --ids TEXT            Completion ID
      --tags TEXT           Filter completions by specific tags. Separate multiple
                            tags with commas.
      --limit TEXT          Specify the maximum number of completions to retrieve.
      --offset TEXT         Set the starting point (offset) from where to begin
                            fetching completions.
      --models TEXT         Comma separated list of models to compare
      --temperature FLOAT   Temperature
      --max_tokens INTEGER  Max tokens
      --top_p FLOAT         Top p
      --analyze_prompt      Run prompt analyzer on the messages.
      -f, --file TEXT       Specify the filename for the report in markdown
                            format.
      --help                Show this message and exit.
    

    Examples:

    • compare a logged completion (by id) against other models
    log10 completions benchmark_models --ids 25572f3c-c2f1-45b0-9de8-d96be4c4e544 --models=gpt-3.5-turbo,mistral-small-latest,claude-3-haiku-20240307
    
    • compare completions tagged summ_test (limit 2) against claude-3-haiku, run the prompt analyzer to get suggestions on the prompt, and save everything into a report.md file
    log10 completions benchmark_models --tags summ_test --limit 2 --models=claude-3-haiku-20240307 --analyze_prompt -f report.md
    
  • add load.log10(lamini) to support lamini sdk and add example by @wenzhe-log10 in #143

    import lamini

    from log10.load import log10

    # patch the Lamini SDK so calls are logged to log10
    log10(lamini)

    # assumes Lamini credentials are already configured
    llm = lamini.Lamini("meta-llama/Llama-2-7b-chat-hf")
    response = llm.generate("What's 2 + 9 * 3?")

    print(response)



Full Changelog: 0.7.5...0.8.0