
0.9.0

@wenzhe-log10 wenzhe-log10 released this 07 Jun 17:04
· 60 commits to main since this release
7b56d11

What's Changed

New

  • Add fetching autofeedback by completion id to CLI by @kxtran in #175

    To fetch auto-generated feedback for a completion, run `log10 feedback autofeedback get`

  • Use non-blocking async for AsyncOpenAI and AsyncAnthropic by @wenzhe-log10 in #179

    Release 0.9.0 significantly improves how we handle concurrency when using LLMs in asynchronous streaming mode.
    With this update, logging at steady state incurs no overhead (previously up to 1-2 seconds), providing a smoother and more efficient experience in latency-critical settings.

    Important Considerations for Short-Lived Scripts:

    💡For short-lived scripts using asynchronous streaming, note that all pending logging requests must complete before your script terminates.
    We provide a convenience method called finalize() to handle this.
    Here's how to use it in your code:

    from log10._httpx_utils import finalize
    
    ...
    
    await finalize()

    Call finalize() exactly once, at the very end of your event loop, to guarantee that all pending logging requests are processed before the script exits.
    For more details, check async logging examples.
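
    The snippet below is a simplified, self-contained illustration of the pattern finalize() supports, not log10's actual implementation: logging calls are scheduled as fire-and-forget asyncio tasks so the request hot path never blocks, and a single await at the end drains the pending tasks before the process exits. All names here (log_nonblocking, _send_log, _pending_logs) are hypothetical.

    ```python
    import asyncio

    # Tasks still in flight; a done-callback removes each task when it finishes.
    _pending_logs: set = set()
    logged: list = []

    async def _send_log(entry: str) -> None:
        # Stand-in for the HTTP request that ships a log record to the backend.
        await asyncio.sleep(0.01)
        logged.append(entry)

    def log_nonblocking(entry: str) -> None:
        # Fire-and-forget: scheduling a task adds no latency for the caller.
        task = asyncio.get_running_loop().create_task(_send_log(entry))
        _pending_logs.add(task)
        task.add_done_callback(_pending_logs.discard)

    async def finalize() -> None:
        # Called once at the end: wait for every pending logging request.
        await asyncio.gather(*_pending_logs)

    async def main() -> None:
        for i in range(3):
            log_nonblocking(f"completion-{i}")
        # Without this line, a short-lived script could exit before the
        # background tasks run, silently dropping log records.
        await finalize()

    asyncio.run(main())
    ```

    In a long-running service the event loop stays alive, so background tasks naturally drain; the explicit finalize() matters precisely for short-lived scripts whose loop closes as soon as main() returns.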

Chores

Full Changelog: 0.8.6...0.9.0