Releases: letta-ai/letta
0.2.11
MemGPT Python Client
MemGPT version 0.2.11 includes a new Python client for developers to easily build on MemGPT (special thanks to @BabellDev!).
To use the MemGPT Python client, simply do:
```python
from memgpt import MemGPT

# creates a client object, which you can then use to create new MemGPT agents, message agents, etc.
client = MemGPT()
```
For more information, check our documentation page.
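As a rough illustration of where the client goes from there, a minimal sketch is shown below. The method names `create_agent` and `user_message`, their arguments, and their return values are assumptions for illustration only, not the confirmed client API; check the documentation page above for the real interface.

```python
from memgpt import MemGPT

client = MemGPT()

# Hypothetical sketch: method names, arguments, and return types are assumptions,
# not the confirmed client API; consult the Python client docs for specifics.
agent_id = client.create_agent(
    agent_config={"persona": "sam_pov", "human": "basic"}  # example persona/human names
)

# Send a user message to the agent and print whatever comes back.
responses = client.user_message(agent_id=agent_id, message="Hi! My favorite color is blue.")
for message in responses:
    print(message)
```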
✍️ What's Changed
- ci: Run tests using postgres docker container by @sarahwooders in #715
- fix: increase the func return char limit by @cpacker in #714
- fix: patch TEI error in load by @cpacker in #725
- fix: patch bug on TEI embedding lookup by @cpacker in #724
- fix: updated CLI interface to properly print searches on archival memory by @cpacker in #731
- fix: Typo in info log message and docs by @VladCuciureanu in #730
- fix: don't insert request heartbeat into pause heartbeat by @cpacker in #727
- docs: synced api reference by @cpacker in #737
- fix: cleanup failed agent creation by @cpacker in #726
- feat: chatml-noforce-roles wrapper + cli fix by @cpacker in #738
- fix: refactor + improve json parser by @cpacker in #739
- feat: Add MemGPT "Python Client" by @BabellDev in #713
- feat: enum choices for list command argument (issue #732) by @jimlloyd in #746
- docs: Include steps for Local LLMs by @sanegaming in #749
- docs: word choice in documentation by @oceaster in #760
- docs: Improve Local LLM information and add WSL Troubleshooting by @sanegaming in #752
- docs: linting, syntax, formatting & spelling fixes for all files by @oceaster in #761
- fix: fix string & ws rules in json_func_calls...gbnf by @jimlloyd in #754
- docs: update local_llm_settings.md by @cpacker in #765
- docs: Update python_client.md by @vinayak-revelation in #772
- fix: Update memgpt_coder_autogen.ipynb by @cpacker in #775
Full Changelog: 0.2.10...0.2.11
👋 New Contributors
- @VladCuciureanu made their first contribution in #730
- @BabellDev made their first contribution in #713
- @jimlloyd made their first contribution in #746
- @sanegaming made their first contribution in #749
- @oceaster made their first contribution in #760
- @vinayak-revelation made their first contribution in #772
0.2.10
Merry Christmas! 🎄🎁🎅
MemGPT version 0.2.10 includes:
- Improvements to local/open LLM performance
  - Two new model wrappers that increase MemGPT "proactiveness" when using local/open LLMs: `chatml-hints` and `chatml-noforce-hints`
  - Use them by selecting them in `memgpt configure` or adding them to `memgpt run`, e.g. `memgpt run --model-wrapper chatml-noforce-hints` (see the sketch after this list)
- Better visuals in the MemGPT CLI UI
- Various patches (`quickstart` command, AutoGen, ...)
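A minimal shell sketch of the two ways to pick a wrapper (assuming a local/open LLM backend is already configured):

```sh
# Persist a wrapper choice via the interactive flow (saved to ~/.memgpt/config)
memgpt configure

# Or override the wrapper for a single session
memgpt run --model-wrapper chatml-noforce-hints
```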
✍️ What's Changed
- docs: Added a new docs page describing how to run custom LLM parameters by @cpacker in #688
- feat: improve CLI appearance by @cpacker in #687
- fix: moved configs for hosted to https, patched bug in embedding creation by @cpacker in #685
- fix: allow edge case of quickstart before run on first install by @cpacker in #684
- fix: Remove match/case to support python <3.10 by @cpacker in #691
- fix: typo in Dockerfile comment by @tombedor in #690
- fix: memgpt agent ignores user messages by @javiersastre in #679
- fix: Better errors on over length persona/human files by @cpacker in #695
- feat: set a default temperature in the common local llm settings by @cpacker in #696
- feat: added basic heartbeat override heuristics by @cpacker in #621
- docs: updated readme for quickstart by @cpacker in #698
- fix: misc fixes by @cpacker in #700
- feat: added new 'hint' wrappers that inject hints into the pre-prefix by @cpacker in #707
Full Changelog: 0.2.9...0.2.10
👋 New Contributors
- @tombedor made their first contribution in #690
- @javiersastre made their first contribution in #679
0.2.9
🐛 Bugfix release to patch issues with the `memgpt quickstart` command.
See https://github.com/cpacker/MemGPT/releases/tag/0.2.8 for release details.
Full Changelog: 0.2.8...0.2.9
0.2.8
This release includes major updates that make it easier to get started with MemGPT!
Note: release 0.2.8 has been superseded by bugfix release 0.2.9.
🎄 Free MemGPT Hosted Endpoints
MemGPT can now be used with hosted LLM and embedding endpoints, which are free and do not require an access key! The LLM endpoint is running a variant of the newly released Mixtral model, specifically Dolphin 2.5 Mixtral 8x7b 🐬!
Since the endpoint is still in beta, please expect occasional downtime. You can check for uptime at https://status.memgpt.ai.
⚡ Quickstart Configuration
You can automatically configure MemGPT (for the MemGPT endpoints and OpenAI) with quickstart commands:
```sh
# using MemGPT free endpoint
> memgpt quickstart --latest

# using OpenAI endpoint
> memgpt quickstart --latest --backend openai
```
This will set default options in the file `~/.memgpt/config`, which you can also modify with advanced options via `memgpt configure`.
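Putting it together, a typical first-run flow looks roughly like this (a sketch using only the commands above; the `memgpt configure` step is optional if the quickstart defaults work for you):

```sh
# Apply the recommended defaults for the free MemGPT endpoint
memgpt quickstart --latest

# Optionally adjust advanced options afterwards
memgpt configure

# Start chatting with a MemGPT agent
memgpt run
```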
📖 Documentation Updates
MemGPT's documentation has migrated to https://memgpt.readme.io.
✍️ Full Change Log
- API server refactor + REST API by @cpacker in #593
- added `memgpt server` command by @cpacker in #611
- updated local APIs to return usage info by @cpacker in #585
- added autogen as an extra by @cpacker in #616
- Add safeguard on tokens returned by functions by @cpacker in #576
- patch bug where `function_args.copy()` throws runtime error by @cpacker in #617
- allow passing custom host to rest server by @cpacker in #618
- migrate to using completions endpoint by default by @cpacker in #628
- Patch bug with loading of old agents by @cpacker in #629
- fix: poetry add [html2text/docx2txt] by @cpacker in #633
- feat: Add semantic PR checking to enforce prefixes on PRs by @cpacker in #634
- feat: added memgpt folder command by @cpacker in #632
- feat: Add common + custom settings files for completion endpoints by @cpacker in #631
- feat: Migrate docs by @cpacker in #646
- feat: Updated contributing docs by @cpacker in #653
- fix: [446] better gitignore for IDEs and OS. by @agiletechnologist in #651
- feat: updated/added docs assets by @cpacker in #654
- feat: Add `memgpt quickstart` command by @cpacker in #641
- fix: patch ollama bug w/ raw mode by @cpacker in #663
- fix: Patch openai error message + openai quickstart by @cpacker in #665
- fix: added logging of raw response on debug by @cpacker in #666
- feat: added /summarize command by @cpacker in #667
- feat: Add new wrapper defaults by @cpacker in #656
- fix: Throw "env vars not set" early and enhance /attach for KeyboardInterrupt (#669) by @dejardim in #674
- fix: CLI conveniences (add-on to #674) by @cpacker in #675
- feat: pull model list for openai-compatible endpoints by @cpacker in #630
- fix: Update README.md by @cpacker in #676
- docs: patched asset links by @cpacker in #677
- feat: further simplify setup flow by @cpacker in #673
👋 New Contributors
Full Changelog: 0.2.7...0.2.8
0.2.7
Minor bugfix release
What's Changed
- allow passing `skip_verify` to autogen constructors by @cpacker in #581
- Chroma storage integration by @sarahwooders in #285
- Fix `pyproject.toml` chroma version by @sarahwooders in #582
- Remove broken tests from chroma merge by @sarahwooders in #584
- patch load_save test by @cpacker in #586
- Patch azure embeddings + handle azure deployments properly by @cpacker in #594
- AutoGen misc fixes by @cpacker in #603
- Add `lancedb` and `chroma` into default package dependencies by @sarahwooders in #605
- Bump version 0.2.7 by @sarahwooders in #607
Full Changelog: 0.2.6...0.2.7
0.2.6
Bugfix release
What's Changed
- Add docs file for customizing embedding mode by @sarahwooders in #554
- Upgrade to `llama_index=0.9.10` by @sarahwooders in #556
- fix cannot import name 'EmptyIndex' from 'llama_index' by @cpacker in #558
- Fix typo in storage.md by @alxpez in #564
- use a consistent warning prefix across codebase by @cpacker in #569
- Update autogen.md to include Azure config example + patch for `pyautogen>=0.2.0` by @cpacker in #555
- Update autogen.md by @cpacker in #571
- Fix crash from bad key access into response_message by @claucambra in #437
- sort agents by directory-last-modified time by @cpacker in #574
- Add safety check to pop by @cpacker in #575
- Add `pyyaml` package to `pyproject.toml` by @cpacker in #557
- add back dotdict for backcompat by @cpacker in #572
- Bump version to 0.2.6 by @sarahwooders in #573
New Contributors
Full Changelog: 0.2.5...0.2.6
0.2.5
This release includes a number of bugfixes and new integrations:
- Bugfixes for AutoGen integration (including a common OpenAI dependency conflict issue)
- Documentation for how to use MemGPT with vLLM OpenAI-compatible endpoints
- Integration with HuggingFace TEI for custom embedding models
This release also fully deprecates and removes legacy commands and configuration options that were no longer being maintained (see the migration sketch after this list):
- The `python main.py` command (replaced by `memgpt run`)
- Usage of `BACKEND_TYPE` and `OPENAI_BASE_URL` to configure local/custom LLMs (replaced by `memgpt configure` and `memgpt run` flags)
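A rough migration sketch is shown below; the backend value and endpoint URL are placeholders for illustration, not required settings.

```sh
# Before (deprecated): environment variables + python main.py
export BACKEND_TYPE=webui                      # example value; no longer read by MemGPT
export OPENAI_BASE_URL=http://localhost:5000   # example endpoint; no longer read by MemGPT
python main.py

# After: interactive configuration + the memgpt CLI
memgpt configure   # select the local/custom LLM backend and endpoint here
memgpt run         # per-run flags can override the saved configuration
```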
What's Changed
- add new manual json parser meant to catch send_message calls with trailing bad extra chars by @cpacker in #509
- add a longer prefix that to the default wrapper by @cpacker in #510
- add core memory char limits to text shown in core memory by @cpacker in #508
- [hotfix] extra arg being passed causing a runtime error by @cpacker in #517
- Add warning if no data sources loaded on `/attach` command by @sarahwooders in #513
- fix doc typo autogem to autogen by @paulasquin in #512
- Update contributing guidelines by @sarahwooders in #516
- Update contributing.md by @cpacker in #518
- Update contributing.md by @cpacker in #520
- Add support for HuggingFace Text Embedding Inference endpoint for embeddings by @sarahwooders in #524
- Update mkdocs theme, small fixes for `mkdocs.yml` by @cpacker in #522
- Update mkdocs.yml by @cpacker in #525
- Clean memory error messages by @cpacker in #523
- Fix class names used in persistence manager logging by @claucambra in #503
- Specify pyautogen dependency by adding install extra for autogen by @sarahwooders in #530
- Add `user` field for vLLM endpoint by @sarahwooders in #531
- Patch JSON parsing code (regex fallback) by @cpacker in #533
- Update bug_report.md by @cpacker in #532
- LanceDB integration bug fixes and improvements by @AyushExel in #528
- Remove `openai` package by @cpacker in #534
- Update contributing.md (typo) by @cpacker in #538
- Run formatting checks with poetry by @sarahwooders in #537
- Removing dead code + legacy commands by @sarahwooders in #536
- Remove usage of `BACKEND_TYPE` by @sarahwooders in #539
- Update AutoGen documentation and notebook example by @cpacker in #540
- Update local_llm.md by @cpacker in #542
- Documentation update by @cpacker in #541
- clean docs by @cpacker in #543
- Update autogen.md by @cpacker in #544
- update docs by @cpacker in #547
- added vLLM doc page since we support it by @cpacker in #545
New Contributors
- @paulasquin made their first contribution in #512
- @claucambra made their first contribution in #503
- @AyushExel made their first contribution in #528
Full Changelog: 0.2.4...0.2.5
0.2.4
This release includes bugfixes (including major bugfixes for autogen) and a number of new features:
- Custom presets, which allow customization of the set of function calls MemGPT can make
- Integration with LanceDB for archival storage contributed by @PrashantDixit0
- Integration with vLLM OpenAI compatible endpoints
What's Changed
- Set service context for llama index in `local.py` by @sarahwooders in #462
- Update functions.md by @cpacker in #461
- Fix linking functions from `~/.memgpt/functions` by @cpacker in #463
- Add d20 function example to readthedocs by @cpacker in #464
- Move `webui` backend to new openai completions endpoint by @cpacker in #468
- updated websocket protocol and server by @cpacker in #473
- Lancedb by @PrashantDixit0 in #455
- Docs: Fix typos by @sahusiddharth in #477
- Remove .DS_Store from agents list by @cpacker in #485
- Fix #487 (summarize call uses OpenAI even with local LLM config) by @cpacker in #488
- patch web UI by @cpacker in #484
- ANNA (Adaptive Neural Network Assistant), a personal research assistant, by @agiletechnologist in #494
- vLLM support by @cpacker in #492
- Add error handling during linking imports by @cpacker in #495
- Fixes bugs with AutoGen implementation and examples by @cpacker in #498
- [version] bump version to 0.2.4 by @sarahwooders in #497
New Contributors
- @PrashantDixit0 made their first contribution in #455
- @sahusiddharth made their first contribution in #477
- @agiletechnologist made their first contribution in #494
Full Changelog: 0.2.3...0.2.4
0.2.3
Updates
- Updated MemGPT and Agent Configs: This release changes how MemGPT and agent configurations are stored. These changes help MemGPT keep track of the settings and version each agent was saved with, improving cross-version compatibility for agents.
  - If you've been using a prior version of MemGPT, you may need to re-run `memgpt configure` to update your configuration settings to be compatible with this version.
- Configurable Presets: Presets have been refactored to allow developers to customize the set of functions and system prompts MemGPT uses.
What's Changed
- Configurable presets to support easy extension of MemGPT's function set by @cpacker in #420
- WebSocket interface and basic `server.py` process by @cpacker in #399
- patch `getargspec` error by @cpacker in #440
- always cast `config.context_window` to `int` before use by @cpacker in #444
- Refactor config + determine LLM via `config.model_endpoint_type` by @sarahwooders in #422
- Update config to include `memgpt_version` and re-run configuration for old versions on `memgpt run` by @sarahwooders in #450
- Add load and load_and_attach functions to memgpt autogen agent. by @wrmedford in #430
- Update documentation [local LLMs, presets] by @cpacker in #453
- When default_mode_endpoint has a value, it needs to become model_endp… by @kfsone in #452
- Upgrade workflows to Python 3.11 by @sarahwooders in #441
New Contributors
Full Changelog: 0.2.2...0.2.3
0.2.2
What's Changed
- Fix MemGPTAgent attach docs error by @anjaleeps in #427
- [fix] remove asserts for `OPENAI_API_BASE` by @sarahwooders in #432
- Patch for #434 (context window value not used by `memgpt run`) by @cpacker in #435
- Patch for #428 (context window not passed to summarize calls) by @cpacker in #433
- [version] bump release to 0.2.2 by @cpacker in #436
New Contributors
- @anjaleeps made their first contribution in #427
Full Changelog: 0.2.1...0.2.2