Releases: piercefreeman/gpt-json
v0.5.1
What's Changed
- Unit tests failing due to model deprecated_date by @Alon-Fux in #45
- Added GPT 4o and 4o mini by @LuggaPugga in #47
New Contributors
- @Alon-Fux made their first contribution in #45
- @LuggaPugga made their first contribution in #47
Full Changelog: v0.5.0...v0.5.1
v0.5.0
What's Changed
- Upgrade OpenAI to APIV1 by @piercefreeman in #43
- Support GPT Vision by @piercefreeman in #44
GPT Vision
This release adds support for GPT Vision models by switching to the new content array syntax for prompting the GPT API. Messages can now carry an arbitrary number of content blocks, instead of the single string allowed in older API versions. If you provide a raw string to the GPTMessage constructor, we handle this conversion internally, which keeps the change backwards compatible with older gpt-json versions.
You can use it with the same syntax as sending a standard text message:
response = await gpt_json.run(
    messages=[
        GPTMessage(role=GPTMessageRole.SYSTEM, content=SYSTEM_PROMPT.strip()),
        GPTMessage(
            role=GPTMessageRole.USER,
            content=[
                TextContent(text="Message text"),
                ImageContent.from_url(my_image_url),
            ],
        ),
    ],
    format_variables=dict(post_content=post_text),
    max_response_tokens=500,
)
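For backwards compatibility, you can still pass a plain string as the message content. A minimal sketch reusing the setup above; the {post_content} placeholder in the user message is an assumption for illustration:

# Raw strings are converted to the new content-array format internally,
# so pre-0.5.0 call sites continue to work unchanged.
response = await gpt_json.run(
    messages=[
        GPTMessage(role=GPTMessageRole.SYSTEM, content=SYSTEM_PROMPT.strip()),
        GPTMessage(role=GPTMessageRole.USER, content="{post_content}"),
    ],
    format_variables=dict(post_content=post_text),
)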
New Included Models
We upgraded our supported GPTModelVersion values to mirror the list of models currently available in the API:
from enum import Enum

class GPTModelVersion(Enum):
    # String values are assumed to match the corresponding OpenAI model identifiers.
    GPT_3_5_0613 = "gpt-3.5-turbo-0613"
    GPT_3_5_1106 = "gpt-3.5-turbo-1106"
    GPT_3_5_0125 = "gpt-3.5-turbo-0125"
    GPT_4_0613 = "gpt-4-0613"
    GPT_4_32K_0613 = "gpt-4-32k-0613"
    GPT_4_VISION_PREVIEW_1106 = "gpt-4-vision-preview"
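A short sketch of selecting one of these versions when constructing the client; the MySchema placeholder and the model keyword argument are assumptions for illustration:

# Pin the client to a specific model version from the enum above.
gpt_json = GPTJSON[MySchema](API_KEY, model=GPTModelVersion.GPT_4_VISION_PREVIEW_1106)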
v0.4.2
What's Changed
- Use pydantic generic handlers by @piercefreeman in #42
Full Changelog: v0.4.1...v0.4.2
Release 0.4.1
Bump version to 0.4.1
Release 0.4.0
🎉 Support for function calling syntax, typehinted with Pydantic
gpt-3.5-turbo-0613 and gpt-4-0613 were fine-tuned to support a specific syntax for function calls. This release adds support for these function calls, alongside typehinted support for their input arguments.
This release is slightly backwards incompatible with 0.3.0. Instead of expanding the tuple returned by gpt_json.run(), we migrate to a RunResponse object. This object carries the parsed JSON object, the fix transformations that were applied, and now the function calls parsed from the response payload. If parsing fails, all of these fields will be None. This saves users from having to parse the different response types independently.
Migration should be straightforward:
# old syntax
response, _ = await gpt_json.run(...)
print(response)
# new syntax
payload = await gpt_json.run(...)
print(payload.response)
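Beyond response, the other fields on the RunResponse can be read the same way; the attribute names below other than response are assumptions for illustration:

if payload.response is None:
    # A parse failure leaves every field on the RunResponse as None.
    ...
else:
    print(payload.response)        # the parsed schema object
    print(payload.fix_transforms)  # fix transformations applied before parsing (name assumed)
    print(payload.function_call)   # parsed function call, if any (name assumed)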
We also now require Pydantic V2. If you want to continue to use Pydantic V1, lock your version to the 0.3.x minor version.
Release 0.3.0
Support Pydantic V2 alongside Pydantic V1.
Release 0.2.0
Introduces a new model that lets clients request multiple instances of their base schema back as the output of GPT. This replaces the old way of typehinting this behavior via a list[] type alias.
from gpt_json.gpt import GPTJSON, ListResponse
gpt_json_multiple = GPTJSON[ListResponse[SentimentSchema]](API_KEY)
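A minimal sketch of the surrounding pieces, using the 0.2.0-era tuple return; SentimentSchema, the prompt contents, and the items attribute name are assumptions for illustration:

from pydantic import BaseModel

from gpt_json.gpt import GPTJSON, ListResponse

class SentimentSchema(BaseModel):
    sentiment: str

gpt_json_multiple = GPTJSON[ListResponse[SentimentSchema]](API_KEY)
response, _ = await gpt_json_multiple.run(messages=[...])
print(response.items)  # list of SentimentSchema instances (attribute name assumed)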
Release 0.1.12
Bump version to 0.1.12
Release 0.1.11
gpt-json has been type stable since #17, but we hadn't published a py.typed file to indicate to downstream mypy installs that the types are available. This release adds that file so client typecheckers can validate the values passed to GPTJSON.
Release 0.1.10
Transitions the GPTJSON(timeout=X) parameter to behave as most clients expect: it now acts as an upper bound for each request sent to the server. To maintain backwards compatibility, the timeout parameter now defaults to None, which doesn't enforce a timeout on requests to the server.
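A brief sketch of opting into a per-request timeout; the MySchema placeholder is an assumption for illustration:

# timeout bounds each request sent to the OpenAI server (assumed to be seconds);
# the default of None keeps the old behavior of no enforced timeout.
gpt_json = GPTJSON[MySchema](API_KEY, timeout=60)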