🌿 Fern Regeneration -- November 6, 2024 (#211)
Co-authored-by: fern-api <115122769+fern-api[bot]@users.noreply.github.com>
Co-authored-by: twitchard <richard.marmorstein@gmail.com>
fern-api[bot] and twitchard authored Nov 7, 2024
1 parent 72f350b commit ab7c0a1
Showing 7 changed files with 145 additions and 106 deletions.
21 changes: 21 additions & 0 deletions .mock/definition/empathic-voice/__package__.yml
@@ -1070,8 +1070,12 @@ types:
enum:
- value: claude-3-5-sonnet-latest
name: Claude35SonnetLatest
- value: claude-3-5-haiku-latest
name: Claude35HaikuLatest
- value: claude-3-5-sonnet-20240620
name: Claude35Sonnet20240620
- value: claude-3-5-haiku-20241022
name: Claude35Haiku20241022
- value: claude-3-opus-20240229
name: Claude3Opus20240229
- value: claude-3-sonnet-20240229
@@ -3086,6 +3090,23 @@ types:
from a [User
Input](/reference/empathic-voice-interface-evi/chat/chat#send.User%20Input.text)
message.
interim:
type: boolean
docs: >-
Indicates if this message contains an immediate and unfinalized
transcript of the user’s audio input. If it does, words may be
repeated across successive `UserMessage` messages as our transcription
model becomes more confident about what was said with additional
context. Even without a finalized transcription, interim
`UserMessages`, along with
[UserInterrupt](/reference/empathic-voice-interface-evi/chat/chat#receive.User%20Interruption.type)
messages, are useful for detecting if the user is interrupting during
audio playback on the client, signaling your application to stop
playback. Interim `UserMessages` will only be
received if the
[verbose_transcription](/reference/empathic-voice-interface-evi/chat/chat#request.query.verbose_transcription)
query parameter is set to `true` in the handshake request.
source:
openapi: assistant-asyncapi.json
JsonMessage:
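To make the `interim` semantics above concrete, here is a minimal sketch of client-side interruption detection. It assumes messages arrive as parsed JSON dicts whose `type` values ("user_message", "user_interruption") follow the names in the linked docs, and `stop_playback` is a hypothetical application callback, not part of this SDK.

```python
# Sketch only: detecting a user interruption from incoming EVI chat
# messages, per the `interim` docs above.
import json
from typing import Any, Callable

def handle_raw_message(raw: str, is_playing_audio: bool, stop_playback: Callable[[], None]) -> None:
    msg: dict[str, Any] = json.loads(raw)
    if msg.get("type") == "user_interruption":
        stop_playback()
    elif msg.get("type") == "user_message" and msg.get("interim"):
        # Interim transcripts arrive only when verbose_transcription=true
        # was set in the handshake; words may repeat across successive
        # interim messages, so treat them as signals rather than final text.
        if is_playing_audio:
            stop_playback()
```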
10 changes: 10 additions & 0 deletions .mock/definition/empathic-voice/chat.yml
@@ -74,6 +74,16 @@ channel:
Use the GET `/v0/evi/chat_groups` endpoint to obtain the Chat Group IDs
of all Chat Groups associated with an API key. This endpoint returns a
list of all available chat groups.
verbose_transcription:
type: optional<boolean>
docs: >-
A flag to enable verbose transcription. Set this query parameter to
`true` to have unfinalized user transcripts sent to the client as
interim UserMessage messages. The
[interim](/reference/empathic-voice-interface-evi/chat/chat#receive.User%20Message.interim)
field on a
[UserMessage](/reference/empathic-voice-interface-evi/chat/chat#receive.User%20Message.type)
denotes whether the message is "interim" or "final."
access_token:
type: optional<string>
docs: >-
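As a usage note, here is a minimal sketch of how a client might opt in to this flag through the `ChatConnectOptions` TypedDict extended in socket_client.py below. Only the option keys are taken from this diff; the placeholder chat group ID is illustrative.

```python
# Sketch only: the new connection options added in this regeneration.
# Both keys are optional; omitting them preserves the previous behavior.
from hume.empathic_voice.chat.socket_client import ChatConnectOptions

options: ChatConnectOptions = {
    "resumed_chat_group_id": "<your-chat-group-id>",  # placeholder value
    "verbose_transcription": True,  # serialized as "true" in the handshake URI
}
```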
197 changes: 92 additions & 105 deletions poetry.lock

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "hume"
version = "0.7.4"
version = "0.7.5"
description = "A Python SDK for Hume AI"
readme = "README.md"
authors = []
14 changes: 14 additions & 0 deletions src/hume/empathic_voice/chat/socket_client.py
@@ -44,6 +44,10 @@ class ChatConnectOptions(typing.TypedDict, total=False):

secret_key: typing.Optional[str]

resumed_chat_group_id: typing.Optional[str]

verbose_transcription: typing.Optional[bool]


class ChatWebsocketConnection:
DEFAULT_NUM_CHANNELS: typing.ClassVar[int] = 1
@@ -204,6 +208,16 @@ async def _construct_ws_uri(self, options: typing.Optional[ChatConnectOptions]):
query_params = query_params.add(
"config_version", maybe_config_version
)
maybe_resumed_chat_group_id = options.get("resumed_chat_group_id")
if maybe_resumed_chat_group_id is not None:
query_params = query_params.add(
"resumed_chat_group_id", maybe_resumed_chat_group_id
)
maybe_verbose_transcription = options.get("verbose_transcription")
if maybe_verbose_transcription is not None:
query_params = query_params.add(
"verbose_transcription", "true" if maybe_verbose_transcription else "false"
)
maybe_secret_key = options.get("secret_key")
if maybe_secret_key is not None and api_key is not None:
query_params = query_params.add(
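Because a websocket handshake query string carries only text, the boolean is serialized explicitly. A quick sketch of what the new code produces, assuming `query_params` is an immutable `httpx.QueryParams` (consistent with the reassignment after each `.add(...)` above):

```python
import httpx

# Mirrors the serialization in _construct_ws_uri: the boolean becomes the
# literal string "true" or "false" in the handshake query string.
query_params = httpx.QueryParams()
query_params = query_params.add("verbose_transcription", "true" if True else "false")
print(str(query_params))  # verbose_transcription=true
```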
2 changes: 2 additions & 0 deletions src/hume/empathic_voice/types/return_language_model_model_resource.py
@@ -5,7 +5,9 @@
ReturnLanguageModelModelResource = typing.Union[
typing.Literal[
"claude-3-5-sonnet-latest",
"claude-3-5-haiku-latest",
"claude-3-5-sonnet-20240620",
"claude-3-5-haiku-20241022",
"claude-3-opus-20240229",
"claude-3-sonnet-20240229",
"claude-3-haiku-20240307",
5 changes: 5 additions & 0 deletions src/hume/empathic_voice/types/user_message.py
@@ -46,6 +46,11 @@ class UserMessage(UniversalBaseModel):
Indicates if this message was inserted into the conversation as text from a [User Input](/reference/empathic-voice-interface-evi/chat/chat#send.User%20Input.text) message.
"""

interim: bool = pydantic.Field()
"""
Indicates if this message contains an immediate and unfinalized transcript of the user’s audio input. If it does, words may be repeated across successive `UserMessage` messages as our transcription model becomes more confident about what was said with additional context. Even without a finalized transcription, interim `UserMessages`, along with [UserInterrupt](/reference/empathic-voice-interface-evi/chat/chat#receive.User%20Interruption.type) messages, are useful for detecting if the user is interrupting during audio playback on the client, signaling your application to stop playback. Interim `UserMessages` will only be received if the [verbose_transcription](/reference/empathic-voice-interface-evi/chat/chat#request.query.verbose_transcription) query parameter is set to `true` in the handshake request.
"""

if IS_PYDANTIC_V2:
model_config: typing.ClassVar[pydantic.ConfigDict] = pydantic.ConfigDict(extra="allow", frozen=True) # type: ignore # Pydantic v2
else:
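A short sketch of consuming the new field from a parsed `UserMessage`; the import path is assumed from this file's location, and `from_text` is the existing field documented above.

```python
# Sketch only: using `interim` (new) together with `from_text` (existing)
# to decide whether an incoming message signals a live interruption.
from hume.empathic_voice.types.user_message import UserMessage

def should_stop_playback(message: UserMessage, is_playing_audio: bool) -> bool:
    # `interim` marks an unfinalized audio transcript; `from_text` marks
    # text inserted via a User Input message rather than transcribed audio.
    return is_playing_audio and message.interim and not message.from_text
```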
