Add create audio speech stream support #188
Conversation
Thanks for the PR! 😊 Minor things really; don't forget to update the readme either!
Sources/OpenAI/OpenAI.swift
Outdated
```swift
    organizationIdentifier: configuration.organizationIdentifier,
    timeoutInterval: configuration.timeoutInterval)
let session = StreamingSession<AudioSpeechResult>(urlRequest: request)
session.onReceiveContent = {_, object in
```
```diff
- session.onReceiveContent = {_, object in
+ session.onReceiveContent = { _, object in
```
Sources/OpenAI/OpenAI.swift
Outdated
```swift
session.onReceiveContent = {_, object in
    onResult(.success(object))
}
session.onProcessingError = {_, error in
```
```diff
- session.onProcessingError = {_, error in
+ session.onProcessingError = { _, error in
```
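For context, here is a rough sketch of how the closure-based streaming method might wire these pieces together, combining the two fragments above. The `makeRequest` helper, the `onResult`/`completion` parameter names, `session.perform()`, and the session-retention details are assumptions for illustration, not the exact PR code:

```swift
func audioCreateSpeechStream(
    query: AudioSpeechQuery,
    onResult: @escaping (Result<AudioSpeechResult, Error>) -> Void,
    completion: ((Error?) -> Void)?
) {
    let request = makeRequest(query: query)               // hypothetical request builder
    let session = StreamingSession<AudioSpeechResult>(urlRequest: request)
    session.onReceiveContent = { _, object in
        onResult(.success(object))                        // forward each received audio chunk
    }
    session.onProcessingError = { _, error in
        onResult(.failure(error))                         // forward processing errors
    }
    session.onComplete = { _, error in
        completion?(error)                                // signal end of the stream
    }
    session.perform()                                     // assumed: starts the streaming request
}
```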
```swift
    query: AudioSpeechQuery
) -> AsyncThrowingStream<AudioSpeechResult, Error> {
    return AsyncThrowingStream { continuation in
        return audioCreateSpeechStream(query: query) { result in
```
```swift
return audioCreateSpeechStream(query: query) { result in
```
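For completeness, a minimal sketch of how that closure-based call could be bridged into the `AsyncThrowingStream` shown in this hunk; the chunk-forwarding logic and the `completion:` trailing closure are assumptions rather than the merged code:

```swift
func audioCreateSpeechStream(
    query: AudioSpeechQuery
) -> AsyncThrowingStream<AudioSpeechResult, Error> {
    return AsyncThrowingStream { continuation in
        audioCreateSpeechStream(query: query) { result in
            switch result {
            case .success(let chunk):
                continuation.yield(chunk)                 // emit each chunk as it arrives
            case .failure(let error):
                continuation.finish(throwing: error)
            }
        } completion: { error in
            continuation.finish(throwing: error)          // a nil error finishes the stream normally
        }
    }
}
```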
Example:
```swift
let query = AudioSpeechQuery(model: .tts_1, input: "Hello, world!", voice: .alloy, responseFormat: .mp3, speed: 1.0)
openAI.audioCreateSpeech(query: query) { result in
```
```diff
- openAI.audioCreateSpeech(query: query) { result in
+ openAI.audioCreateSpeechStream(query: query) { result in
```
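A fuller usage sketch under the same assumptions; the `completion:` closure and the `handleAudioChunk` handler are illustrative, not part of the documented API:

```swift
import OpenAI

let openAI = OpenAI(apiToken: "YOUR_TOKEN")

let query = AudioSpeechQuery(model: .tts_1, input: "Hello, world!", voice: .alloy, responseFormat: .mp3, speed: 1.0)

openAI.audioCreateSpeechStream(query: query) { result in
    switch result {
    case .success(let chunk):
        // Chunks arrive as they are generated, so playback can begin
        // before the whole file has been synthesized.
        handleAudioChunk(chunk)                           // hypothetical app-side handler
    case .failure(let error):
        print("Speech streaming failed: \(error)")
    }
} completion: { error in
    if let error {
        print("Stream finished with error: \(error)")
    }
}
```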
Don't forget to change the docs
@SunburstEnzo Thanks for the comments! PR updated
Quality Gate passed
Closing this PR, as a new PR with the same changes was opened:
What
OpenAI supports streaming audio speech in chunks (see my issue here: #185). This PR adds support for this feature.
Why
With this change, clients can start audio playback with lower latency, as soon as the first audio chunk is received.
Affected Areas
A new method `audioCreateSpeechStream` was added to `OpenAIProtocol`.
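A rough sketch of how the new requirement might look on `OpenAIProtocol`, showing only the additions; the parameter labels and the exact shape of the async overload are assumptions based on the diff fragments above:

```swift
public protocol OpenAIProtocol {
    /// Closure-based variant: `onResult` is called once per received audio chunk,
    /// `completion` once when the stream ends (with an error, if any).
    func audioCreateSpeechStream(
        query: AudioSpeechQuery,
        onResult: @escaping (Result<AudioSpeechResult, Error>) -> Void,
        completion: ((Error?) -> Void)?
    )

    /// Swift-concurrency variant that wraps the closure-based call.
    func audioCreateSpeechStream(
        query: AudioSpeechQuery
    ) -> AsyncThrowingStream<AudioSpeechResult, Error>
}
```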