Replies: 1 comment
-
When a model becomes available via the API, it will appear here automatically if its id includes "gpt" — see the filter in `@app.post("/availableModels")`:

```python
@app.post("/availableModels")
async def available_models():
    try:
        # Get available models from OpenAI
        openai.api_key = os.getenv("OPENAI_API_KEY")
        model_list = openai.Model.list()
        # Filter to only keep supported models.
        supported_models = [model["id"] for model in model_list["data"] if "gpt" in model["id"].lower()]
        return JSONResponse(content={"models": supported_models})
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"An error occurred: {str(e)}")
```

The latency is likely caused by the separate calls for speech-to-text and for querying OpenAI in the listen logic.
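Since the filter only checks for the substring "gpt" in each model id, a newly released model like gpt-4o passes automatically. A minimal sketch of that filtering step in isolation (the helper name and sample ids below are illustrative, not from the project):

```python
def filter_gpt_models(model_ids):
    """Keep only ids containing "gpt", case-insensitively,
    mirroring the list comprehension in available_models()."""
    return [mid for mid in model_ids if "gpt" in mid.lower()]

# A mixed listing like the one OpenAI's model endpoint returns:
ids = ["gpt-4o", "whisper-1", "gpt-3.5-turbo", "dall-e-3"]
print(filter_gpt_models(ids))  # ['gpt-4o', 'gpt-3.5-turbo']
```

So no code change should be needed for a new GPT-family model to show up in the dropdown, as long as the API lists it.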
-
Wondering whether it's as easy as a variable change to utilize GPT-4o, which was announced today? I'm wondering if it might help with some of the latency issues between the wake word and the response. Thanks again, this is a fun project!