> **Warning**
> Deprecated: use ollama-rs instead.
Asynchronous Rust bindings for the Ollama REST API, built on reqwest, tokio, serde, and chrono.

```sh
cargo add ollama-rest@0.3
```
| Name | Status |
|---|---|
| Completion | Supported ✅ |
| Embedding | Supported ✅ |
| Model creation | Supported ✅ |
| Model deletion | Supported ✅ |
| Model pulling | Supported ✅ |
| Model copying | Supported ✅ |
| Local models | Supported ✅ |
| Running models | Supported ✅ |
| Model pushing | Experimental 🧪 |
| Tools | Experimental 🧪 |
See the source of this example.
```rust
use std::io::Write;

use futures::StreamExt;
use ollama_rest::{models::generate::GenerationRequest, Ollama};
use serde_json::json;

#[tokio::main]
async fn main() {
    // By default, connects to Ollama at 127.0.0.1:11434
    let ollama = Ollama::default();

    let request = serde_json::from_value::<GenerationRequest>(json!({
        "model": "llama3.2:1b",
        "prompt": "Why is the sky blue?",
    })).unwrap();

    let mut stream = ollama.generate_streamed(&request).await.unwrap();

    while let Some(Ok(res)) = stream.next().await {
        if !res.done {
            print!("{}", res.response);
            // Flush stdout after each token to allow real-time output
            std::io::stdout().flush().unwrap();
        }
    }

    println!();
}
```
Or build your own chatbot interface! See this example (CLI) and this example (REST API).
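As a rough starting point, a minimal interactive loop can be sketched using the same `generate_streamed` API shown above. This is a simplified sketch, not the linked example: each line of input is sent as a fresh, independent prompt, whereas a real chatbot would carry conversation context across turns.

```rust
use std::io::{BufRead, Write};

use futures::StreamExt;
use ollama_rest::{models::generate::GenerationRequest, Ollama};
use serde_json::json;

#[tokio::main]
async fn main() {
    let ollama = Ollama::default();
    let stdin = std::io::stdin();

    loop {
        print!("> ");
        std::io::stdout().flush().unwrap();

        // Read one line of user input; exit on EOF (Ctrl-D)
        let mut line = String::new();
        if stdin.lock().read_line(&mut line).unwrap() == 0 {
            break;
        }

        // Send the input as a one-shot prompt (no chat history kept)
        let request = serde_json::from_value::<GenerationRequest>(json!({
            "model": "llama3.2:1b",
            "prompt": line.trim(),
        })).unwrap();

        // Stream the reply token by token, flushing as it arrives
        let mut stream = ollama.generate_streamed(&request).await.unwrap();
        while let Some(Ok(res)) = stream.next().await {
            if !res.done {
                print!("{}", res.response);
                std::io::stdout().flush().unwrap();
            }
        }
        println!();
    }
}
```

The sketch requires a running Ollama server on the default port and the `futures` and `tokio` crates alongside `ollama-rest`.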