After successfully setting up the local Ollama model, requests made through scrapegraphai to the Ollama API consistently return a 502 Bad Gateway error, even though the Ollama service is running locally and responds to curl. #790
Replies: 2 comments
-
Strangely, when I run it on Colab there's no problem at all.
-
Issue Description:
After successfully setting up and debugging the local Ollama model, I consistently receive a 502 Bad Gateway error when making requests through scrapegraphai to the Ollama API, and I am unable to get the expected responses. Although the Ollama service is running locally and has been tested successfully with curl, the issue persists when it is called via scrapegraphai.
Environment & Versions:
OS: macOS 13.x
Python Version: 3.12
Libraries & Frameworks:
scrapegraphai: for web scraping and generating answers
langchain: for constructing and invoking language models
Ollama Local Models: mistral:latest, nomic-embed-text:latest
Port: 11434 (default API port for Ollama service)
Steps to Reproduce:
Local Model Setup and Start:
Successfully started the Ollama service locally using ollama serve.
Verified the Ollama service is running using curl http://localhost:11434/ (a Python equivalent is sketched below).
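For reference, a minimal Python sketch of the same checks; trust_env=False makes httpx ignore any local proxy settings, and /api/tags is Ollama's model-listing endpoint:
```python
import httpx

# Sketch: hit the Ollama root endpoint directly; trust_env=False makes
# httpx ignore any proxy environment variables.
resp = httpx.get("http://localhost:11434/", trust_env=False)
print(resp.status_code, resp.text)  # expected: 200 and "Ollama is running"

# Confirm the required models are pulled, via Ollama's /api/tags endpoint.
tags = httpx.get("http://localhost:11434/api/tags", trust_env=False).json()
print([m["name"] for m in tags.get("models", [])])
```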
Configured scrapegraphai for Web Scraping: set up the scrapegraphai graph configuration to call the local Ollama API and scrape web data.
```python
from scrapegraphai.graphs import SmartScraperGraph

graph_config = {
    "llm": {
        "model": "ollama/mistral",
        "temperature": 0,
        "format": "json",  # Ollama requires the output format to be set explicitly
        "base_url": "http://localhost:11434",  # Ollama API URL
    },
    "embeddings": {
        "model": "ollama/nomic-embed-text",
        "base_url": "http://localhost:11434",  # Ollama API URL
    },
    "verbose": True,
}

smart_scraper_graph = SmartScraperGraph(
    prompt="List me all the projects with their descriptions",
    source="https://perinim.github.io/projects",
    config=graph_config,
)

result = smart_scraper_graph.run()
print(result)
```
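The debug output shown further below was captured by enabling standard-library logging before the run; a sketch of that setup:
```python
import logging

# Sketch: raise the log level so httpx/httpcore emit connection details
# (target host/port, request, and response status) for each request.
logging.basicConfig(level=logging.DEBUG)
```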
Encountered Error:
When calling smart_scraper_graph.run(), the request consistently returns a 502 Bad Gateway error:
```text
HTTP Request: POST http://localhost:11434/api/chat "HTTP/1.1 502 Bad Gateway"
```
The full debug log shows:
```text
DEBUG:httpcore.connection:connect_tcp.started host='127.0.0.1' port=7890 local_address=None timeout=None socket_options=None
DEBUG:httpcore.connection:connect_tcp.complete return_value=<httpcore._backends.sync.SyncStream object at 0x1796da6c0>
DEBUG:httpcore.http11:send_request_headers.started request=<Request [b'POST']>
DEBUG:httpcore.http11:send_request_headers.complete
DEBUG:httpcore.http11:send_request_body.started request=<Request [b'POST']>
DEBUG:httpcore.http11:send_request_body.complete
DEBUG:httpcore.http11:receive_response_headers.started request=<Request [b'POST']>
DEBUG:httpcore.http11:receive_response_headers.complete return_value=(b'HTTP/1.1', 502, b'Bad Gateway', [(b'Connection', b'close'), (b'Content-Length', b'0')])
INFO:httpx:HTTP Request: POST http://localhost:11434/api/chat "HTTP/1.1 502 Bad Gateway"
```
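One detail that stands out in this log: connect_tcp.started reports port=7890, not 11434, even though the request URL is http://localhost:11434/api/chat. Port 7890 is a common local HTTP proxy port, so the 502 may be coming from a proxy (configured via HTTP_PROXY/HTTPS_PROXY/ALL_PROXY, which httpx honors by default) rather than from Ollama itself. A minimal sketch, assuming such environment variables are set, to check and exempt localhost before building the graph:
```python
import os

# Sketch: print any proxy variables httpx would pick up by default.
for var in ("HTTP_PROXY", "HTTPS_PROXY", "ALL_PROXY",
            "http_proxy", "https_proxy", "all_proxy"):
    if var in os.environ:
        print(var, "=", os.environ[var])

# Exempt localhost traffic from proxying before constructing the graph.
os.environ["NO_PROXY"] = "localhost,127.0.0.1"
os.environ["no_proxy"] = "localhost,127.0.0.1"
```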
Debugging Information:
Ollama Service Status:
Confirmed that the Ollama service is running by checking the process with ps aux | grep ollama and by using curl http://localhost:11434/ (which returns "Ollama is running").
Attempted a curl request to http://localhost:11434/api/chat but received 404 Not Found (see the POST sketch after this list).
Local Network Configuration:
Verified that Ollama is listening on port 11434 by running netstat -an | grep 11434, which shows the port in the LISTEN state.
Also, confirmed the connection to the local service is available via localhost.
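The 404 above is expected if the curl request was a plain GET, since /api/chat only accepts POST. A sketch of a direct POST test, again using trust_env=False to rule out proxy interference:
```python
import httpx

# Sketch: POST directly to /api/chat, ignoring proxy environment
# variables, to see whether Ollama itself returns the 502.
with httpx.Client(trust_env=False, timeout=60) as client:
    resp = client.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "mistral",
            "messages": [{"role": "user", "content": "Hello"}],
            "stream": False,
        },
    )
print(resp.status_code, resp.text[:200])
```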
Expected Behavior:
The local Ollama service should respond successfully to requests from scrapegraphai, but instead I receive a 502 Bad Gateway response.
Potential Areas for Help:
Why am I receiving a 502 Bad Gateway despite Ollama running correctly locally?
Any configuration changes needed for scrapegraphai to work with local Ollama models?
Any known issues with the interaction between Ollama and scrapegraphai that could cause this behavior?
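To help narrow this down, I can also try calling the same model through langchain's Ollama integration directly (scrapegraphai uses langchain under the hood); a sketch, assuming the langchain-ollama package is installed:
```python
from langchain_ollama import ChatOllama

# Sketch: if this direct call also returns 502, the problem is in the
# HTTP path (e.g. a proxy) rather than in scrapegraphai itself.
llm = ChatOllama(model="mistral", base_url="http://localhost:11434", format="json")
print(llm.invoke('Return {"ok": true} as JSON'))
```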