A web crawler written in Rust.
The crawler builds a web graph by recursively following every URL it discovers.
The crawler is split into two parts:
- The connection pool
- The parser pool
The crawler spins up as many connections and parsers as you specify.
The connection pool will handle all HTTP requests, while the parser pool will handle all HTML parsing.
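Below is a minimal sketch of that layout, assuming an mpsc channel carries fetched pages from the connection pool to the parser pool; the `Page` struct and worker names are illustrative rather than this crate's actual API, and a single worker of each kind stands in for the configurable pools:

```rust
use tokio::sync::mpsc;

// A fetched page handed from the connection pool to the parser pool.
struct Page {
    url: String,
    body: String,
}

// Connection-pool side: fetch each URL and forward the HTML.
async fn fetch_worker(client: reqwest::Client, urls: Vec<String>, tx: mpsc::Sender<Page>) {
    for url in urls {
        // Errors are silently skipped to keep the sketch short.
        if let Ok(resp) = client.get(&url).send().await {
            if let Ok(body) = resp.text().await {
                let _ = tx.send(Page { url, body }).await;
            }
        }
    }
}

// Parser-pool side: consume pages until the channel closes.
async fn parse_worker(mut rx: mpsc::Receiver<Page>) {
    while let Some(page) = rx.recv().await {
        // A real parser would extract links here and feed them back
        // into the crawl frontier; this sketch just reports progress.
        println!("parsed {} ({} bytes)", page.url, page.body.len());
    }
}

#[tokio::main]
async fn main() {
    let (tx, rx) = mpsc::channel::<Page>(100);
    let client = reqwest::Client::new();

    let seeds = vec!["https://example.com".to_string()];
    let fetcher = tokio::spawn(fetch_worker(client, seeds, tx));
    let parser = tokio::spawn(parse_worker(rx));

    let _ = tokio::join!(fetcher, parser);
}
```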
Requests to the same domain are rate-limited to avoid being blocked by the server.
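That policy amounts to "never issue two requests to the same host within some minimum interval". Here is a hand-rolled sketch of the idea using dashmap and tokio::time; it is a stand-in for the tokio-utils rate limiter listed under the dependencies, and `DomainThrottle` with its fixed delay is an illustrative name, not part of this crate:

```rust
use std::time::Duration;

use dashmap::DashMap;
use tokio::time::{sleep_until, Instant};

/// Enforces a minimum delay between consecutive requests to the
/// same domain.
struct DomainThrottle {
    min_delay: Duration,
    next_allowed: DashMap<String, Instant>,
}

impl DomainThrottle {
    fn new(min_delay: Duration) -> Self {
        Self { min_delay, next_allowed: DashMap::new() }
    }

    /// Waits until a request to `domain` is permitted and reserves
    /// the slot after it for the next caller.
    async fn acquire(&self, domain: &str) {
        let now = Instant::now();
        let wait_until = {
            // The map entry's lock makes the reservation atomic; the
            // guard is dropped before awaiting so no lock is held
            // across the sleep.
            let mut slot = self.next_allowed.entry(domain.to_string()).or_insert(now);
            let at = (*slot).max(now);
            *slot = at + self.min_delay;
            at
        };
        sleep_until(wait_until).await;
    }
}
```

A fetch worker would call `throttle.acquire(domain).await` just before each request, so concurrent tasks hitting the same host queue up behind one another while requests to different hosts proceed in parallel.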
The URL mapping is stored in an in-memory index that can be flushed to disk during shutdown.
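A sketch of that shutdown snapshot, assuming the index is a DashMap from source URL to the links discovered on that page (the adjacency-list shape, file format, and `write_index` name are illustrative):

```rust
use std::fs::File;
use std::io::{BufWriter, Result, Write};

use dashmap::DashMap;

/// Writes the in-memory URL index to disk as one `source -> target`
/// line per edge of the web graph.
fn write_index(index: &DashMap<String, Vec<String>>, path: &str) -> Result<()> {
    let mut out = BufWriter::new(File::create(path)?);
    for entry in index.iter() {
        for target in entry.value() {
            writeln!(out, "{} -> {}", entry.key(), target)?;
        }
    }
    out.flush()
}
```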
Dependencies:
- tokio - asynchronous runtime
- tokio-utils - rate limiting, graceful shutdown
- reqwest - HTTP client
- dashmap - concurrent hash map
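For reference, a matching Cargo.toml dependency block might look like this (version numbers are illustrative, not pinned by this project):

```toml
[dependencies]
tokio = { version = "1", features = ["full"] }
tokio-utils = "0.1"
reqwest = "0.11"
dashmap = "5"
```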