I run VROOM and OSRM as separate containers and use the VROOM API to optimize routes. This works well for small datasets (up to ~1500 locations), but with larger datasets I get "socket hang up" errors. To work around this, I tried precomputing the matrix by calling the OSRM container directly and passing it as part of the VROOM API payload, but VROOM still attempts its own call to OSRM, which causes further issues. Increasing timeout limits did not help for larger datasets either.

Is there a way to completely bypass VROOM's internal OSRM call and force it to use a precomputed matrix? Alternatively, are there better approaches for handling larger datasets, such as splitting them into smaller batches? And if the dataset is split into batches, will that compromise the overall quality of the optimized routes?
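For reference, here is a minimal sketch of the precomputed-matrix approach I had in mind, assuming vroom-express on port 3000 and osrm-backend on port 5000 (adjust to your setup). The `matrices` field and the `location_index`/`start_index` references follow my reading of the VROOM API docs, so please check them against the VROOM version you are running:

```python
import requests  # any HTTP client works; requests is just for the sketch

# Hypothetical container endpoints -- adjust to your setup.
OSRM_URL = "http://localhost:5000"
VROOM_URL = "http://localhost:3000"

# Coordinates as (lon, lat), which is the order OSRM expects.
locations = [(2.3522, 48.8566), (2.2945, 48.8584), (2.3376, 48.8606)]

# 1. Precompute the duration matrix with OSRM's table service.
coord_str = ";".join(f"{lon},{lat}" for lon, lat in locations)
table = requests.get(f"{OSRM_URL}/table/v1/driving/{coord_str}").json()
# Round to integers, since VROOM expects integer durations as far as I recall.
# (Unroutable pairs would come back as null and need handling.)
durations = [[round(d) for d in row] for row in table["durations"]]

# 2. Build the VROOM payload, referencing locations by matrix index so no
#    routing calls are needed. The "matrices" field follows the current API
#    docs; older versions used a top-level "matrix" instead.
payload = {
    "vehicles": [{"id": 1, "start_index": 0, "end_index": 0, "profile": "car"}],
    "jobs": [{"id": i, "location_index": i} for i in range(1, len(locations))],
    "matrices": {"car": {"durations": durations}},
}

# 3. Post to vroom-express (or whatever wraps libvroom in your setup).
solution = requests.post(VROOM_URL, json=payload, timeout=600).json()
print(solution["summary"])
```

The key point is that jobs and vehicles reference rows/columns of the supplied matrix by index rather than by coordinates, so VROOM should have no reason to contact OSRM; the exception, as far as I understand, is when route geometry is requested, in which case a routing server is still needed.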
Can you be more specific: do you get an error from VROOM itself or from elsewhere in your system? Did you try sending the same matrix requests to OSRM directly? Did you investigate possible timeouts or restrictions on the container side?
The only socket-related code we have is in asio, the library that handles the routing calls. Working around the problem by providing matrices directly is possible, but it would be interesting to investigate further to fully understand what's going on here.
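To help isolate this, a rough sketch for timing the same kind of table request against the OSRM container directly, bypassing VROOM entirely (endpoint and port are assumptions based on a default osrm-backend setup; the coordinates are synthetic placeholders for a ~1500+ location request):

```python
import time
import requests

OSRM_URL = "http://localhost:5000"  # assumed OSRM container address

# Generate a grid of ~1600 synthetic coordinates purely to reproduce the
# request size -- replace with your real locations.
base_lon, base_lat = 2.3522, 48.8566
coords = [(base_lon + 0.001 * (i % 40), base_lat + 0.001 * (i // 40))
          for i in range(1600)]
coord_str = ";".join(f"{lon},{lat}" for lon, lat in coords)

start = time.time()
resp = requests.get(f"{OSRM_URL}/table/v1/driving/{coord_str}", timeout=900)
print(resp.status_code, f"{time.time() - start:.1f}s",
      len(resp.content), "bytes")
```

If OSRM answers a direct request like this with a "TooBig" error, the container may be running osrm-routed with its default `--max-table-size` (100, if I recall correctly), which would need raising for requests of this size.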