Need help with best practices for a multi-server setup. #1059
Replies: 3 comments
-
Hi, we have an API (server 1 in your case) that has access to the database and models. Then we have a service (server 2 in your case) that does a lot of the processing. We have set it up as follows: the API contains a "dispatch" queue where it adds jobs. These jobs are then received by the service through the queue, and the service does the processing. Then the service sends a POST to the API with the results of the work. This is working very well for us, and I like this approach for a few reasons.
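The flow described above can be sketched end to end. This is only an illustration of the pattern, not the actual implementation: the in-memory array stands in for a BullMQ queue backed by Redis, and all function and variable names here are made up.

```javascript
// "Server 1" (API): owns the DB, pushes work onto a dispatch queue.
// The array is an in-memory stand-in for a BullMQ/Redis queue.
const dispatchQueue = [];
const fakeDb = { results: [] }; // stand-in for the real database

function apiEnqueueJob(type, payload) {
  dispatchQueue.push({ type, payload });
}

// The API also exposes an endpoint the service POSTs results to;
// here it is just a function call.
function apiReceiveResult(result) {
  fakeDb.results.push(result); // only the API ever touches the DB
}

// "Server 2" (service): drains the queue, does the processing,
// then reports back to the API with the results.
function serviceProcessAll() {
  while (dispatchQueue.length > 0) {
    const job = dispatchQueue.shift();
    const output = `processed:${job.type}`; // pretend work
    apiReceiveResult({ type: job.type, output });
  }
}

apiEnqueueJob('email', { to: 'user@example.com' });
apiEnqueueJob('scrape', { url: 'https://example.com' });
serviceProcessAll();
console.log(fakeDb.results.length); // 2
```

The key property of the design is that the service never needs DB credentials or model code: everything it produces flows back through the API.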
-
Hi @bilalshaikh42, thank you very much for the reply and for helping me with my question. Your website and repo are way more professional than mine, so please bear with me as I try to understand better. For clarification:
If this is correct, then in your API server, do you have a special route that processes all incoming requests from the service? I'm thinking of how to do this securely. Here are some example flows that I'm building: User password reset:
I think this flow is easy for me to understand, but I get confused when I need to do a cron job. Example below: Scraping job:
Does this logic sound correct for the scraping job? Thank you so much, I'm really starting to understand the power of this whole setup.
-
Yup, you got it. Our API is a standard REST API, with different routes representing different resources. So
While this would work, I don't know enough about your use case to comment on it. If all you are doing is sending an email, why not have the service just send the email directly? Similarly, rather than having a separate dispatch queue and email queue, you could just add a job to the email queue directly.
BullMQ has the concept of repeatable jobs, so you don't have to manage the repeat manually. Does that sound more like what you are trying to achieve? Regarding step 2, "POST to server 2", you can just have server 2 listen to the queue for those jobs directly. Unless you really need to have only one queue attached to the API, you can put the appropriate job (mail/scrape, etc.) into the appropriate queue and have the service pick it up directly. In general, you can have many different types of jobs and queues without needing a single "dispatch" queue to go through. In our case, we used a single queue to communicate between the API and service, but only because that is what made sense for our particular use case: the API is only really aware of one type of job that needs to be done.
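The "one queue per job type, no dispatch queue" layout suggested above can be sketched like this. Again the in-memory arrays are stand-ins for real BullMQ queues (the real thing would be `new Queue('mail')` backed by Redis, with a `Worker` per queue, and a repeatable job would be declared once via BullMQ's `repeat` option on `add`); all names are illustrative.

```javascript
// API side: no single "dispatch" queue. Each job type has its own
// queue, and the API adds directly to the right one.
const queues = { mail: [], scrape: [] };

function addJob(type, payload) {
  if (!(type in queues)) throw new Error(`unknown job type: ${type}`);
  queues[type].push(payload);
}

// Service side: one worker per queue, each knowing only its own
// kind of work. No shared dispatcher is needed.
const workers = {
  mail: (job) => `sent mail to ${job.to}`,
  scrape: (job) => `scraped ${job.url}`,
};

function runWorkers() {
  const log = [];
  for (const [type, queue] of Object.entries(queues)) {
    while (queue.length > 0) log.push(workers[type](queue.shift()));
  }
  return log;
}

addJob('mail', { to: 'user@example.com' });
addJob('scrape', { url: 'https://example.com' });
console.log(runWorkers());
```

The trade-off versus a single dispatch queue is that each worker's code stays simple (no job-type switch), at the cost of a little more queue plumbing.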
-
Server 1: hosts my Node app and has a connection to my DB with all my models. No connection to BullMQ or Redis.
Server 2: receives POST requests with job details from Server 1 (sent with node-fetch) through a REST endpoint whenever a job needs to be added to my worker. Server 2 has a connection to Redis and BullMQ, where it processes the jobs, but no access to my models or DB.
My question: After jobs are processed on Server 2, should I send a POST request BACK to Server 1 since it has access to the DB and all models? Or should I duplicate my models and DB connection on Server 2 so Server 2 doesn't need to "report back" to Server 1 to complete the job?
Alternatively: Should I just add bullmq and redis as dependencies on Server 1, handle everything there, and remove Server 2 from the equation?
Thank you.