Look into multi-instance installations #10
I would suggest watching https://github.com/BU-NU-CLOUD-SP18/Dataverse-Scaling#our-project-video which I mentioned in a comment at IQSS/dataverse#4040 (comment). This was the final video students at BU made after their efforts to scale Dataverse across multiple Glassfish and PostgreSQL servers. We held weekly meetings during the class, recorded them, and posted notes, if those are of interest as well.
While looking into a solution for #65, I thought about how to sync the files from the docroot to other pods in case of replication with a […]. It looks like the easiest way to go is via a […].
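One way to sidestep syncing altogether is to share a single docroot between all pods. A minimal sketch, assuming the cluster offers a ReadWriteMany-capable storage class (NFS, CephFS, etc.); the claim name, storage class, and mount path are assumptions, not this project's actual manifests:

```yaml
# Hedged sketch: one docroot volume shared read-write by every Dataverse pod.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dataverse-docroot
spec:
  accessModes:
    - ReadWriteMany          # all replicas mount the same files
  storageClassName: nfs-client   # assumed RWX-capable class; cluster-specific
  resources:
    requests:
      storage: 1Gi
```

In the Deployment, the claim would then be mounted at the docroot path, e.g. (excerpt, paths hypothetical):

```yaml
      volumeMounts:
        - name: docroot
          mountPath: /opt/dataverse/docroot
      volumes:
        - name: docroot
          persistentVolumeClaim:
            claimName: dataverse-docroot
```

The trade-off is that RWX volumes depend on what the cluster provides; where only ReadWriteOnce storage exists, some sync mechanism between pods is still needed.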
@poikilotherm you should do whatever hacking you need to do to stay unblocked, but I'm wondering if upstream Dataverse should evolve or change in some way to make it easier to run multiple (Glassfish) web servers. Should the logos used in the header of dataverses be stored in the database, for example? (Heads up that logos in the footer are now supported as well, thanks to IQSS/dataverse#6219.) I believe Harvard Dataverse uses some rsync scripts to keep the logos in sync across two web servers, but I'm not sure how well that scales. What I'm trying to say is that I'm definitely open to ideas. 😄
Please see also IQSS/dataverse#6491
Driving a Dataverse multi-instance installation has some rough edges and needs to be examined carefully. Obviously, the scaling itself is super-easy with Kubernetes.
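The "super-easy" part really is one line; a sketch, assuming a Deployment named `dataverse` (the name and truncated fields are assumptions):

```yaml
# Hedged excerpt: horizontal scaling is a one-field change in the Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dataverse
spec:
  replicas: 3   # three app-server pods behind one Service
  # ... selector, template, etc. unchanged ...
```

Or at runtime: `kubectl scale deployment dataverse --replicas=3`. The hard part is everything the replicas must agree on: the docroot files above, search indexing, timers, and so on, which is what the linked guide covers.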
http://guides.dataverse.org/en/latest/installation/advanced.html#id1