Replies: 2 comments 1 reply
-
Hey, so yes, Marmot is being used in production. At boot time Marmot tries to flush everything to disk, so that the DB is snapshot-ready and there are no outstanding WAL changes. If you have a process that is locking the DB and won't release the lock within the timeout (10s or 30s, I'd have to check), this step will fail. Since Marmot is process-based replication, it's very important that you don't hold a lock on the DB forever, because then Marmot can't step in and replicate changes.
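This checkpoint-blocking behavior can be reproduced with plain SQLite, independent of Marmot. The sketch below (standard-library `sqlite3`; the file name and table are illustrative, not from the thread) shows `PRAGMA wal_checkpoint(TRUNCATE)` reporting busy while a second connection holds an open read transaction, then succeeding once the lock is released:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# Writer: put the database in WAL mode and commit a row,
# so there are outstanding frames in the WAL file.
w = sqlite3.connect(path, isolation_level=None)
w.execute("PRAGMA journal_mode=WAL")
w.execute("PRAGMA busy_timeout=0")  # fail fast instead of waiting
w.execute("CREATE TABLE t (x)")
w.execute("INSERT INTO t VALUES (1)")

# Reader: open a transaction and keep it open, like another
# process sitting on the database without releasing its lock.
r = sqlite3.connect(path, isolation_level=None)
r.execute("BEGIN")
r.execute("SELECT * FROM t").fetchall()

# A TRUNCATE checkpoint must wait until no reader is using the WAL;
# with the reader still active, the first ('busy') column is 1.
busy_while_locked = w.execute("PRAGMA wal_checkpoint(TRUNCATE)").fetchone()[0]

# Once the reader releases its transaction, the checkpoint can complete.
r.rollback()
r.close()
busy_after_release = w.execute("PRAGMA wal_checkpoint(TRUNCATE)").fetchone()[0]

print("busy while locked:", busy_while_locked)
print("busy after release:", busy_after_release)
```

This is the same reason a backup tool holding long read transactions (as turned out to be the case below) can keep a checkpoint from completing.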
-
Thanks for the input about the lock and the WAL ... I've never dealt with sql/sqlite3 in this manner before and don't understand all the nuances of it. Turns out it was litestream locking it, not my application in Docker ... :-) Thank you for Marmot, just what I need to test replication of my database! Now to figure out the cluster stuff.
-
Hi All,
I was under the impression that Marmot can be used on a production (in-use/live) database. Am I mistaken about that?
I have a db.sqlite3 in use within a Docker container, and when I run Marmot with db_path set to the live production path, it doesn't work. But if I copy db.sqlite3 to /tmp and update the toml config file, it works fine.
I can get the test config using the /tmp path working properly with a 2nd node as well. It's only when pointing to the live production DB path that I see the "forcing wal checkpoint" message...
vw1-test.toml config