-
Hi! I was reading the code, hoping to understand how backups are done, and encountered this bit:

```go
func (bm *BackupManager) backupFull(ctx context.Context, op bkop.Operator) error {
	dumpDir := filepath.Join(bm.workDir, "dump")
	if err := os.MkdirAll(dumpDir, 0755); err != nil {
		return fmt.Errorf("failed to make dump directory: %w", err)
	}
	defer os.RemoveAll(dumpDir)
	if err := op.DumpFull(ctx, dumpDir); err != nil {
		return fmt.Errorf("failed to take a full dump: %w", err)
	}
```

Am I understanding this right: moco-backup is going to dump the DBs to a folder and then pipe-compress them into S3? I have a database of around 600 GB; does that mean I need ~600 GB of free space for the moco-backup container? Can I set up a storage class for it? Thanks
-
> moco-backup is going to dump the DBs to a folder and then pipe-compress them into S3?

Correct, but compression takes place beforehand.

> does that mean I need ~600 GB of free space for the moco-backup container?

Not correct. moco uses mysqlsh for backups, which compresses the data with zstd on the fly, so the result will be much smaller. The exact directory usage can be checked through the MySQLCluster's `status.backup.workDirUsage` field after the backup.

> Can I set up a storage class for it?

Yes. We take backups of a ~2 TB database using TopoLVM for a generic ephemeral volume.
Specify the volume spec in BackupPolicy at `spec.jobConfig.workVolume`.
https://github.com/cybozu-go/moco/blob/main/docs/usag…
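As a rough sketch of what that looks like (the name, schedule, bucket, storage class, and size below are illustrative, not taken from this thread; check the linked docs for the exact schema and current `apiVersion`), a BackupPolicy requesting a generic ephemeral volume from an explicit StorageClass might look like:

```yaml
# Sketch only: all concrete values here are hypothetical examples.
apiVersion: moco.cybozu.com/v1beta2
kind: BackupPolicy
metadata:
  name: daily-backup            # hypothetical name
spec:
  schedule: "@daily"
  jobConfig:
    serviceAccountName: backup  # hypothetical service account
    bucketConfig:
      bucketName: moco-backups  # hypothetical bucket
    workVolume:
      # A generic ephemeral volume lets the backup Job claim space from a
      # StorageClass (e.g. TopoLVM) instead of using node-local emptyDir.
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: topolvm-provisioner  # assumed class name
            resources:
              requests:
                storage: 100Gi
```

`workVolume` accepts a standard Kubernetes volume source, so `emptyDir` works too; an ephemeral PVC as above is what lets you pick a storage class and size it independently of the node's disk.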