Merge pull request #7601 from tangledbytes/utkarsh/add/migration-wal
[NC | NSFS] WAL based tape migrations and recalls
tangledbytes authored Feb 14, 2024
2 parents f665bd6 + c696970 commit 3fb4ab3
Showing 25 changed files with 1,523 additions and 83 deletions.
33 changes: 27 additions & 6 deletions config.js
@@ -722,6 +722,30 @@ config.NSFS_RENAME_RETRIES = 3;
config.NSFS_VERSIONING_ENABLED = true;
config.NSFS_UPDATE_ISSUES_REPORT_ENABLED = true;

config.NSFS_GLACIER_LOGS_DIR = '/var/run/noobaa-nsfs/wal';
config.NSFS_GLACIER_LOGS_MAX_INTERVAL = 15 * 60 * 1000;

// NSFS_GLACIER_ENABLED can override internal autodetection and will force
// the use of restore for all objects.
config.NSFS_GLACIER_ENABLED = false;
config.NSFS_GLACIER_LOGS_ENABLED = true;
config.NSFS_GLACIER_BACKEND = 'TAPECLOUD';

// TAPECLOUD Glacier backend specific configs
config.NSFS_GLACIER_TAPECLOUD_BIN_DIR = '/opt/ibm/tapecloud/bin';

// NSFS_GLACIER_MIGRATE_INTERVAL indicates the interval between runs
// of `manage_nsfs glacier migrate`
config.NSFS_GLACIER_MIGRATE_INTERVAL = 15 * 60 * 1000;

// NSFS_GLACIER_RESTORE_INTERVAL indicates the interval between runs
// of `manage_nsfs glacier restore`
config.NSFS_GLACIER_RESTORE_INTERVAL = 15 * 60 * 1000;

// NSFS_GLACIER_EXPIRY_INTERVAL indicates the interval between runs
// of `manage_nsfs glacier expiry`
config.NSFS_GLACIER_EXPIRY_INTERVAL = 12 * 60 * 60 * 1000;

////////////////////////////
// NSFS NON CONTAINERIZED //
////////////////////////////
@@ -742,11 +766,8 @@ config.BASE_MODE_CONFIG_DIR = 0o700;

config.NSFS_WHITELIST = [];

// NSFS_RESTORE_ENABLED can override internal autodetection and will force
// the use of restore for all objects.
config.NSFS_RESTORE_ENABLED = false;
config.NSFS_HEALTH_ENDPOINT_RETRY_COUNT = 3
config.NSFS_HEALTH_ENDPOINT_RETRY_DELAY = 10
config.NSFS_HEALTH_ENDPOINT_RETRY_COUNT = 3;
config.NSFS_HEALTH_ENDPOINT_RETRY_DELAY = 10;

//Quota
config.QUOTA_LOW_THRESHOLD = 80;
@@ -961,4 +982,4 @@ module.exports.reload_nsfs_nc_config = reload_nsfs_nc_config;
load_nsfs_nc_config();
reload_nsfs_nc_config();
load_config_local();
load_config_env_overrides();
load_config_env_overrides();
131 changes: 131 additions & 0 deletions docs/design/NSFSGlacierStorageClass.md
@@ -0,0 +1,131 @@
# NSFS Glacier Storage Class

## Goal
- Support a "GLACIER" storage class in NooBaa which should behave similarly to the AWS "GLACIER" storage class.
- NooBaa should allow limited support of `RestoreObject` API.

## Approach
The current approach to supporting the `GLACIER` storage class separates the implementation into two parts.
The main NooBaa process only manages metadata on the files/objects via extended attributes and maintains the relevant
data in a log file. A separate process (currently `manage_nsfs`) manages the actual movement of the files between
disk and tape.

There are 3 primary flows of concern, and this document discusses all of them:
1. Upload an object to the `GLACIER` storage class (API: `PutObject`).
2. Restore objects that were uploaded to the `GLACIER` storage class (API: `RestoreObject`).
3. Copy objects whose source is an object stored in `GLACIER` (API: `PutObject`).

### WAL
A key component of all the flows is the write-ahead log (WAL). NooBaa has a `SimpleWAL` which, as the name states,
is extremely simple in some senses: it does not deal with fsync issues, partial writes, holes, etc., but rather just
appends data separated by a newline character.

`SimpleWAL` features:
1. Exposes an `append` method which adds data to the file.
2. Can perform auto-rotation of the file, making sure that a single WAL never grows too huge for the
WAL consumer to consume.
3. Exposes a `process` method which allows "safe" iteration over the previous WAL files.
4. Tries to make sure that no data loss happens due to process-level races.

#### Races which are handled by the current implementation
1. `n` processes open the WAL file while a "consumer" swoops in and tries to process the file, effectively losing the
current writes (by processing a partially written file and ultimately invoking `unlink` on it) - This isn't
possible, as the `process` method makes sure that it never iterates over the "current active file".
2. `k` processes out of `n` (such that `k < n`) open the WAL while a "consumer" swoops in and tries to process the
file, effectively losing the current writes (by unlinking a file others hold a reference to) - Although the `process`
method will not protect against this, as technically the "current active file" is a different file, it is still **not**
possible: the "consumer" must hold an "EXCLUSIVE" lock on a file before it can process it, which makes sure
that for as long as any process is writing to the file, the "consumer" cannot consume it and will block.
3. `k` processes out of `n` (such that `k < n`) open the WAL, but before the NSFS process can get a "SHARED" lock on
the file, the "consumer" process swoops in, processes the file, and then issues `unlink` on it. The unlink will
not delete the file while the `k` processes hold open FDs to it, but as soon as those processes finish writing and
close their FDs, the file will be deleted, resulting in lost writes - This isn't possible, as `SimpleWAL`
does not allow writing to a file until it can get a lock on the file and ensure that the file has `> 0` links.
If there are no links, it reopens the file, assuming that the consumer has issued `unlink` on the file
it holds the FD to.
4. Multiple processes try to swap the same file, causing issues - This isn't possible, as a process needs to acquire
a "swap lock" before it performs the swap, which essentially serializes the operations. Further, the swap is done only
once, by ensuring that a process only swaps if the file's current `inode` matches the `inode` it got when it opened the
file initially; if not, it skips the swap.

### Requirements for the `TAPECLOUD` backend
1. Scripts should be placed in the `config.NSFS_GLACIER_TAPECLOUD_BIN_DIR` directory.
2. The `migrate` script should take a file name and perform migration of the files listed in the given file. The output should comply with the `eeadm migrate` command.
3. The `recall` script should take a file name and perform recall of the files listed in the given file. The output should comply with the `eeadm recall` command.
4. The `task_show` script should take a task ID as its argument and output its status. The output should be similar to `eeadm task show -r <id>`.
5. The `scan_expired` script should take a directory name and dump files into it. The dumped files should contain the names of all the files which need to be migrated back to disk, newline-separated.
6. The `low_free_space` script should output `true` if the disk has low free space and `false` otherwise.

### Flow 1: Upload Object to Glacier
As mentioned earlier, any operation related to `GLACIER` is handled in 2 phases. One phase is immediate
and is managed by the NSFS process itself, while the other phase must be invoked separately
and manages the actual movement of the file.

#### Phase 1
1. PutObject is requested with storage class set to `GLACIER`.
2. NooBaa rejects the request if NooBaa isn't configured to support the given storage class. This is **not** enabled
by default and needs to be enabled via `config-local.js` by setting `config.NSFS_GLACIER_ENABLED = true` and `config.NSFS_GLACIER_LOGS_ENABLED = true`.
3. NooBaa will set the storage class to `GLACIER` by setting `user.storage_class` extended attribute.
4. NooBaa creates a simple WAL (Write Ahead Log) and appends the filename to the log file.
5. Completes the upload.
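The enablement in step 2 is a plain `config-local.js` override; a minimal sketch (values are the defaults shipped in `config.js`, and the exact export shape of the override file is an assumption - adjust to your deployment):

```javascript
// config-local.js - enables GLACIER support in NSFS. Only the two *_ENABLED
// flags are actually flipped here; the other values restate the defaults.
module.exports = {
    NSFS_GLACIER_ENABLED: true,
    NSFS_GLACIER_LOGS_ENABLED: true,
    NSFS_GLACIER_BACKEND: 'TAPECLOUD',
    NSFS_GLACIER_LOGS_DIR: '/var/run/noobaa-nsfs/wal',
    NSFS_GLACIER_TAPECLOUD_BIN_DIR: '/opt/ibm/tapecloud/bin',
};
```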

Once the upload is complete, the file sits on the disk until the second process kicks in and actually moves
the file, but the main NooBaa process does not concern itself with the actual file state; rather, it relies solely on the
extended attributes to judge the state of the file. The implication is that NooBaa will refuse a read operation
on the file, even if it is on disk, unless the user explicitly issues a `RestoreObject` (it should be noted that this is what AWS
does as well).

#### Phase 2
1. A scheduler (e.g. cron, a human, a script, etc.) issues `node src/cmd/manage_nsfs glacier migrate --interval <val>`.
2. The command first acquires an "EXCLUSIVE" lock so as to ensure that only one tape management command runs at a time.
3. Once the process has the lock, it starts to iterate over the currently inactive (previous) WAL files.
4. Before processing a WAL file, the process takes an "EXCLUSIVE" lock on the file, ensuring that it is indeed the only
process processing that file.
5. It reads the WAL one line at a time and ensures the following:
    1. The file still exists.
    2. The file still has the `GLACIER` storage class (the class can change if the user uploads another object with the
    `STANDARD` storage class over it).
    3. The file doesn't have any of the `RestoreObject` extended attributes. This ensures that if the file was marked
    for restoration as soon as it was uploaded, we don't perform the migration at all, avoiding unnecessary
    work and making sure that we don't end up racing with ourselves.
6. Once a file name passes all the above criteria, we add it to a temporary WAL and hand the file name
over to the `migrate` script, which should be in the `config.NSFS_GLACIER_TAPECLOUD_BIN_DIR` directory. We expect the script to take the file name as its first parameter and perform the migration. If `config.NSFS_GLACIER_BACKEND` is set to `TAPECLOUD` (the default), we expect the script's output to comply with the `eeadm migrate` command.
7. We delete the temporary WAL that we created.
8. We delete the WAL created by the NSFS process **iff** there were no failures in `migrate`. In case of failures we skip the WAL
deletion as a way to retry on the next trigger of the script. It should be noted that NooBaa's `migrate` (`TAPECLOUD` backend) invocation does **not** consider `DUPLICATE TASK` an error.

### Flow 2: Restore Object
As mentioned earlier, any operation related to `GLACIER` is handled in 2 phases. One phase is immediate
and is managed by the NSFS process itself, while the other phase must be invoked separately
and manages the actual movement of the file.

#### Phase 1
1. RestoreObject is requested with a positive number of days.
2. NooBaa rejects the request if NooBaa isn't configured to support the given storage class. This is **not** enabled
by default and needs to be enabled via `config-local.js` by setting `config.NSFS_GLACIER_ENABLED = true` and `config.NSFS_GLACIER_LOGS_ENABLED = true`.
3. NooBaa performs a number of checks to ensure that the operation is valid (for example, that there is no restore
request already in progress, etc.).
4. NooBaa saves the filename to a simple WAL (Write Ahead Log).
5. Returns success, indicating that the restore request has been accepted.

#### Phase 2
1. A scheduler (e.g. cron, a human, a script, etc.) issues `node src/cmd/manage_nsfs glacier restore --interval <val>`.
2. The command first acquires an "EXCLUSIVE" lock so as to ensure that only one tape management command runs at a time.
3. Once the process has the lock, it starts to iterate over the currently inactive (previous) WAL files.
4. Before processing a WAL file, the process takes an "EXCLUSIVE" lock on the file, ensuring that it is indeed the only
process processing that file.
5. It reads the WAL one line at a time and stores the names of the files that we expect to fail during an eeadm recall
(this can happen, for example, because a `RestoreObject` was issued for a file but the file was later deleted before we could
actually process it).
6. The WAL is handed over to the `recall` script, which should be present in the `config.NSFS_GLACIER_TAPECLOUD_BIN_DIR` directory. We expect the script to take the file name as its first parameter and perform the recall. If `config.NSFS_GLACIER_BACKEND` is set to `TAPECLOUD` (the default), we expect the script's output to comply with the `eeadm recall` command.
7. If we get any unexpected failures, we mark the run a failure and make sure we do not delete the WAL file (so as to retry later).
8. We iterate over the WAL again to set the final extended attributes. This makes sure that we communicate the latest state to
the NSFS processes.
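Step 8's bookkeeping amounts to turning the requested days into an absolute expiry that the NSFS processes can read back; a minimal sketch (how the expiry is actually stored is not specified in this design, so the ISO-string form is an assumption):

```javascript
'use strict';
// Convert a RestoreObject 'days' value into an absolute expiry timestamp.
function restore_expiry(days, now = new Date()) {
    const MS_PER_DAY = 24 * 60 * 60 * 1000;
    return new Date(now.getTime() + days * MS_PER_DAY).toISOString();
}

console.log(restore_expiry(1, new Date('2024-02-14T00:00:00.000Z'))); // 2024-02-15T00:00:00.000Z
```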

### Flow 3: Copy Object with Glacier Object as copy source
This is very similar to Flow 1, with some additional checks.
If the source file is not in the `GLACIER` storage class then the normal procedure kicks in.
If the source file is in the `GLACIER` storage class then:
- NooBaa refuses the copy if the file is not already restored (similar to AWS behaviour).
- NooBaa accepts the copy if the file is already restored (similar to AWS behaviour).
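The gate above can be sketched as a predicate over the source object's extended attributes (the restore-state attribute name is an assumption; only `user.storage_class` is named in this design):

```javascript
'use strict';
// Flow 3 gate: copying from a GLACIER source is only allowed once the source
// has been restored, mirroring AWS behaviour for restored objects.
function can_copy_from(src_xattrs) {
    if (src_xattrs['user.storage_class'] !== 'GLACIER') return true; // normal copy path
    return Boolean(src_xattrs['user.noobaa.restore.expiry']);        // hypothetical attr
}

console.log(can_copy_from({ 'user.storage_class': 'STANDARD' })); // true
console.log(can_copy_from({ 'user.storage_class': 'GLACIER' }));  // false
```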

29 changes: 28 additions & 1 deletion src/cmd/manage_nsfs.js
@@ -18,11 +18,12 @@ const ManageCLIError = require('../manage_nsfs/manage_nsfs_cli_errors').ManageCL
const NSFS_CLI_ERROR_EVENT_MAP = require('../manage_nsfs/manage_nsfs_cli_errors').NSFS_CLI_ERROR_EVENT_MAP;
const ManageCLIResponse = require('../manage_nsfs/manage_nsfs_cli_responses').ManageCLIResponse;
const NSFS_CLI_SUCCESS_EVENT_MAP = require('../manage_nsfs/manage_nsfs_cli_responses').NSFS_CLI_SUCCESS_EVENT_MAP;
const manage_nsfs_glacier = require('../manage_nsfs/manage_nsfs_glacier');
const bucket_policy_utils = require('../endpoint/s3/s3_bucket_policy_utils');
const nsfs_schema_utils = require('../manage_nsfs/nsfs_schema_utils');
const { print_usage } = require('../manage_nsfs/manage_nsfs_help_utils');
const { TYPES, ACTIONS, VALID_OPTIONS, OPTION_TYPE,
LIST_ACCOUNT_FILTERS, LIST_BUCKET_FILTERS} = require('../manage_nsfs/manage_nsfs_constants');
LIST_ACCOUNT_FILTERS, LIST_BUCKET_FILTERS, GLACIER_ACTIONS } = require('../manage_nsfs/manage_nsfs_constants');
const NoobaaEvent = require('../manage_nsfs/manage_nsfs_events_utils').NoobaaEvent;

function throw_cli_error(error_code, detail, event_arg) {
@@ -105,6 +106,8 @@ async function main(argv = minimist(process.argv.slice(2))) {
await bucket_management(argv, from_file);
} else if (type === TYPES.IP_WHITELIST) {
await whitelist_ips_management(argv);
} else if (type === TYPES.GLACIER) {
await glacier_management(argv);
} else {
// we should not get here (we check it before)
throw_cli_error(ManageCLIError.InvalidType);
@@ -822,6 +825,8 @@ function validate_type_and_action(type, action) {
if (!Object.values(ACTIONS).includes(action)) throw_cli_error(ManageCLIError.InvalidAction);
} else if (type === TYPES.IP_WHITELIST) {
if (action !== '') throw_cli_error(ManageCLIError.InvalidAction);
} else if (type === TYPES.GLACIER) {
if (!Object.values(GLACIER_ACTIONS).includes(action)) throw_cli_error(ManageCLIError.InvalidAction);
}
}

@@ -838,6 +843,8 @@ function validate_no_extra_options(type, action, input_options) {
valid_options = VALID_OPTIONS.bucket_options[action];
} else if (type === TYPES.ACCOUNT) {
valid_options = VALID_OPTIONS.account_options[action];
} else if (type === TYPES.GLACIER) {
valid_options = VALID_OPTIONS.glacier_options[action];
} else {
valid_options = VALID_OPTIONS.whitelist_options;
}
@@ -942,6 +949,26 @@ function _validate_access_keys(argv) {
})) throw_cli_error(ManageCLIError.AccountSecretKeyFlagComplexity);

}
async function glacier_management(argv) {
const action = argv._[1] || '';
await manage_glacier_operations(action, argv);
}

async function manage_glacier_operations(action, argv) {
switch (action) {
case GLACIER_ACTIONS.MIGRATE:
await manage_nsfs_glacier.process_migrations();
break;
case GLACIER_ACTIONS.RESTORE:
await manage_nsfs_glacier.process_restores();
break;
case GLACIER_ACTIONS.EXPIRY:
await manage_nsfs_glacier.process_expiry();
break;
default:
throw_cli_error(ManageCLIError.InvalidGlacierOperation);
}
}

exports.main = main;
if (require.main === module) main();
8 changes: 8 additions & 0 deletions src/deploy/NVA_build/standalone_deploy.sh
@@ -44,11 +44,19 @@ function execute() {
fi
}

function sigterm() {
echo "SIGTERM received"
kill -TERM $(jobs -p)
exit 0
}

function main() {
if [ "${STANDALONE_SETUP_ENV}" = "true" ]; then
setup_env
fi

trap sigterm SIGTERM

# Start NooBaa processes
execute "npm run web" web.log
sleep 10
6 changes: 6 additions & 0 deletions src/manage_nsfs/manage_nsfs_cli_errors.js
@@ -246,6 +246,12 @@ ManageCLIError.InvalidAccountDistinguishedName = Object.freeze({
message: 'Account distinguished name was not found',
http_code: 400,
});
ManageCLIError.InvalidGlacierOperation = Object.freeze({
code: 'InvalidGlacierOperation',
message: 'only "migrate", "restore" and "expiry" subcommands are supported',
http_code: 400,
});


////////////////////////
//// BUCKET ERRORS /////
17 changes: 16 additions & 1 deletion src/manage_nsfs/manage_nsfs_constants.js
@@ -4,7 +4,8 @@
const TYPES = {
ACCOUNT: 'account',
BUCKET: 'bucket',
IP_WHITELIST: 'whitelist'
IP_WHITELIST: 'whitelist',
GLACIER: 'glacier',
};

const ACTIONS = {
@@ -15,6 +16,12 @@ const ACTIONS = {
STATUS: 'status'
};

const GLACIER_ACTIONS = {
MIGRATE: 'migrate',
RESTORE: 'restore',
EXPIRY: 'expiry',
};

const GLOBAL_CONFIG_ROOT = 'config_root';
const GLOBAL_CONFIG_OPTIONS = new Set(['from_file', GLOBAL_CONFIG_ROOT, 'config_root_backend']);

@@ -34,11 +41,18 @@ const VALID_OPTIONS_BUCKET = {
'status': new Set(['name', GLOBAL_CONFIG_ROOT]),
};

const VALID_OPTIONS_GLACIER = {
'migrate': new Set([ GLOBAL_CONFIG_ROOT]),
'restore': new Set([ GLOBAL_CONFIG_ROOT]),
'expiry': new Set([ GLOBAL_CONFIG_ROOT]),
};

const VALID_OPTIONS_WHITELIST = new Set(['ips', GLOBAL_CONFIG_ROOT]);

const VALID_OPTIONS = {
account_options: VALID_OPTIONS_ACCOUNT,
bucket_options: VALID_OPTIONS_BUCKET,
glacier_options: VALID_OPTIONS_GLACIER,
whitelist_options: VALID_OPTIONS_WHITELIST,
};

@@ -70,6 +84,7 @@ const LIST_BUCKET_FILTERS = ['name'];
// EXPORTS
exports.TYPES = TYPES;
exports.ACTIONS = ACTIONS;
exports.GLACIER_ACTIONS = GLACIER_ACTIONS;
exports.VALID_OPTIONS = VALID_OPTIONS;
exports.OPTION_TYPE = OPTION_TYPE;
