feat(replays): add replay_id uuid processors to config #5791

Merged
JoshFerge merged 10 commits into master from jferg/replay-idp-processors on Apr 18, 2024

Conversation

JoshFerge (Member)

split out from #5787

github-actions bot commented Apr 17, 2024

This PR has a migration; here is the generated SQL

-- start migrations

-- forward migration discover : 0008_discover_add_replay_id
Local op: ALTER TABLE discover_local ADD COLUMN IF NOT EXISTS replay_id Nullable(UUID) AFTER span_id;
Distributed op: ALTER TABLE discover_dist ADD COLUMN IF NOT EXISTS replay_id Nullable(UUID) AFTER span_id;
-- end forward migration discover : 0008_discover_add_replay_id

-- backward migration discover : 0008_discover_add_replay_id
Distributed op: ALTER TABLE discover_dist DROP COLUMN IF EXISTS replay_id;
Local op: ALTER TABLE discover_local DROP COLUMN IF EXISTS replay_id;
-- end backward migration discover : 0008_discover_add_replay_id
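
For context, a minimal sketch of the Snuba ClickHouse node migration that would emit SQL like the above, assuming the forwards_ops/backwards_ops style and the operations.AddColumn/DropColumn helpers used by recent Snuba migrations; the module path and exact signatures are assumptions for illustration, not copied from this PR.

# Hypothetical sketch of snuba_migrations/discover/0008_discover_add_replay_id.py
from typing import Sequence

from snuba.clickhouse.columns import UUID, Column
from snuba.clusters.storage_sets import StorageSetKey
from snuba.migrations import migration, operations
from snuba.migrations.columns import MigrationModifiers as Modifiers
from snuba.migrations.operations import OperationTarget, SqlOperation


class Migration(migration.ClickhouseNodeMigration):
    blocking = False

    def forwards_ops(self) -> Sequence[SqlOperation]:
        # Adds Nullable(UUID) replay_id after span_id on both the local and distributed tables.
        return [
            operations.AddColumn(
                storage_set=StorageSetKey.DISCOVER,
                table_name="discover_local",
                column=Column("replay_id", UUID(Modifiers(nullable=True))),
                after="span_id",
                target=OperationTarget.LOCAL,
            ),
            operations.AddColumn(
                storage_set=StorageSetKey.DISCOVER,
                table_name="discover_dist",
                column=Column("replay_id", UUID(Modifiers(nullable=True))),
                after="span_id",
                target=OperationTarget.DISTRIBUTED,
            ),
        ]

    def backwards_ops(self) -> Sequence[SqlOperation]:
        # Drops the column in reverse order: distributed table first, then local.
        return [
            operations.DropColumn(
                storage_set=StorageSetKey.DISCOVER,
                table_name="discover_dist",
                column_name="replay_id",
                target=OperationTarget.DISTRIBUTED,
            ),
            operations.DropColumn(
                storage_set=StorageSetKey.DISCOVER,
                table_name="discover_local",
                column_name="replay_id",
                target=OperationTarget.LOCAL,
            ),
        ]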

@@ -157,6 +157,7 @@ schema:
},
{ name: app_start_type, type: String },
{ name: profile_id, type: UUID, args: { schema_modifiers: [nullable] } },
{ name: replay_id, type: UUID, args: { schema_modifiers: [nullable] } },

Member

why is this needed for transactions?

Member

do you also need to add a replay_id column like this to discover.yaml?

Member Author

because we want to query on this column which exists in the transactions dataset

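To make the "query on this column" motivation concrete, here is a hedged sketch of building such a query with the public snuba-sdk client; the entity name ("discover_transactions"), column names, app_id, referrer, and filter values are illustrative assumptions, not taken from this PR.

# Illustrative only: build a Snuba request that selects the new replay_id column.
from datetime import datetime, timedelta

from snuba_sdk import Column, Condition, Entity, Limit, Op, Query, Request

query = Query(
    match=Entity("discover_transactions"),  # assumed entity name
    select=[Column("event_id"), Column("replay_id")],
    where=[
        Condition(Column("project_id"), Op.EQ, 1),
        Condition(Column("finish_ts"), Op.GTE, datetime.utcnow() - timedelta(hours=1)),
        Condition(Column("finish_ts"), Op.LT, datetime.utcnow()),
    ],
    limit=Limit(10),
)

request = Request(
    dataset="discover",        # assumed dataset name
    app_id="example_app",      # illustrative
    query=query,
    tenant_ids={"organization_id": 1, "referrer": "example.replay_id_lookup"},
)

# The serialized SnQL can be inspected before sending the request to Snuba's query endpoint.
print(query.serialize())
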
@@ -88,6 +88,7 @@ schema:
args: { schema_modifiers: [nullable], size: 64 },
},
{ name: deleted, type: UInt, args: { size: 8 } },
{ name: trace_id, type: UUID, args: { schema_modifiers: [nullable] } }

Member

why are we adding trace_id? Isn't this PR a follow-up to the migration where you added replay_id?

Member

Do you also have to add a column to the entities/discover.yaml file?

Member Author

thanks for the catch, should have been replay_id

codecov bot commented Apr 18, 2024

Test Failures Detected: Due to failing tests, we cannot provide coverage reports at this time.

❌ Failed Test Results:

Completed 456 tests with 1 failed, 454 passed and 1 skipped.

Failed test:
Testsuite: pytest
Test name: tests.admin.test_api::test_set_allocation_policy_config
Envs: default

Failure message:
Traceback (most recent call last):
  File ".../tests/admin/test_api.py", line 505, in test_set_allocation_policy_config
    assert response.json is not None and len(response.json) == 5
AssertionError: assert (response.json is not None and 4 == 5)
  where response.json is the 200 OK body (WrapperTestResponse, 8926 bytes) listing only 4 allocation
  policies: ConcurrentRateLimitAllocationPolicy, BytesScannedWindowAllocationPolicy,
  ReferrerGuardRailPolicy, BytesScannedRejectingPolicy (full policy-config dump from pytest's
  assertion introspection truncated).

Base automatically changed from jferg/migration-replay-discover to master on April 18, 2024 at 15:39
JoshFerge enabled auto-merge (squash) on April 18, 2024 at 17:51
JoshFerge merged commit 8ede52a into master on Apr 18, 2024
30 checks passed
JoshFerge deleted the jferg/replay-idp-processors branch on April 18, 2024 at 19:27

getsentry-bot (Contributor)

PR reverted: cdfd3ba

getsentry-bot added a commit that referenced this pull request Apr 18, 2024
This reverts commit 8ede52a.

Co-authored-by: JoshFerge <1976777+JoshFerge@users.noreply.github.com>
JoshFerge restored the jferg/replay-idp-processors branch on April 19, 2024 at 00:00