Datastax metrics loading painfully slow through Thanos #7152
Unanswered
koryveeru asked this question in Questions & Answers
Replies: 2 comments
- Would highly appreciate any pointers on this issue.
- Please provide more details. What's your scale and active series count? What query are you running that is slow, and how slow is it?
Hi - we are attempting to leverage Thanos + Prometheus to store Datastax metrics from two data centers and provide a global overview in Grafana dashboards. We followed the recommended design pattern and configuration from the Thanos website, but have had no luck loading the dashboards, especially when retrieving data from our internally hosted S3-compatible storage device.
Here is the high-level design pattern we have in place:
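Roughly, a global Thanos Querier fans out to the Prometheus sidecars in each data center and to a Store Gateway in front of the S3-compatible bucket. A simplified sketch of that query layer (addresses are placeholders, not our exact flags):

# Global querier fanning out to both data centers and to the store gateway
# (older Thanos releases use --store instead of --endpoint)
thanos query \
  --http-address=0.0.0.0:9090 \
  --grpc-address=0.0.0.0:10901 \
  --endpoint=<dc1-prometheus-sidecar>:10901 \
  --endpoint=<dc2-prometheus-sidecar>:10901 \
  --endpoint=<store-gateway>:10901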
Here are the config files for each of the components:
Query Frontend cache settings:
type: MEMCACHED
config:
  addresses: ["localhost:11211"]
  timeout: 500ms
  max_idle_connections: 300
  max_item_size: 1MiB
  max_async_concurrency: 40
  max_async_buffer_size: 10000
  max_get_multi_concurrency: 300
  max_get_multi_batch_size: 0
  dns_provider_update_interval: 10s
  expiration: 336h
  auto_discovery: true
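This cache config is handed to the query-frontend as its response cache, alongside query splitting; a rough sketch of the invocation (the config file path and downstream URL are placeholders):

# Query Frontend with 24h query splitting and the memcached response cache above
thanos query-frontend \
  --http-address=0.0.0.0:10902 \
  --query-frontend.downstream-url=http://<thanos-query>:9090 \
  --query-range.split-interval=24h \
  --query-range.response-cache-config-file=/etc/thanos/query-frontend-cache.yaml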
Thanos store cache settings:
type: MEMCACHED  # Case-insensitive
config:
  addresses: ["localhost:11211"]
chunk_subrange_size: 16000
max_chunks_get_range_requests: 3
chunk_object_attrs_ttl: 24h
chunk_subrange_ttl: 24h
blocks_iter_ttl: 5m
metafile_exists_ttl: 2h
metafile_doesnt_exist_ttl: 15m
metafile_content_ttl: 24h
metafile_max_size: 1MiB
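The block above is the caching-bucket config for the Store Gateway; the index cache is a separate flag with its own config file (memcached or in-memory). A rough sketch of the store invocation (paths are placeholders):

# Store Gateway with the caching bucket above plus a separate index cache
thanos store \
  --data-dir=/var/thanos/store \
  --objstore.config-file=/etc/thanos/bucket.yaml \
  --store.caching-bucket.config-file=/etc/thanos/store-caching-bucket.yaml \
  --index-cache.config-file=/etc/thanos/index-cache.yaml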
Can any experts out there guide us on whether we are missing something else here? We would really appreciate any suggestions to improve the Grafana dashboard performance.