feat(generic-metrics): Add generic metrics interface #46957
@@ -0,0 +1,42 @@
from abc import ABC, abstractmethod
from typing import Mapping, Sequence, Union

from sentry.sentry_metrics.use_case_id_registry import UseCaseID


class GenericMetricsBackend(ABC):
Review comment (on class GenericMetricsBackend): Separate from Filippo's comment, I think on the base class we should have docstrings for each method.
    @abstractmethod
    def counter(
        self,
        use_case_id: UseCaseID,
        org_id: int,
        project_id: int,
        metric_name: str,
        value: Union[int, float],
        tags: Mapping[str, str],
    ) -> None:
        raise NotImplementedError()

    @abstractmethod
    def set(
        self,
        use_case_id: UseCaseID,
        org_id: int,
        project_id: int,
        metric_name: str,
        value: Sequence[Union[int, str]],
        tags: Mapping[str, str],
    ) -> None:
        raise NotImplementedError()

    @abstractmethod
    def distribution(
        self,
        use_case_id: UseCaseID,
        org_id: int,
        project_id: int,
        metric_name: str,
        value: Sequence[Union[int, float]],
        tags: Mapping[str, str],
    ) -> None:
        raise NotImplementedError()
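For illustration only (nothing below is part of this PR, and the base class's import path is an assumption because the diff does not show file names): a minimal in-memory backend that satisfies the interface above, with the per-method docstrings the review asks for, plus a one-line caller.

```python
from typing import List, Mapping, Sequence, Tuple, Union

# Hypothetical import path for the base class; the diff does not show it.
from sentry.sentry_metrics.generic_metrics_backend import GenericMetricsBackend
from sentry.sentry_metrics.use_case_id_registry import UseCaseID


class InMemoryMetricsBackend(GenericMetricsBackend):
    """Toy backend that records every emitted metric in a list (tests/illustration only)."""

    def __init__(self) -> None:
        self.emitted: List[Tuple] = []

    def counter(
        self,
        use_case_id: UseCaseID,
        org_id: int,
        project_id: int,
        metric_name: str,
        value: Union[int, float],
        tags: Mapping[str, str],
    ) -> None:
        """Emit a single counter increment."""
        self.emitted.append(("c", use_case_id, org_id, project_id, metric_name, value, tags))

    def set(
        self,
        use_case_id: UseCaseID,
        org_id: int,
        project_id: int,
        metric_name: str,
        value: Sequence[Union[int, str]],
        tags: Mapping[str, str],
    ) -> None:
        """Emit a batch of set members (uniqueness is computed downstream)."""
        self.emitted.append(("s", use_case_id, org_id, project_id, metric_name, value, tags))

    def distribution(
        self,
        use_case_id: UseCaseID,
        org_id: int,
        project_id: int,
        metric_name: str,
        value: Sequence[Union[int, float]],
        tags: Mapping[str, str],
    ) -> None:
        """Emit a batch of observed values for a distribution."""
        self.emitted.append(("d", use_case_id, org_id, project_id, metric_name, value, tags))


backend: GenericMetricsBackend = InMemoryMetricsBackend()
backend.counter(UseCaseID.PERFORMANCE, 1, 42, "span.count", 1, {"env": "prod"})
```

Because callers depend only on GenericMetricsBackend, a Kafka-producing or Relay-backed implementation could later be swapped in without touching call sites.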
@@ -0,0 +1,43 @@
from __future__ import annotations

from enum import Enum
from typing import Mapping

from sentry.sentry_metrics.configuration import UseCaseKey


class UseCaseID(Enum):
    PERFORMANCE = "performance"
    RELEASE_HEALTH = "release-health"
Review comment (on UseCaseID): The MRI namespaces used for these cases today are transactions and sessions, what's the reason for changing the names?

Reply: This

Reply: I think the true use cases should be

Another reason for not doing the mapping. Let's say you map what comes from MRI (

I'd just take transactions and sessions as use case ids and avoid the mapping.
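For concreteness, a minimal sketch of the alternative the last reviewer describes (illustrative only, not what this PR implements): reuse the existing MRI namespaces directly as the enum values, so no namespace mapping is needed.

```python
from enum import Enum


class UseCaseID(Enum):
    # Alternative from the review thread: the enum value doubles as the
    # MRI namespace string, so a separate namespace mapping is unnecessary.
    TRANSACTIONS = "transactions"
    SESSIONS = "sessions"
```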
# UseCaseKey will be renamed to MetricPathKey
METRIC_PATH_MAPPING: Mapping[UseCaseID, UseCaseKey] = {
    UseCaseID.RELEASE_HEALTH: UseCaseKey.RELEASE_HEALTH,
    UseCaseID.PERFORMANCE: UseCaseKey.PERFORMANCE,
}
def get_metric_path_from_usecase(use_case: UseCaseID) -> UseCaseKey:
    return METRIC_PATH_MAPPING[use_case]


NAMESPACE_MAPPING: Mapping[UseCaseID, str] = {
    UseCaseID.RELEASE_HEALTH: "sessions",
    UseCaseID.PERFORMANCE: "transactions",
}


def get_namespace_from_usecase(use_case: UseCaseID) -> str:
    if use_case in NAMESPACE_MAPPING:
        return NAMESPACE_MAPPING[use_case]

    return use_case.value


def get_usecase_from_namespace(namespace: str) -> UseCaseID:
    for use_case in NAMESPACE_MAPPING:
        if NAMESPACE_MAPPING[use_case] == namespace:
            return use_case

    return UseCaseID(namespace)
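A short usage sketch of the helpers above (assuming the module is importable as sentry.sentry_metrics.use_case_id_registry, the path used by the first file's import):

```python
from sentry.sentry_metrics.configuration import UseCaseKey
from sentry.sentry_metrics.use_case_id_registry import (
    UseCaseID,
    get_metric_path_from_usecase,
    get_namespace_from_usecase,
    get_usecase_from_namespace,
)

# Route a use case to the existing ingestion-path enum.
assert get_metric_path_from_usecase(UseCaseID.PERFORMANCE) is UseCaseKey.PERFORMANCE

# Legacy use cases map to their historical MRI namespaces...
assert get_namespace_from_usecase(UseCaseID.PERFORMANCE) == "transactions"
assert get_namespace_from_usecase(UseCaseID.RELEASE_HEALTH) == "sessions"

# ...and namespaces resolve back either through the mapping or, for new
# use cases, directly through the enum value.
assert get_usecase_from_namespace("transactions") is UseCaseID.PERFORMANCE
assert get_usecase_from_namespace("release-health") is UseCaseID.RELEASE_HEALTH
```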
Review comment: Since this interface is going to produce on Kafka, it will be asynchronous. Still, you will likely want to allow the client to wait for the message to be acknowledged (callback). How do you plan to support this use case? You will likely want an interface that allows you to swap the backend (Relay or Kafka) but still provide a delivery guarantee. How will it look in Relay? Will the call be synchronous? There are multiple ways to achieve this:

Reply: I think, to start, maybe we can have an async API and a sync API. I would start working on this while building the Kafka backend, which will be the next PR. Is there anything else specific to this PR that would need to change?

Reply: I think you should figure out what the async interface looks like (how the client receives the callback; is that a future?) at this point, as you will need such an interface immediately for Kafka.

Reply: Yes, I am thinking about that for the Kafka implementation. Since this PR implements the UseCaseID though, it's a bit blocking for some other work. How would it be if we merged the current state of the interface and, in another PR, revisited it as needed and built the Kafka implementation?
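One possible shape for the async variant discussed in this thread, purely as a sketch (the class and method names are assumptions, not part of the PR): each emitter returns a concurrent.futures.Future that resolves when the backend acknowledges the message, which covers both fire-and-forget callers and callers that need a delivery guarantee.

```python
from abc import ABC, abstractmethod
from concurrent.futures import Future
from typing import Mapping, Union

from sentry.sentry_metrics.use_case_id_registry import UseCaseID


class AsyncGenericMetricsBackend(ABC):
    """Hypothetical async counterpart to GenericMetricsBackend."""

    @abstractmethod
    def counter(
        self,
        use_case_id: UseCaseID,
        org_id: int,
        project_id: int,
        metric_name: str,
        value: Union[int, float],
        tags: Mapping[str, str],
    ) -> Future:
        """Produce a counter and return a Future that completes once the
        message is acknowledged (e.g. by the Kafka producer). Callers that do
        not need a delivery guarantee can ignore the result; callers that do
        can block on .result() or attach .add_done_callback()."""
        raise NotImplementedError()
```

A synchronous Relay-backed implementation could still satisfy this contract by returning an already-completed Future, so the caller-facing interface would stay the same across backends.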