Proposed protocols & implementations for query parameters #780
Conversation
Thank you for your pull request! We could not find a changelog entry for this change. For details on how to document a change, see the contributing guide.
I made a comment on @DevonFulcher's PR over in dbt-semantic-interfaces that this kind of protocol separation might be good, but in the context of that change it seemed fine to keep on with the current layout.
My default is to prefer strongly typed things to weakly typed things, and having optional parameters to satisfy the interface is, from a design perspective, unsatisfying. That said, I'm not at all sure what the right balance is here because I'm having a super hard time keeping track of all of the interfaces and where these things will be used.
Can we set up a 30 minute chat where we just list out the different query param interfaces, where they're defined, where they get called/how they get invoked, who is accessing them, and how they interact with types (like where filter users get no benefit from typing because they give us a string and we resolve it, but GraphQL callers should get support from the API)? I think if we have that list it'll be more obvious how to lay things out.
@@ -815,40 +812,57 @@ def _get_invalid_linkable_specs(
def _parse_order_by(
Ugh, this function is terrible. Sorry. :(
Sounds good - I set up 30 min for the three of us on Monday!
Updated based on our convo today!
GroupByParameter = Union[DimensionOrEntityQueryParameter, TimeDimensionQueryParameter]
InputOrderByParameter = Union[MetricQueryParameter, GroupByParameter]
Non-blocking: should these both have the `Input` prefix? Or neither?
Basically I just wanted to differentiate that `InputOrderByParameter` is a nested input to the `OrderByQueryParameter`. So I followed the naming convention we use for `InputMetrics` on derived metrics, if that makes sense! But I don't think that applies to the `GroupByParameter` since there's no nesting.
Got it, thanks!
Looks great!
Completes SL-993
Description
Proposed updates to the protocols and implementations used for queries.
IMO, we are using the same protocols for too many things. For instance, we had a `QueryParameterDimension` as the type for the `group_by` param. `QueryParameterDimension` was required to have `grain` and `date_part` attributes. That means that if the client wants to pass an entity or a categorical dimension into the `group_by` param, they still need to use an object with `grain` and `date_part` attributes, even though those are not valid attributes for entities or categorical dimensions. This leaves us with 2 options:

1. Keep a single shared protocol and put the onus on clients to pass objects with attributes that aren't valid for the thing they're requesting.
2. Split into separate protocols per input type (e.g., `TimeDimensionQueryParameter`). This creates some awkwardness because we have to accept a `Union` for the `group_by` parameter and figure out which type the user passed.

In this PR I implement option 2, because to me it seems much cleaner to handle that logic in one place (here in MF) than to put the onus on all clients to handle the awkwardness of passing objects around with invalid attributes.
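To make option 2 concrete, here is a rough sketch. The protocol and alias names match the diff above, but the specific members shown (like `name`) and the resolution helper are illustrative assumptions, not the exact implementation:

```python
from typing import Optional, Protocol, Union


class DimensionOrEntityQueryParameter(Protocol):
    """Group-by input for entities and categorical dimensions: no grain or date_part."""

    @property
    def name(self) -> str:
        ...


class TimeDimensionQueryParameter(Protocol):
    """Group-by input for time dimensions, which do carry grain and date_part."""

    @property
    def name(self) -> str:
        ...

    @property
    def grain(self) -> Optional[str]:
        ...

    @property
    def date_part(self) -> Optional[str]:
        ...


# Clients pass whichever shape matches their input; MetricFlow figures out which one it got.
GroupByParameter = Union[DimensionOrEntityQueryParameter, TimeDimensionQueryParameter]


def _resolve_group_by_item(item: GroupByParameter) -> str:
    """Illustrative helper: the type-dispatch awkwardness lives here in MF, not in clients."""
    grain = getattr(item, "grain", None)
    return f"{item.name}__{grain}" if grain is not None else item.name
```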
Second, we were using `DimensionQueryParameter` as the type for both `group_by` and `order_by` inputs. This forces us to add a `descending` parameter to `DimensionQueryParameter`, even though the `descending` param is not valid as a `group_by` input. Instead, I added a new `OrderByQueryParameter` protocol that includes a `descending` attribute and a nested `order_by` item, which can be a `MetricQueryParameter`, `GroupByQueryParameter`, or `TimeDimensionQueryParameter`.
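A hedged sketch of what that protocol could look like; everything beyond the `order_by` and `descending` members described above is an assumption, and `GroupByQueryParameter` here is just a stand-in for the group-by protocols:

```python
from typing import Protocol, Union


class MetricQueryParameter(Protocol):
    @property
    def name(self) -> str:
        ...


class GroupByQueryParameter(Protocol):
    """Stand-in for the entity/dimension/time-dimension group-by protocols."""

    @property
    def name(self) -> str:
        ...


# The nested item an order-by wraps: a metric or a group-by input
# (mirroring the InputOrderByParameter alias in the diff).
InputOrderByParameter = Union[MetricQueryParameter, GroupByQueryParameter]


class OrderByQueryParameter(Protocol):
    """Order-by input: the item to sort by plus a sort direction."""

    @property
    def order_by(self) -> InputOrderByParameter:
        ...

    @property
    def descending(self) -> bool:
        ...
```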
One other note about the `group_by` param: I changed this param to expect a `Tuple`, since that's the only type hint that allows you to have multiple datatypes in one iterable. (This is just an annoying mypy limitation.)
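For illustration, a hypothetical signature using that `Tuple` hint. My best guess at the mypy limitation being referenced: tuples are covariant, so a caller's `Tuple[TimeDimensionQueryParameter, ...]` satisfies `Tuple[GroupByParameter, ...]`, whereas `List` is invariant and would reject the equivalent list. The `query` function below is made up for this example:

```python
from typing import Optional, Protocol, Tuple, Union


class DimensionOrEntityQueryParameter(Protocol):
    name: str


class TimeDimensionQueryParameter(Protocol):
    name: str
    grain: Optional[str]


GroupByParameter = Union[DimensionOrEntityQueryParameter, TimeDimensionQueryParameter]


def query(metrics: Tuple[str, ...], group_by: Tuple[GroupByParameter, ...] = ()) -> None:
    """Hypothetical signature for this example only."""
    ...


# A caller-built tuple containing only time dimensions still type-checks against
# Tuple[GroupByParameter, ...] because tuples are covariant; the equivalent
# List[TimeDimensionQueryParameter] would not satisfy List[GroupByParameter].
```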
Let me know if y'all agree with these changes or if you have other ideas!