📈 New "Dashboard" tab - Design Considerations #961
I am actually bumping up against this same issue as I start to boil down the metrics from the server into the formats the cards need. So, it's not a blocker on my current goal (a rough draft of a carbon footprint card), but it is another place to consider this change. And now that I'm writing it out, we might want to centralize this data mapping for use across cards.
Intuitively, yes, but I don't think we have ever quantified the savings. The general rule of thumb is that as the number of entries that you retrieve grows, collating on the server is going to be much more performant than collating on the client. However, I take your point that it may not need to be exposed to the user. The original goal was just to provide a user-version of the server API. I am open to not exposing this, and choosing it automatically under the hood based on the range selected, for example. Do you have a concrete proposal?
Sounds good to me.
I think the only reason we even include the average speed in the metrics is because the METs (and thus our new mild/moderate/high exercise counts) depend on the average speed for the active transportation mode - walking at 5 mph is very different, from an exertion perspective, than walking at 2 mph. And once we got the metrics, we just treated them like all the other metrics. I am not sure that the mean speed is useful outside the context of METs (I never use it), so we could just omit it from the visualizations.
All in all, this looks great; I look forward to seeing it on staging soon!
The "error bars" project will generate an estimated value along with the range. At least for CO2, that range will be 1 SD. For some of the other metrics, such as distance and duration, it will likely be 1 variance. That is in fact statistically sound 😄 (@humbleOldSage, @rahulkulhalli, @allenmichael099) Also, given that we are redesigning this anyway and are funded by the Department of Energy, we might want to show energy and emissions separately. Right now, given the state of the US grid, energy and emissions are proportional. However, there are significant grid decarbonization efforts ongoing (many of which NREL is very involved in), so once we implement #954 they may start to diverge in a year or so. |
Until we have the estimated values available to use, maybe for now we can use stacked bar charts to show uncertainty. The range between the "low" and "high" estimates can be shown in a lighter color, or potentially with slashed lines, to represent indeterminacy. This page illustrates the kind of visualization I mean:
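For illustration, here is a minimal sketch of that idea in chart.js; the values, labels, and colors are made up. With both axes stacked, each bar renders as a solid "low" segment plus a lighter segment spanning the low-to-high range.

```ts
// Hypothetical sketch of the stacked-bar uncertainty idea (illustrative values).
const lowEstimate = 42;  // e.g. kg CO2, the certain/labeled portion
const highEstimate = 58;

const data = {
  labels: ['This week'],
  datasets: [
    { label: 'Low estimate', data: [lowEstimate], backgroundColor: 'rgba(46,125,50,1)' },
    // stacked on top of the low estimate, in a lighter shade of the same color
    { label: 'Low-to-high range', data: [highEstimate - lowEstimate], backgroundColor: 'rgba(46,125,50,0.35)' },
  ],
};
const options = { scales: { x: { stacked: true }, y: { stacked: true } } };
// pass `data` and `options` to the <Bar> component (react-chartjs-2)
```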
I am not sure what to do about getting these "low-", "moderate-", and "high-intensity" active minute counts. To do this correctly, we would need trip-level information, not just day-level or week-level information. If I go for a 10-minute sprint this morning and take a 30-minute stroll this evening, we should expect that to count as "10 minutes high intensity" and "30 minutes low intensity". However, the only information we have access to is the total walk distance/duration/speed for the entire day. So the best we can do is say which days were high/moderate/low intensity. To get the desired result of "10 minutes high intensity and 30 minutes low intensity", we would need server-side changes.
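To make the day-level limitation concrete, here is a hedged sketch of the best we could do client-side with the metrics we have today; the `classifyDay` helper and its thresholds are illustrative, not CDC definitions.

```ts
// Hypothetical day-level classification from the daily average walk speed
// (thresholds are illustrative, not the CDC's definitions).
type Intensity = 'low' | 'moderate' | 'high';

function classifyDay(avgWalkSpeedMph: number): Intensity {
  if (avgWalkSpeedMph >= 4.0) return 'high';     // brisk walking / running
  if (avgWalkSpeedMph >= 2.5) return 'moderate'; // ordinary walking pace
  return 'low';                                  // strolling
}

// The whole day gets one label, so a 10-minute sprint plus a 30-minute
// stroll collapses into a single "moderate" day instead of 10 high + 30 low.
```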
"Average speed" needs to be handled differently because it is not mathematically correct to average it across days without considering differences in distance and duration between those days. (described in e-mission/e-mission-docs#961) We can comment this out; maybe revisit later
Looks like it is coming along great! Is there a reason you opted to show this chart vertically? Our wireframes from before had these in a horizontal layout. I believe we made that choice in consideration of the "meter" metaphor that we were trying to convey. But if you think the vertical layout is better, I'm happy to have additional discussion and weigh the pros and cons.
I think so. The target lines are the most important point of reference - they give meaning to all the other measurements.
I think I was just most familiar with vertical charts, so I started there. I just flipped it, and looking at the goal lines that way, I think we should find a way to color the lines to convey that "less is more" here. My first instinct when I saw the 2030 goal off to the right was that it would be "better" if my bars were closer to it, which is not the case.
I hadn't thought about it that way, but that totally makes sense. I'm curious to see how the interactivity to see the values of the bars works on a real phone, I think having that pop up saves us from the "where'd it go" concern that I initially had.
I think coloring the bars or a meter across the bottom (2nd or 3rd bullet) could be good. As for the uncertainty, I think that if it pushes us past a goal, it's fair to change the bar color, and this could serve as further motivation to label. I also like the idea of icons if we need them; we could even add those to the lines themselves, potentially marking them better as thresholds. Not sure how crammed that would make those line annotation labels, though.
Seeking advice on how to proceed with this. I think it would require server changes, so is it even worth implementing right now? Maybe we should revisit it later? Is there a suitable substitute we can implement in the meantime?
It is very dear to my heart, but I think we should hold off on it for now. A couple of options:
The classic travel behavior drivers are cost and time, and we have time (although maybe not super visible), but no cost. Both of them could start with a basic value/PkmT and just multiply and combine to give the value.
I went ahead and tried the toggle solution to this issue, and I think it works nicely, but I'm open to other suggestions and feedback!

Simulator.Screen.Recording.-.iPhone.13.Pro.-.2023-08-29.at.17.24.13.mp4
It's a bit unclear to me what the 'group' option represents here. Is that the cumulative emissions for the entire group? Or is it representing the 'average user' in the group? If it's cumulative, it doesn't make sense to show the goals there because those are on a per-capita basis. If it's 'average user', I would rather see them stacked up against my own emissions. With me on one tab and 'average user' on another tab, it's hard for me to compare and see if I'm doing better than average.
It's supposed to be average, but some of the numbers (that I've seen in staging/production) don't make sense to me. "Average for Group" is the label now, and across my phones the values are:
I agree that this is probably the most reasonable way to present "aggregate carbon". I'll test a few of my opcodes in the morning and check on the data at different points in the process to make sure that the intended result (average user) is what's actually happening, or fix it if I find the average is getting lost somewhere. Assuming we confirm the metrics are being averaged, and nrel-commute remains an outlier, would a condition to omit that bar if it's too high make sense? I'd think a cap of 3x the 2030 goal or the user's average (whichever is bigger) might make sense here, to keep the focus on the user's choices over the collective.
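As a concrete version of that proposal (the names and numbers below are hypothetical, not the actual card code):

```ts
// Hypothetical cap: hide the group bar when it exceeds 3x the larger of
// the 2030 goal and the user's own average, to keep the focus on the user.
const GOAL_2030 = 100;    // illustrative per-capita goal
const userAverage = 45;   // illustrative
const groupAverage = 900; // e.g. an outlier like nrel-commute

const cap = 3 * Math.max(GOAL_2030, userAverage);
// chart.js skips null data points, so the outlier bar simply isn't drawn
const groupBarValue = groupAverage <= cap ? groupAverage : null;
```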
Here's something I found: https://stackoverflow.com/a/70377422 It looks like we can set `backgroundColor` to a scriptable function:

```jsx
<Bar ref={barChartRef}
  data={{datasets: chartData.map((e, i) => ({
    ...e,
    // cycle through the default palette, repeat if necessary
    backgroundColor: (ctx: any) => {
      console.debug("ctx", ctx);
      if (ctx.raw.x > 100) return 'red';
      return defaultPalette[i % defaultPalette.length];
    }
  }))}} />
```

We can get the chart context (`ctx`) for each bar this way and use it to color bars conditionally.
Coloring the dotted lines is great! It's small, but I really think it helps a lot. I'm not sure the emojis are as effective, though.
It would be really cool if we could get the bars to "bleed" into red as they approach the 2030 goal, like this (but horizontal):

Or, this example where the gradient covers the full spectrum of green-red:
That looks cool! I'll mess around with it when I get the chance; maybe there's some way to show the gradient + stacked to maintain the uncertainty? I think we need to keep the distinction between certain & uncertain (or labeled and unlabeled). I thought about doing the background as a green -> red gradient, but thought that might be visually overwhelming, and that it would be hard to maintain the goal lines as color transitions rather than letting the gradient take up the entire graph.
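If we try it, chart.js supports this via a scriptable `backgroundColor` that returns a canvas gradient. Here is a rough sketch under the assumption of a horizontal bar chart; the goal value, colors, and `gradientFill` name are illustrative.

```ts
import { ScriptableContext } from 'chart.js';

const GOAL_2030 = 100; // illustrative

// Scriptable fill: green near zero, bleeding into red as bars approach the goal.
function gradientFill(context: ScriptableContext<'bar'>) {
  const { chart } = context;
  const { ctx, chartArea, scales } = chart;
  if (!chartArea) return 'green'; // chart is not laid out yet on the first render
  // horizontal gradient spanning the chart area (bars run left to right)
  const gradient = ctx.createLinearGradient(chartArea.left, 0, chartArea.right, 0);
  gradient.addColorStop(0, 'green');
  // place the red stop at the goal's position on the x scale, clamped to [0, 1]
  const goalStop = Math.min(Math.max(GOAL_2030 / (scales.x.max || 1), 0), 1);
  gradient.addColorStop(goalStop, 'red');
  return gradient;
}

// usage: datasets.map((d) => ({ ...d, backgroundColor: gradientFill }))
```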
Would it be ok to hold off on these for this release cycle so we don't get bogged down on the rest of the rewrite? One easy thing we can do right now, with the metrics we already have, is to show daily active minutes for the past 1-2 weeks (likely as line chart(s)). Although not as rich as a breakdown by intensity, it does at least show the data in more granular chunks and gives the user more things they can explore about their data. This way, we would have weekly active minutes on the front page, and swipable to the right would be daily active minutes.
I am fine with dropping the leaf. Carousel sounds good in principle; it would be good to see what it looks like.
e-mission/e-mission-docs#961 (comment) -> implementing this suggestion: isolate the text to a dedicated card, and place the "meter" card and "text" card in a carousel; now we have three rows, each a carousel. Also isolated the data management functions shared across the two cards into `metricsHelper.ts`. The two cards keep information easily accessible to those using a screen reader, while maintaining focus on the "meter" card and not cluttering the screen.
Here is what I would suggest then
@Abby-Wheelis If you have time; else I'll try it out later, since I'll be working several hours tonight.
I made your abbreviation suggestion in
in an effort to give some more space to the chart itself e-mission/e-mission-docs#961 (comment)
I don't think I ever figured out where
I believe that … If you look at the code just below that, we support both … So here's where we read the data, and then we … and then we just find the number of users (in …)
I looked into the carbon footprint values, and I am seeing some differences, especially on user footprints for a given week, but I haven't figured out exactly what is happening. I checked the … I think there might be a chance that the dates are fetched differently between the two implementations, but I need to keep walking through what's happening in the emulator tomorrow. The other thing I can think of is that somewhere in the summation of the distances there is an error, so I plan to examine that process tomorrow as well.
The left is the devapp running on a real Android phone. The right is the devapp running on an iOS Simulator. I can't think of any good reason for these to yield different metrics.
^I agree that it's taking some digging. So far I've found one difference between old and new: the default calls seem to be different between what's currently in production and the new dashboard. When I opened them both just now, production is showing Sept 8 - Sept 15, but the metrics used to populate it are only the 1st through the 13th (13 days, not 14?), while the new dashboard pulls Aug 31st through Sept 14th (15 days of data) by default. The extra day is trimmed off when the data is divided into weeks (31-6 and 7-13). However, when I set the dates to a single week on my phone (I can't alter the dates on production in the emulator), the numbers are still off by 20-30% between production and the new dashboard, which is very significant. My next step is to hand-calculate what's on the new dashboard based on the data it's using; maybe something got lost in the math when I was re-writing the formatting functions.
Good plan. If the old dashboard is not a reliable source of truth (it seems like it might not be), I think we can use hand-calculations as a better ground truth to compare the new dashboard against.
I'm not sure what you meant by this. If you meant Sept 1st through the 14th, that is 14 days.
oops! yes, Aug 31st. I had noted that the
I just added up all the displayed distances by their labeled modes for the same two weeks (8/31-9/14) by scrolling back through my labeled trips and got: drove alone: 152 miles, shared ride: 3.8 miles, walk: 6.6 miles, bike: 13.4 miles. The three-mile difference on "drove alone" can be explained by the fact that I was using the displayed mileage on the label screen and had lots of trips by that mode, so I believe that is 3 miles of rounding error. And, for the record, the old dashboard shows the same distances within about 0.3 miles. So I'm now pretty convinced that the calculations are right: I've checked the distances used for footprint against the actual trips and stepped through the footprint calculations, and everything is checking out against what I get by hand.
Looking into the shared mode, I think I found the explanation for the discrepancies: when calculating the footprint, the mapping of modes to values is retrieved through a method … I'll start working on a fix to ensure that the custom footprint is used, when needed, in the new dashboard.
Good catch - that's a mistake I made in e-mission/e-mission-phone@5fcc5d4. I was thinking that footprints were tied to base modes, but they are actually specific to each rich mode because we list the …
we had figured out that there were some differences e-mission/e-mission-docs#961 (comment) Eventually, we realized this was because the new dashboard was not using the custom labels. This commit adds the methods that check whether the labels are custom or sensed to `metricsHelper`, checks for custom labels and indicates the need for custom footprint mappings in `CarbonFootprintCard`, and then finally reverts back to "rich modes" rather than "base modes" in `metrics-factory` so we use the custom labels.
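As a hedged illustration of the fix described above (the map contents and helper name are made up, not the actual `metricsHelper` API): footprints are looked up by the rich, possibly custom, mode first, falling back to the base mode.

```ts
// Hypothetical footprint lookup keyed by rich mode, with a base-mode fallback
// (values are illustrative kg CO2 per passenger-km, not real factors).
const footprintByMode: Record<string, number> = {
  drove_alone: 0.26,
  shared_ride: 0.13,
  e_bike: 0.01,
};

function footprintFor(richMode: string, baseMode: string): number {
  // prefer the custom (rich) mode's footprint if one is defined
  return footprintByMode[richMode] ?? footprintByMode[baseMode] ?? 0;
}
```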
I needed to add code to handle deciding whether we use a custom dataset, and also reverted back to using the rich modes rather than the base modes. Production and the new dashboard now match more closely: if I select the same date range with the same opcode on the new dashboard in the emulator and on my phone on production, the carbon values now match when the mileages by mode match (I've been comparing those to compensate for date ranges getting picked differently). For example, 8/29 - 9/11 shows 21+28 on the new dashboard and 49 on production. The "taxi" values are hard to compare, since we now show the whole number rather than the "savings", to stay consistent with the meter. The "group" values still vary a lot between dates, and between production and the new dashboard, so I'll dig into those more next.
I've stepped through a group calculation, and confirmed that the custom footprint is now used for the group as well as individual users, and that the metrics are averaged by dividing the total distance for a mode in a day by … "Taxi" values seem to align well, as [week total on new dashboard] + [taxi savings on production] = [if all taxi on the new dashboard] over a given week, which is what we would expect.
New dashboard is now merged into
A couple of months ago, we discussed this in #922 and created some wireframes.
Now that Abby and I are implementing this Dashboard rewrite in e-mission/e-mission-phone#1018, I am starting a new issue to continue discussion.
These are the wireframes from #922, copied here for convenience:
Here are some initial drafts of the implementation:
Some things to note:
There are a few things we need to stop and consider.
Daily / Weekly / Monthly / Yearly interval
The old Dashboard has options to change the basis of time on which these metrics are represented:
Concretely, what does this do?
Do we need to support it? Why not just fetch the data on a daily basis and segment it into weeks / months that way? Does that put extra stress on the server?
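For example, if we only ever fetched day-level metrics, client-side bucketing could look roughly like this (a sketch assuming Luxon, with an illustrative `DayMetric` shape, not our actual server response format):

```ts
import { DateTime } from 'luxon';

interface DayMetric { date: string; value: number; } // date as 'YYYY-MM-DD'

// Bucket daily metrics into ISO weeks on the client instead of asking the
// server for a weekly aggregation.
function groupByWeek(days: DayMetric[]): Map<string, number> {
  const weeks = new Map<string, number>();
  for (const day of days) {
    const dt = DateTime.fromISO(day.date);
    const key = `${dt.weekYear}-W${dt.weekNumber}`; // e.g. "2023-W36"
    weeks.set(key, (weeks.get(key) ?? 0) + day.value);
  }
  return weeks;
}
```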
Active minutes
The wireframes showed active minutes per day as a chart. However, the CDC recommendation is on a weekly basis (150 minutes of moderate intensity).
I think that weekly goals are generally more appropriate for this, so I am suggesting that we pivot to a simple comparison of "past week" vs "previous week", each of these with stacked 'walk' and 'bike' totals.
Then, we can put the target line at 150 and visually see whether your cumulative active minutes reached 150.
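A rough sketch of that card using chart.js with chartjs-plugin-annotation; the weekly numbers are made up.

```ts
import { Chart } from 'chart.js/auto';
import annotationPlugin from 'chartjs-plugin-annotation';

Chart.register(annotationPlugin);

const config = {
  type: 'bar' as const,
  data: {
    labels: ['Previous week', 'Past week'],
    datasets: [
      { label: 'Walk', data: [90, 110] }, // illustrative minutes
      { label: 'Bike', data: [30, 55] },
    ],
  },
  options: {
    // stack walk + bike so each week is one cumulative bar
    scales: { x: { stacked: true }, y: { stacked: true } },
    plugins: {
      annotation: {
        annotations: {
          // the 150-minute CDC target line
          target: { type: 'line' as const, yMin: 150, yMax: 150, borderDash: [6, 6] },
        },
      },
    },
  },
};
```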
Then, I think we should have a separate card to the right of this (reachable by swiping in the carousel) that breaks this down by (i) high-intensity, (ii) moderate-intensity, (iii) low-intensity.
Average speed
We receive average speed metrics from the server. These appear to be the average speed per mode per day.
So if on Monday I walk to the bank at 4 mph and return at 2 mph, my average for Monday is 3 mph.
Then on Tuesday, I walk to the store at 3 mph and return at 5 mph - my average for Tuesday is 4 mph.
But to get my average across both days, I don't think we can just take those two figures (3 mph and 4 mph) and average them together to get 3.5 mph.
Because what if each walk to the store took 20 minutes, while each walk to the bank took only 10 minutes?
Then mathematically, my average speed across those days was not 3.5 mph; it was greater than that, because more of my walking time was spent at Tuesday's faster speeds.
The proper way to calculate my average walking speed for Monday and Tuesday is to take my total walking distance on those days divided by my total walking duration on those days.
Those are two metrics that we already have - so I don't think we actually have any use for the speed metrics that we get from the server.
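To make that concrete, here is a sketch of the duration-weighted calculation using only the distance and duration metrics; the `DayWalkMetric` shape is illustrative, not our actual metrics format.

```ts
interface DayWalkMetric {
  distanceMiles: number;
  durationHours: number;
}

// Overall average speed = total distance / total duration, which weights
// each day by how long was actually spent walking.
function overallAvgSpeedMph(days: DayWalkMetric[]): number {
  const totalDistance = days.reduce((sum, d) => sum + d.distanceMiles, 0);
  const totalDuration = days.reduce((sum, d) => sum + d.durationHours, 0);
  return totalDuration > 0 ? totalDistance / totalDuration : 0;
}

// Example from above: Monday is 1 mi in 20 min (3 mph),
// Tuesday is 8/3 mi in 40 min (4 mph).
const days = [
  { distanceMiles: 1, durationHours: 20 / 60 },
  { distanceMiles: 8 / 3, durationHours: 40 / 60 },
];
console.log(overallAvgSpeedMph(days).toFixed(2)); // ~3.67, not the naive 3.5
```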