From afc1eacf01c5374a1a55cdd6ae00f82fa291d4b8 Mon Sep 17 00:00:00 2001
From: Ashley Claymore
Date: Mon, 21 Aug 2023 13:32:37 +0100
Subject: [PATCH] Add notes for July 2023

---
 meetings/2023-07/july-11.md | 1314 +++++++++++++++++++++++++++++++
 meetings/2023-07/july-12.md | 1479 +++++++++++++++++++++++++++++++++++
 meetings/2023-07/july-13.md | 1243 +++++++++++++++++++++++++++++
 3 files changed, 4036 insertions(+)
 create mode 100644 meetings/2023-07/july-11.md
 create mode 100644 meetings/2023-07/july-12.md
 create mode 100644 meetings/2023-07/july-13.md

diff --git a/meetings/2023-07/july-11.md b/meetings/2023-07/july-11.md
new file mode 100644
index 00000000..534ed5a6
--- /dev/null
+++ b/meetings/2023-07/july-11.md
@@ -0,0 +1,1314 @@

# 11 July, 2023 Meeting Notes

-----

**Remote and in person attendees:**

| Name | Abbreviation | Organization |
| ------------------- | ------------ | ----------------- |
| Waldemar Horwat | WH | Google |
| Bradford C Smith | BSH | Google |
| Jack Works | JWK | Sujitech |
| Daniel Minor | DLM | Mozilla |
| Eemeli Aro | EAO | Mozilla |
| Michael Saboff | MLS | Apple |
| Ashley Claymore | ACE | Bloomberg |
| Peter Klecha | PKA | Bloomberg |
| Jesse Alama | JMN | Igalia |
| Jonathan Kuperman | JKP | Bloomberg |
| Daniel Ehrenberg | DE | Bloomberg |
| Rob Palmer | RPR | Bloomberg |
| Philip Chimento | PFC | Igalia |
| Samina Husain | SHN | ECMA |
| Istvan Sebestyen | IS | ECMA |
| Linus Groh | LGH | Invited Expert |
| Ben Allen | BEN | Igalia |
| Nicolò Ribaudo | NRO | Igalia |
| Ujjwal Sharma | USA | Igalia |
| Chip Morningstar | CM | Agoric |
| Lenz Weber-Tronic | LWT | Apollo GraphQL |
| Martin Alvarez | MAE | Huawei |
| Willian Martins | WMS | Netflix |
| Sergey Rubanov | SRV | Invited Expert |
| Chris de Almeida | CDA | IBM |
| Michael Ficarra | MF | F5 |
| Luca Casonato | LCA | Deno |
| Kevin Gibbons | KG | F5 |
| Ron Buckton | RBN | Microsoft |
| Christian Ulbrich | CHU | Zalari |
| Tom Kopp | TKP | Zalari |
| Mikhail Barash | MBH | Univ. of Bergen |
| Jordan Harband | JHD | Invited Expert |

## Introduction

Presenter: Ujjwal Sharma (USA)

USA: Thank you for having us in your city. A quick introduction. We are here in person; CDA will be [inaudible]. Justin, who is one of the facilitators, and Brian make up the rest of our facilitator group; they are on the – but yeah.

USA: Make sure that if you are here in person, or if you are part of this meeting, you have signed in; the entry form is important for us to track our registration. Before we go ahead, I would like to remind you about the TC39 CoC, and there is a – which you should reach out to if you have anything to report. You will be hearing more about this within the course of this meeting, but yeah. Please keep this in mind. Regarding the schedule for the upcoming days: 9:20 is when this area is accessible, [inaudible] breakfast; the meeting will start every day at 10. There is lunch mid-day at 12. The meeting will resume as usual. The end times are flexible. But they’re not the only [inaudible]. There are a few special events that might be interesting for you.

USA: The first one is today. It’s the newcomers event, a sort of Q&A session here. Please meet us for this – the next [inaudible] The next is the workshop on language design [inaudible] we hope to repeat . . . [inaudible] and amazing job organizing everything, also the upcoming workshop.

MBH: The workshop will have three presentations: one about introducing generics in Fortran, one about co-evolution of languages and their Integrated Development Environments, and the third one about formalization of the programming language standardization process, followed by a discussion with the delegates about that.

USA: Thank you so much for organizing this. Tomorrow we will have the [inaudible] social dinner. It’s a 5-minute walk from here. [inaudible] We will go over the tools we use again.
Remember, though, we are hybrid; there are a number of our co-workers who are signing in [inaudible], so keep the conversations on topic and use the queue. Microphones are in the room and always on, so they might pick up what you say; please try to reduce side conversations. And feel free to inform us if there’s something wrong with the communications.

RPR: Yes. We have five mics around the room. [inaudible] we ask you to pull it nearer.

USA: And a quick overview of TCQ. You go to the queue for the current item; it shows the item, the topic, and who is speaking right now, and you can add yourself to the queue using any of these buttons. They are [inaudible] a point of order, for example, when something . . . like an emergency or note-taking; a clarifying question would be clarifying the existing topic; then you can discuss the current topic, add yourself to the queue, and add a new topic. The view for speakers is slightly different in that there’s a button that says "I am done speaking". Please use this button whenever you’re done speaking, so we don’t have to do that manually. And last, we have a Matrix chat for discussing [inaudible]; you have been using the TC39 delegates chat for discussions, and for all topic items you can feel free to chat about that. And there’s a TC39 space that contains all of these, including a check [inaudible], but we also have a social chat for the people in person in Bergen, so that’s also in the space. Regarding the IPR policy, we are all either member delegates or invited experts. If you are an invited expert and you have not signed the DG form, please do. There are further details for everyone in the contributing document in 262. But please make sure that you are aware of the IPR implications of participation. Then we have the notes: right now we have the transcriptionist helping out with the notes. We require your assistance to fix up the notes while they are being typed up. So I guess let’s do a call for note-takers later.
Quickly talking about the next meeting, which is also hybrid and is being hosted by Bloomberg in Tokyo. That’s exciting. Remote attendees, note that it will run on Japan Standard Time. So exciting. And okay. So now let’s move on with the housekeeping. I think we can move on with the secretary’s report, if ready. All right.

RPR: Just to confirm... any objections to the past meeting’s notes being published? No. And any objections to the current agenda? No again.

## Secretary’s Report

Presenter: Samina Husain (SHN)

- [slides](https://github.com/tc39/agendas/blob/main/2023/tc39-2023-032_Final.pdf)

SHN: Thank you very much for the excellent organization to our host at the university. We have already started off well. It’s a pleasure to meet everybody. I see the names online; this is my second meeting. So I am Samina Husain from the Secretariat. Together we will do a presentation today, and I think there are changes in how we do the presentation and information, and feedback. Let me go through this a bit to give you an overview; happy to take questions. All right. IS is next to me, and we have a few changes to the agenda. There’s a bunch of information in the annex. Read it for the areas that you’re interested in. What I will do is go over the first few bullets. What I have not mentioned is some of the work that’s going on in the background with other members, with whom we are trying to develop more projects and look for new work items; this is where your feedback and input is important. And also, how we can support the TC39 team.

SHN: So first an update on this slide. Next slide. So you see that the approval has taken place. We had the GA, the general assembly, on the 26th and 27th of June, two weeks ago, in which the two standards listed below, ECMA-402 10th edition and ECMA-262 14th edition, were approved. Congratulations and thank you for your efforts. I believe that for ECMA-262, if there are any changes, let us know as soon as possible so they can be adopted.
We understand that it is minor, but thank you. For a new Ecma member: I believe that Oramasearch is online, so welcome. We have a new member active in the group. They have signed the membership and release forms; they’re all available in the documents. They are a new company, founded in 2023. They are also a part of this group.

SHN: This is a little bit of an update. I have talked to IS about putting the minutes together. One thing that is important is the short summaries. Each of you presents your topics in specific areas that are relevant. Many of these topics move on to stages 1, 2, 3 and 4. The minutes need to address them well. If you can, please do the summary and the conclusions at the end of your presentation, your topic discussion; it helps us to make sure we have accurate minutes. It’s been working well in the past, and I will start to do this more myself. We were missing a few contributions; with them I can also produce accurate reports, which is beneficial for you also.

SHN: A topic that comes up a lot and will continue to come up, and we will find a solution (I think there are steps forward to finding that solution), is that for the ECMAScript standard we do need to have a clean PDF version, a nice one. We have had some discussions on how best to do it. You have been supportive in helping us get to that stage. There are rules and guidelines that we need to follow for Ecma International. So it would be helpful to provide a solution. AWB has provided some information. AWB will not be able to do it in the future, which is next year. I believe there will be conversations with one of the members of TC39 who has already tried to do something. But it would be good if we could find a solution. And if it’s not the one AWB has suggested, perhaps you have a better one to bring, and we will review that. Something to think about, and to look for a solution in the future. So I will sort of touch on this as we move on.

SHN: This is a reminder.
The next slides are a reminder of where we are with the five-year review on the fast track, and I can ask IS to help with this. This is new to me; I’ve been here three months. The five-year review process is relevant until the end of September. Maybe, IS, you can give more information on that.

IS: We have two standards. One is for JSON and that is encouraging. This has been confirmed at the ISO level; that’s already through. That went through on the 6th of March this year. But the other one, and this is quite important for us, is the two or three pages long ECMA-414. This is the ECMAScript Suite. This contains normative references to our latest ECMAScript standards, ECMA-262 and ECMA-402. They can stay royalty free in the Ecma territory. The problem is that they have only a RAND patent policy, and so if we fast track an RF standard to ISO, the standard loses its RF status there. This is one of the problems that we had to deal with; the other one was the speed of fast track approval on the ISO side. If you are fast-tracking in ISO, they cannot follow the 1-year cycle we have; they need more time. So this “trick” was found, with the RF normative reference in Ecma. I think this was in 2016 or ’17, and now the five-year review is on. It is very important for us that we can continue with this, also for the next five years.

IS: Now, the whole thing is on the agenda for ISO/IEC JTC1 SC22, and on the second of September this year the vote on the ECMAScript Suite reconfirmation ends. And the question: if you have connections to the ISO national bodies who are working in SC22, please try to tell them this is a good thing, et cetera. In Switzerland we have done this, and now one of the national bodies is made aware of it. So you will get one positive vote from them, but it would be nice to get it also from the UK, from North America, Canada, et cetera. If you have any sort of connections, please use them. So that was it.
SHN: Thank you. So I am going to jump back to the first slide that I have, because that is what I want to go through. We talked about the approval of your standards, which is excellent, and that’s moved on. We talked about the new member. The short summary on the contributions for the minutes, as a reminder. We talked about the PDF and the voting. I would assume that the larger organizations typically do have a relationship with their national body. And if that can be something to access through the head of regulatory or legal affairs, to vote positively for the five-year review, that is great. September 2 is the deadline.

SHN: In the annex is the list of relevant documents for TC39 and the GA. You can access them through the chairs, or all TC39 documents. I will speak to one of the documents to give you an update. There is the status of the TC39 meeting participation, so you can see how it’s been going. Also, the download statistics. I would like feedback on what better information you would want to see regarding the downloads and statistics. Also listed in the annex are the next meetings, not only for TC39 but also for the GA, because it is important that you know those milestones if you need certain approvals. The next ones are in October and December. We know that’s an approval process. The discussion points and the chairs are on there. But for your information, those are in the annex and those slides are available. I will pause right here. Are there any comments or questions on what we discussed so far?

DE: I want to repeat IS’s and SHN’s call for votes to encourage the ISO renewal of ECMA-414. At the same time, previously it’s been noted in this committee that most delegates don’t have such connections to national bodies. How much do we see the renewal as being at risk, relative to the level of investment needed to develop those connections? Is this an urgent risk?

IS: I can only answer about what my feeling is. So the JSON standard is very, very popular also in ISO.
It was much, much easier than I thought. But I am also positive that this one too will go through. So I am absolutely not negative. I think it will go through. That’s my personal feeling.

DE: Great, thank you.

SHN: If I could add to that. Yes, IS’s assessment of that is positive. But we know that ISO can make changes over time. We should find out how relevant it is, so as to be prepared in the next cycle, which could be five years; we should be aware of where our risks are. But still, we are positive this will go through. And to answer your question, maybe we need to investigate that.

MF: I was a bit confused on the topic about summaries. You mentioned that we do now record conclusions at the end of each topic. Can you clarify what additionally you expect from us?

SHN: There were two areas of summaries that were missing. So I would say 80% of the summaries and conclusions are there and incorporated into the minutes, and some are missing. We aren’t as expert as all of you at making up a summary. We are missing some.

MF: We should be more diligent about that.

IS: Yes. Another thing: for one presentation we were not able to collect the slides. We always look on the official TC39 website to collect all of the slides of the presentations, and also any other relevant information that is related to the contribution; we put it onto the official TC39 file server in Ecma. For one presentation, I was unable to get the slides. So it would be nice if we had it all complete. The other issue: I had to write the summaries myself for some of the presentations (because of the lack of them); I hope it did not turn into a disaster…

MF: Were you sourcing the slides from the notes or agenda?

IS: The slides from the agenda. The summaries and conclusions from the notes. And generally it worked well. But sometimes we didn’t get the one-paragraph summary of the presentation for a topic. So generally, we were happy.
DE: In the last meeting, some of the summaries were sufficient. Maybe we should consider merging them, or making key points and conclusions part of the summary, as I initially proposed. In any case, if the secretariat has questions about a summary, or trouble finding slides, please report this on GitHub. The vast majority of people who read the TC39 notes do so via the GitHub repository, if the notes are reflected there; there’s potential for them to be widely distributed. I encourage you, for slides, [inaudible] not only to members of Ecma, that you do work to archive slides. It would be good, in GitHub instead of a file server. And also, if the summaries can be posted on GitHub, that would be very useful, if you need to write summaries in the future. You can also ask us to do summaries.

IS: The message, “on purpose”: the summary is important information for us. Yes. It would be very good, you know, to include this somehow.

DE: I think if we switch the format to say summary, with key sections in that, key points and conclusion, that would be much more clear. But we can talk about this later. We don’t have a specific format right now.

SHN: Thank you for that. Making that more concise would be helpful, and if something is missing we will have more mechanisms to reach out to get it.

DE: GitHub comments to reach out on the notes PR, because we always put up the notes PR; that communicates the summary to you in the notes.

USA: All right. So that’s all for the queue.

SHN: What I don’t have on the slides here, and I just wanted to give a short update because I think it’s important . . . TC39 is our most active and biggest technical committee, I think you all know.
This is very important to us. I also need to look at Ecma in a broader scope, and one of the activities I have been trying to do (there are many that I have been trying to do for the last three months) is to reach out to see if we can find other organizations to bring in projects that need to go towards a standards channel, where Ecma can bring the platform and have a, I think, flexible way to bring out the standard, looking at the work from the technical committee here. An example is the TG4 that will be discussed later in these meetings and that is looking forward to being formed.

SHN: Also, about new members. I mean, we are a members-based organization; that’s how we get funding, so more members is also relevant. I am doing a bit of outreach and understanding the landscape. I have initial meetings set up, and you may have been aware that Ecma has contacts with the – there are no decisions, nothing different planned. Just a conversation. It’s not starting up. It’s got to be different. Potentially, another meeting at the end of July to see if we can open the conversation by just bringing one project into Ecma from the foundation, which may be interesting for the members here and bring some new members. In addition to that, I had some conversations with DE about other opportunities. I am asking you to give me feedback on where it’s beneficial to bring these thoughts. That’s one space of Ecma to take further.

SHN: Another space of Ecma that I am supposed to be involved in is the technical committees we do have, making sure you have the tools and are able to move forward. I also want feedback there. I think there are things we are doing that may not be enough. Maybe it’s not the right things we are doing. But I need the feedback to work better within my boundary conditions and what we have, to give you that support. And the third space I will be very involved in is running the secretariat. I see all of you are members.
You are involved at the technical level, but there is a secretariat in the background ensuring the documents are available and printed, and that we have the relationships with the different bodies like ISO, ITU and others. We have a small team: Patrick Luthi, Patrick Charollais, and Isabell Watch; there’s Istvan, who supports us, and others. That’s also something I need to look at. Just bear with me over the next months as I take this up. Importantly, I need your feedback: what do you want from the secretariat's report, what is important for you to hear? Take the time to speak with me here, on the queue, or over email; I have all the channels. Just let me know. If I know, I can better address your needs. Okay.

EAO: Is the conversation that’s now happening and upcoming with the LF with the whole of the Linux Foundation, or some part of its activities?

SHN: The conversations are with the overall management of the Linux Foundation, but my proposal to them was that what they discussed in the past may not be the way to move forward. So with one project we can just test and see how to work with each other. I don’t have the project yet, and that’s my hope for the next meeting: is there one?

DE: I wanted to note that one area we have been looking into at the Ecma level is cloud computing, and in particular standards that can help guide regulations around cloud computing. Another area I have been looking into is software supply chain security, and deepening collaboration with W3C and the Unicode consortium. If you are interested in any of these, let us know. And if you have other ideas for projects, or ideas for how Ecma could support TC39, yeah, put it on the queue, or let her or anybody else know afterwards. Thanks.

SHN: Thank you for that feedback, DE. DE brought up a meeting for cloud computing. We had to cancel that for other reasons and we will reschedule it.
If there is a subject that TC39 sees, we can do workshops or conversations to develop it further and get the idea going. So I am very open to that. We can do so much virtually and in the face-to-face meetings for TC39, but maybe we can think about a conversation for the meeting in Japan. That gives us time to develop it and take the next step.

USA: All right. Thank you to our secretaries. That’s all for this item.

### Summary

The slides were reviewed; delegates are encouraged to read the documents of interest as noted in the Annex.
Congratulations: Standards approved by the GA on 27 June and posted on the website:

- ECMA-262 14th edition – ECMAScript® 2023 Language Specification
- ECMA-402 10th edition – ECMAScript® 2023 Internationalization API specification

- For any editorial changes to the two approved standards, please advise ASAP to the Ecma Secretariat (Samina Husain, Patrick Charolais, Allen Wirfs-Brock, Istvan Sebestyen).
- New TC39 member Oramasearch approved, announced and welcomed.
- For the TC39 meeting contributions, the summary notes from each contributor are very relevant and are requested to be added to the meeting notes. These summary notes ensure accurate meeting minutes.
- For the ES2023 «Nice PDF» version, next steps will be taken to find a solution for 2024.
- Status & Reminder: JTC1 periodic review of fast-tracked TC39 Standard ECMA-414 (ECMAScript Suite). Please vote, if your organization is engaged through your national body.

In addition to the slides, a short update was provided on the broader scope of activities which the SG has been involved in over the last three months, such as reaching out to and exploring other organizations, e.g. W3C and the Unicode consortium, in order to bring in projects that need to go towards standards, where the Ecma platform can bring value.

Initial meetings and contacts with the Linux Foundation (LF) have taken place exploring how to work together; at this time no decisions have been made. Potentially another meeting at the end of July.

TC39 committee feedback is requested: what items are relevant for the secretariat's report? Where can Ecma collaborate and build partnerships?
## ECMA-262 status updates

Presenter: Kevin Gibbons (KG)

- [slides](https://docs.google.com/presentation/d/1v5pcXHdJDtTj1_q9fncoUnY2GezpvLOKAYEhxkyjMc8)

KG: We have had a small number of editorial changes that we considered worth calling to the committee's attention. The first is this refactor from Justin Grant (JGT) of the spec's handling of TimeZone identifiers. This shouldn't affect anyone directly, but in preparation for both Temporal and Justin's TimeZone canonicalization there are some changes to the internal machinery around TimeZone identifiers.

KG: The second thing, #3058, is basically making it so that realms are associated with precisely one agent. Previously that wasn't explicitly specified, but sort of implied, because nothing actually works otherwise: if realms are capable of moving between agents it's incoherent.

KG: Discussion of changing the body font. Michael, do you want to talk about this?

MF: I don't think this is a big deal. But we are going to change the font to something that is a little bit more legible. We have also noticed that not everybody experiences the spec in the same way, because all of the fonts that were specified were system-specific: Windows users would see it differently from Mac users and so on. So we can unify how people see the spec and improve the legibility a bit. Trust me, we reviewed some fonts and this is the best we could come to an agreement on. But you mostly probably will not notice or care.

KG: Yeah. So concretely, the current body font is on the left. We will be switching sometime soon to the font on the right. As Michael (MF) says, you probably won't notice much difference, but do note there is a slash through the `0` on the right-hand side. Hopefully that will be a little nicer. If anyone objects to this, please let us know, but otherwise it will switch sometime soon.

KG: Okay.
Then we have landed a surprisingly large number of normative changes since the previous meeting, including the `v`-mode regexes, well-formed Unicode strings, `Atomics.waitAsync`, and a limit on the size of ArrayBuffers. Then this last thing was technically normative, but a bug fix: hasCallInTailPosition wasn't defined for import calls, such that if you were returning an `import()` it was considered a tail call, which obviously it isn't. We didn't come to the committee for consensus on that because what was there previously didn't reflect committee intent. And then I don't think we have had notable changes to the upcoming work for a while, so I am not going to talk about it. Something I also have not captured here: in addition to generally increasing consistency in terminology, we are hoping sometime soon to start documenting the terminology better. Some of it is automatically enforced, but a lot of things are just processed knowledge or institutional knowledge of how it's done that is not written down yet; we are hoping to do that soon. And the last thing is to note that ES2023 was approved by the Ecma GA, as mentioned previously. So that's all we have for the editor update. Anything from the queue? It doesn't look like it. So . . . okay. Thanks for your time.

DE: Okay. Let's do a call for comments. Do any of you have any comments?

USA: I can briefly mention that I really like the IBM Plex family. Thank you for choosing that. All right, then.

### Summary

An ECMA-262 update was provided. It was confirmed again that ES2023 was approved by the June 2023 Ecma GA.

TC39 expressed its satisfaction with the work of the ECMA-262 editors.

Work on ES2024 continued; there were a small number of editorial changes that were considered worth calling to the committee's attention.

### Conclusion

TC39 took note of the report.

## ECMA-402 Updates

Presenter: Ben Allen (BEN)

BEN: Cool. All right. I have got a very, very short slide show.
We have just a handful of smaller editorial updates, including one not on the slide set. Am I visible on the shared screen? Sharing problems. It's probably best to go straight through them instead of sharing. We have a couple of different editorial changes . . . The one that is more meaningful is the explanatory note on usage of the search collation. There is a type of collation used for string searching only; it has different rules for strings with diacritical marks, and we added a note to say it should not be used for sorting, since it's not guaranteed to produce any particular order. Then there is a really small one: as part of cleaning up, we are regularizing all of the references to UTS 35 and added references or detailed links. And one that's not on here: we also merged one more, PR 779 on ECMA-402. We just added a note saying that we update to reflect UTS 35 and CLDR on an ad hoc basis, stating the practice we had already been following.

### Summary

The editors had just a handful of smaller editorial updates on ES2024 ECMA-402.

The committee was pleased with the work of the ECMA-402 editors.

### Conclusion

TC39 took note of the report.

## ECMA-404 update

Presenter: Chip Morningstar (CM)

- no slides presented

CM: ECMA-404 is stable and boring as usual. Yay!

### Summary

ECMA-404 is stable as usual. No news to report.

### Conclusion

TC39 took note of the report.

## test262 update

Presenter: Philip Chimento (PFC)

- no slides presented

PFC: As in the update I gave at the March plenary, we don't have a lot of maintainer time available. But we have gotten some good contributions since the last time from all of you delegates and community members, and I think the bottleneck right now is having good review on the tests. In the maintainers group we are getting to large reviews slowly, but smaller pull requests tend to get reviewed more quickly.
In general, I think things are moving along with tests for Stage 3 proposals, but I would love it if the experts could help out with reviewing tests for them; that would be greatly appreciated. We have some good contributions open right now. Since the last meeting, we merged tests for iterator helpers, thanks to MF and KG as well.

PFC: We have a pull request open for explicit resource management, which we'd appreciate some help with review on. There's a pull request open for TimeZone canonicalization, and we'd also appreciate help reviewing those.

### Summary

Since the update at the March plenary, test262 is still lacking maintainer resources. But there have been some good contributions since the last time from delegates and community members, and the bottleneck right now is having good review on the tests.

Tests for Stage 3 proposals are moving along. Help with review from delegates and experts would be appreciated.

### Conclusion

TC39 took note of the report.

## ECMA-402 needs-consensus PRs

Presenters: Ben Allen (BEN), Ujjwal Sharma (USA)

- No slides presented

### needs consensus: [ecma402#786](https://github.com/tc39/ecma402/pull/786) Raised minimum/maximum fractional digits from 20 to 100

BEN: One smaller one that we did: we have raised the limit on the number of fractional digits from 20 to 100. That was to harmonize with 262; in our discussion, people said that there are use cases. I believe cryptocurrency was mentioned. The only change we are making is increasing the limit on the number of fractional digits from 20 to 100.

USA: Thank you. Still – yeah. This PR is sort of aligning the ECMA-402 restrictions with the 262 restrictions, raising the maximum fractional digits up to 100 for both the minimum and maximum options (that is, the maximum value of the minimum and maximum fraction digit settings). Thanks, Ben, for the PR. This has approval from TG2; it has been discussed within that group. We have a couple of normative PRs. Not all of them require attention right now.
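As a rough sketch of the #786 change (an illustration, not normative text): the `maximumFractionDigits` option previously capped out at 20, and the PR raises that cap to 100. On an engine that predates the change, values above 20 throw a `RangeError`, so the example below accepts either outcome.

```javascript
// Sketch of the behavior change in ecma402#786 (illustrative only).
// Engines without the change enforce the old limit of 20 and throw.
let result;
try {
  const nf = new Intl.NumberFormat("en", {
    maximumFractionDigits: 100, // allowed after #786; RangeError before
  });
  // minimumFractionDigits defaults to 0, so no trailing zeros are forced.
  result = nf.format(0.5);
} catch (e) {
  result = e.name; // "RangeError" on engines predating the change
}
```

After the change, `result` is simply `"0.5"`; the raised cap only matters for values that genuinely carry many fractional digits.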
### needs consensus: [ecma402#783](https://github.com/tc39/ecma402/pull/783) Added support for sentence break suppressions to `Intl.Segmenter`

USA: Another normative PR that requires consensus is adding support for sentence break suppressions in Segmenter. This has been discussed by TG2 and we have been working through it. It adds support for sentence break suppressions, which is a new extension for Segmenter.

### needs consensus: [ecma402#768](https://github.com/tc39/ecma402/pull/768) Normative: Reorder NF resolved option "roundingPriority"

USA: Also up for review is this reorder of resolved options. Just a quick reminder: the Intl constructors, the formatters and the one selector we have, all take a bunch of options, and they have a resolvedOptions method that gives you all the options of the existing object, which you can pass around or use in another way. The resolved options are ordered; it’s an object with a number of properties. "roundingPriority" was added recently, as part of the NumberFormat v3 proposal. However, since it was added towards the end, the "roundingPriority" option sits away from the other rounding options, which makes it a little bit harder to read. We bikeshedded on this for a while and decided to move things around to make it easier for programmers. This PR is by RGN. It has been discussed by TG2 and we believe it is a good idea. It still needs tests, so volunteers would be really helpful. But apart from that, this is a good change from our side.

### needs consensus: [ecma402#709](https://github.com/tc39/ecma402/pull/709) Read date-time options only once when creating DateTimeFormat objects

USA: This is a fairly old change, from André Bargull, who is a delegate from Mozilla and very involved in ECMA-402. Basically, when we take in the options object for DateTimeFormat objects, a couple of these properties are read in the constructor.
However, some properties are read multiple times, which is observable. This pull request cleans up all that logic, making sure that all the options are read exactly once. This would be a user-observable change, and therefore normative, but it would make things better. Again, this has been discussed by TG2 and put up for final plenary review. These are the four PRs that we wanted to talk about; sorry if putting them together made them slightly confusing, but I would love to hear your thoughts on them.

### Q/A

DLM: Thank you. So we support all the changes here with the exception of the Segmenter change. Sorry I didn’t notice this earlier, but because we’re using ICU4X rather than ICU4C for Segmenter, this is not immediately supportable by the implementation. I have reached out to the people working on the implementation but I haven’t got sufficient feedback to support this change. I agree with Michael: it seems it’s basically adding a new API, and it seems strange that it is a normative change rather than going through as a staged proposal. In summary, we can’t support the Segmenter change as it stands. I would be happy to discuss this further after this meeting and perhaps it can go in as a normative change in the future.

USA: Perfect. Thank you. We can discuss in more detail in the upcoming TG2 meeting.

SFC: Yeah. First, to respond to DLM’s comment: the intent with any of these extension keywords is that it’s up to the implementation to choose what to do with them. And this is a tailoring that is in the specification for sentence break. So if implementations can do something useful with such a flag, then they should be allowed to do something special when that flag is present. That’s the intent here. As with all parts of things involving locale data, implementations can do whatever they want with these flags.
They can use them or ignore them. There are other discussions we have had where we want to constrain the rules, and this is just one of those things: a bubbling-up of the flag into the implementation so that it can use it or not. Regarding the implementation side of this: I know very well, from talking with the people implementing this on the other side, that there are still several changes required in order for the implementation to be fully 402-compliant even, and one of them is adopting the tailorings, after which this feature comes for free. I don’t see it as an implementation challenge. One, it is optional; and two, it shouldn't be that hard and it should already be in there. I am also fine holding off, since it’s not a super urgent thing, if it helps to discuss it further.

DE: I want to agree with KG in the chat, that it’s better if we make fewer things optional. With Intl, this is fuzzy because we permit tailorings on purpose. But still, if we are thinking this is okay because “it’s okay if you don’t implement it,” that makes me think it’s better to wait for now.

DLM: I want to respond quickly to SFC’s point. Even if it’s standards-compliant and optional not to implement this, we could have web compatibility problems if two of the implementations support this feature and Firefox does not. I would be concerned about that. I would rather wait on this than introduce a potential web compatibility problem for us in the future.

### asking consensus for https://github.com/tc39/proposal-intl-numberformat-v3/pull/130

SFC: So I was informed by FYT that this is a change that went into the NumberFormat v3 proposal and was included in Stage 4. Unfortunately, it slipped through the cracks and didn’t get presented in this group. It’s very, very similar to PR #768, involving changing the order of the options reading.
So I guess you can say that in the Stage 3 version of NumberFormat v3, the options were being read in one order; this change made them read in a slightly different order, and #768 is the final order. One other change in this PR is that it also reads the options in PluralRules; this was a spec bug. We did discuss the problem space in the ECMA-402 TG2 call. We didn’t review the final pull request, except of course the editors reviewed it before it got merged. But I wanted to bring it up here, and I’m sorry for not getting it on the agenda sooner. Since we have #768 on the agenda, I wanted to make the committee aware. It has already shipped in ECMA-402 2023, but FYT wanted me to bring this up to be clear about the change.

USA: Thank you, SFC. Given the feedback, I will withdraw the normative request about sentence break suppressions from the call for consensus today. And SFC added one. Can we have consensus on the rest?

RPR: So you’re asking for consensus for everything except –

RPR: Anyone that supports the normative changes can speak. Any other objections to approving the other needs-consensus PRs? +1 from DE. I think we have heard support. And no objections. We have consensus.

USA: Perfect. Thank you.

### Summary

Four PRs needing consensus were presented based on TG2 work and findings.

### Conclusion

TC39 Plenary decided the following:
Consensus on #709, #768, #786 and intl-numberformat-v3#130.

'#709': when we pass an options object to the DateTimeFormat constructor, the property reads are user-visible yet irregular. This PR makes it so every property is read only once.
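The user-visibility of those reads can be sketched with a getter on the options bag; the logging here is purely illustrative:

```javascript
// Property reads on the options bag are observable via getters. Before
// ecma402#709, some options could be read more than once during
// construction; after it, each option is read exactly once.
const reads = [];
const options = {
  get year() {
    reads.push("year");
    return "numeric";
  },
};
new Intl.DateTimeFormat("en", options);
console.log(reads); // how many times "year" was read is observable
```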
TC39 Plenary decided against landing "[ecma402#783](https://github.com/tc39/ecma402/pull/783) Added support for sentence break suppressions to `Intl.Segmenter`" for now, as the optionality causes interoperability risk, and the feature isn’t supported by ICU4X, which is used by SpiderMonkey to implement segmentation under the hood.

TG2 is trying a new way of proposing 402 needs-consensus PRs. Based on plenary feedback, delegates can reach out to the TG2 chair or editors to improve processes within 402.

## Import Attributes

Presenter: Nicolò Ribaudo (NRO)

- [proposal](https://github.com/tc39/proposal-import-attributes)
- [slides](https://docs.google.com/presentation/d/1XKSeyxhCiSrzqJRqZ6ioYeqHh72oBHkd9izufPiRktY)

NRO: So, import attributes updates. We got consensus in March, on the condition of changing the wording for what “deprecated” is, and we still needed reviews for the proposal. Both have been done, and some changes came up during the reviews; they are normative, so they are being brought up today. The first one is that when using dynamic import, two different validations happen to the attributes: ECMA-262 requires that they are strings, and only known attributes are allowed, though the host could accept more. So there are two different validation passes. The old behavior was that ECMA-262 validated each attribute individually while reading its value. It was pointed out that it’s better to first read all the attributes and values, and then validate. We changed the behaviour accordingly: first read everything, then validate.

NRO: The second change is a syntax change. When it comes to static, non-computed keys, we already had a syntax production for this which allowed identifiers, strings, and numbers. In the proposal, while writing the spec, we had created a new syntactic production.
It was pointed out that we should reuse the existing grammar to make the language more coherent. So the normative change is that in the static import case, you can also use number literals as keys, and they get converted to strings as happens for objects.

NRO: The third change concerns the syntax restriction on a newline before the keyword. As you can see in the slide, bottom left, this was valid: an import statement, a new line, `assert` as an expression statement, another new line, and then a block containing a labeled string, which looks a lot like an assertion clause. So we need the restriction for `assert`. When changing the keyword from `assert` to `with`, which is already a reserved word, we can remove the [no LineTerminator here] restriction, but only in the `with` case. The proposal still contains the `assert` syntax, and for that we need to keep the restriction. And that’s it; these are the changes that came up since the proposal was last presented. Is there any objection to any of them?

JWK: (from queue) “I hope we can have true/false literals on the RHS of the attribute.”

NRO: The proposal only allows string literals as attribute values and nothing else, and it has been like this since the beginning.

DLM: I just wanted to raise some feedback from one of the SpiderMonkey team members, who mentioned this change to remove the [no LineTerminator here] for `with` while keeping it for `assert`. Looking at the issue, it seems like the support for this change was sort of lukewarm. I was wondering if this is something that you would be willing to reconsider, just to make the implementations a bit more straightforward?

NRO: The reason we made this change is that right now, the language only has this restriction where it’s absolutely necessary to avoid ambiguity. Most of the support for the change was based on this fact.

DLM: Sure. That’s fair enough. I wanted to raise that one issue, and we support the other changes.

DE: I support these changes.
As a person who had been working on the proposal, but who is leaving it up to NRO for now, this all seems good.

PFC: I support the changes, especially the change to make the reading and then validation of options consistent. But I would also like to ask that we document this convention somewhere, so that proposal authors writing new proposal text can have a reference to this convention and so that we can make sure it’s done right in the first place in new proposals.

USA: All right. That’s all for the queue. Do you want to conclude?

NRO: Okay. Well, there is still, I guess, the open topic about the [no LineTerminator here] restriction. Does anyone have a preference for whether we should keep the restrictions the same, or does anyone feel strongly about not having the restriction? Also, keep in mind that it’s possible to remove the restriction in the future if needed, but not possible to add it later. So, to address Mozilla’s concerns, we could keep the restriction for `with`, and then if one day we remove `assert`, we can remove the restriction when it’s no longer necessary.

DE: I was skeptical of removing the [no LineTerminator here] restriction, but I was convinced that it made sense once we fully confirmed that this doesn’t add any syntax ambiguity or risks. It seems more consistent with how the rest of import statements work to not have this restriction. And apparently it doesn’t make parsing more complicated.

EAO: We will, I think, have to revisit this topic later when we are hopefully able to drop `assert` completely. Could we keep the restriction and have it be symmetric for now, and then drop it when we update?

DE: I think it would be complicated to do this as a multistage thing where the syntax is one way and then another way.
It would mean tools and browsers shipping different syntaxes over time. We should decide on a syntax one way or the other and stick with the conclusion; even if we could loosen it later, we shouldn’t plan to go through multiple stages.

USA: All right. That’s all for the queue. NRO?

NRO: I guess we have views in both opposite directions.

DE: Do we have consensus on the change, or is it a significant change for you?

EAO: We don’t support this, but we don’t oppose it either.

NRO: Okay. I will ask for consensus for everything as presented in the slides, including the removal of the restriction for `with`.

NRO: The three changes are: first, read all attributes and values before validating; second, allow numeric literals as keys in attributes in the static import form; and third, remove the [no LineTerminator here] restriction for `with`.

DE: We have been asking for explicit support for consensus. We need support from the committee for these 3 changes.

USA: Nothing on the queue yet, but I would like to second that. Please add explicit support for this if you are in favor. Okay. We have explicit support from MF and ACE. And no (blocking) concerns. So you have consensus.

### Summary

Import attributes had the following changes in response to feedback given during stage 3 reviews and implementation:

- Clearly separate the "read attributes" and "validate attributes" steps in dynamic import, rather than interleaving them.
- Allow numbers as keys in import attributes in import declarations, for symmetry with other non-computed keys in the language.
- Remove the `[no LineTerminator here]` restriction before `with` in static imports.

There has been some discussion about implementation complexity due to the different `[no LineTerminator here]` restriction for `assert` and `with`, but the committee ended up still having consensus on removing the restriction (only for `with`), given that the restriction is only used elsewhere in the language to prevent ambiguities.
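A rough sketch of the three changes; the static forms are shown as comments because they are only valid at the top level of a module, and the specifier and attribute keys are hypothetical:

```javascript
// 2. Numeric literals are allowed as attribute keys in static imports,
//    converted to strings as with object literals (whether a host accepts
//    a given attribute is host-defined):
//      import data from "./data.json" with { type: "json" };
// 3. A newline is now permitted before `with` (but still not before
//    `assert`):
//      import data from "./data.json"
//        with { type: "json" };

// 1. In dynamic import, every attribute value is read before any validation
//    runs. The specifier here is hypothetical and the rejection is handled:
import("./data.json", { with: { type: "json" } }).catch(() => {});
```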
### Conclusion

The import attributes proposal is at Stage 3, having met the conditions expressed at the previous meeting for “conditional stage 3”. Consensus was reached on three normative changes for import attributes [listed above].

## Explicit Resource Management Stage 3 Update and Normative PR

Presenter: Ron Buckton (RBN)

- [proposal](https://github.com/tc39/proposal-explicit-resource-management/)
- [slides](https://1drv.ms/p/s!AjgWTO11Fk-Tko8bDqLrnYiAJRBw-Q?e=qImaQa)

RBN: Hello, everyone. I am Ron Buckton from Microsoft, giving an update on the explicit resource management proposal.

RBN: So, a quick status update on where things currently stand. In the March 2023 plenary we had consensus on the `await using` keywords for the async declaration form. We had consensus on removing the lookahead restriction for `using` that disallowed an identifier named `await`; it was there for the `using await` keyword ordering, which is no longer necessary, so it was removed. We had consensus on merging the async and sync proposals, and conditional advancement to Stage 3 pending a final cover grammar review by Waldemar. That was completed shortly thereafter, within several weeks, meeting the conditions for Stage 3. In addition, we have now merged the async and sync proposals into one repository. The proposal repository currently contains the sync version of the proposal; I have not yet updated it to include the merged specification text, but I will do that in the next couple of weeks.

RBN: In addition, a draft PR for test262 has been put together which should have fairly comprehensive coverage for both the `using` and `await using` syntax, the new symbols, and `DisposableStack` and `AsyncDisposableStack`. There are some implementations in progress. XS is shipping partial support, for only the sync portion of the proposal at this point.
I have a PR, currently in draft state, that adds full support to engine262 as well. And I am seeking feedback from other implementers as to what their plans are or when they are planning to look at an implementation.

RBN: In addition to engine implementations, TypeScript is now shipping support for `using` and `await using` with a downlevel emit; it went out last week. It requires a shim for `Symbol.dispose` and `Symbol.asyncDispose`. Babel 7.22 supports explicit resource management in its downlevel emit, and they do provide a shim for `Symbol.dispose` and `Symbol.asyncDispose`. On the runtime side, there is a new version of Node that provides a shim for the `Symbol.dispose` and `Symbol.asyncDispose` symbols and added support for (?) in advance of support coming from V8. There are a number of other shims available via npm that provide the symbols and add support for `DisposableStack` and `AsyncDisposableStack`.

### [PR180](https://github.com/tc39/proposal-explicit-resource-management/pull/180) - Ignore return value of `[Symbol.dispose]()` method when hint is 'async-dispose'

RBN: As part of the process of working on the test262 tests and putting together a full implementation in engine262, I found a couple of issues in the specification that need to be worked out. I believe they require normative changes and need consensus, although I think for the most part these are fairly straightforward. One question is whether we should ignore the return value of `Symbol.dispose` methods when an 'async-dispose' hint is passed to the abstract operations. This occurs for `await using` and `AsyncDisposableStack` use. When disposal happens, the spec currently treats `Symbol.dispose` like it was `Symbol.asyncDispose`; it uses the same behavior that we use for getting AsyncIterator versus Iterator when getting the method to call.
But the result is currently that when disposal happens, it will look at the return value from `Symbol.dispose` and, if that happens to be a promise, it waits for that to resolve. However, the sync behaviour of `Symbol.dispose` ignores the return value. Should we change the current semantics, which are represented conceptually here, to what is being proposed, which is to ignore the return value of dispose? In the linked issue, it has been discussed whether this is consistent with what for-of does with AsyncFromSyncIterator. I believe that it is and it isn’t, because that case acts as both an argument for and against: there are parts of AsyncFromSyncIterator that do not await. For example, the result of `next` is not awaited in AsyncFromSync, even though the value is awaited. That is a bit of a discrepancy, which means this could go either direction.

RBN: So I am currently looking for whether we would have consensus on this change. There is a topic I am not sure I am clear on.

CM: So as we were reviewing this, at Agoric we’re all fine with the substantive content of this PR. But a couple of people were uncomfortable using the term ‘hint’ in spec language when it’s driving something which is normative behaviour.

RBN: At the time I put this together, I believe I was basing the behaviour off of what GetIterator does, and I believe it was using the same term.

CM: Yeah. We are just making a gentle request that, if possible, different language could be used in place of the word “hint”.

RBN: `Symbol.toPrimitive` also takes a hint that drives behaviour in some of its cases and is otherwise ignored. So I think there is some similarity in how “hint” is used in spec language in those two cases, but I can understand the concern, and I think that’s more an editorial change.

CM: It is largely just clarity in the editorial text. It’s not a concern about the proposal.

DE: Yeah.
I agree that this is something that probably the editors should look at, given the usage across the spec –

CM: We noticed there are other uses that are misleading in the same way.

MF: I think that this should be left to editorial discretion, and we shouldn’t prescribe how this is specified.

USA: That’s all of the queue, RBN.

RBN: Yeah. So for this PR specifically, I am interested in consensus. I will state that these PRs have been up for over a week; they have not yet been reviewed, so there is some review that needs to occur as well. But given the general direction, I am interested in whether there is any opposition, or whether I can get consensus on this change for #180. Or would it be better to ask for consensus at the end of the rest of the discussion of these PRs?

DE: Can you go through and summarize all the PRs for consensus?

RBN: I will go through the rest of them and we can discuss them individually. There are other ones to discuss here.

USA: On this specific PR, NRO has strong support for this behavior.

NRO: `Symbol.dispose` is not meant to return a meaningful value. So if it returns a promise, that promise is not meaningful and should be ignored, as happens when it’s called synchronously.

KG: I support this PR inasmuch as its semantics are like `await undefined`, that is, that it consistently takes a microtask queue turn.

RBN: That’s the semantics I am requesting. So that makes sense. Anyone that is opposed to this?

DE: This seems to reduce expressiveness, because with the current semantics you could use `await using` to conditionally await, maybe. Probably that’s not a problem, given what KG said about how it will take a turn anyway, acting like `await undefined`. I don’t think we need whatever flexibility was given previously. I just wanted to note this.
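As a sketch, the proposed behaviour can be modelled with a hypothetical helper (not the spec's actual abstract operation): a sync `[Symbol.dispose]` invoked under an 'async-dispose' hint has its return value dropped, while still taking one microtask turn, like `await undefined`:

```javascript
// Hypothetical helper sketching the PR #180 semantics. When disposing with
// an 'async-dispose' hint and the resource only has [Symbol.dispose], the
// sync method's return value (even a promise) is ignored, but disposal
// still takes one microtask turn, like `await undefined`.
async function disposeResource(resource, hint) {
  let method =
    hint === "async-dispose" ? resource[Symbol.asyncDispose] : undefined;
  if (method === undefined) {
    const syncDispose = resource[Symbol.dispose];
    method = function () {
      syncDispose.call(this); // return value dropped, never awaited
    };
  }
  await method.call(resource); // wrapper returns undefined: exactly one turn
}
```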
RBN: This is the same as supporting sync iterators in `for await of`, to give broad coverage of what you can actually iterate over. This is primarily supported because you can, like any declaration list, have multiple `using`s in order in the same statement. And if we said that `await using` is only used with async disposables, then you would have to jump back and forth between `await using` and `using` in one block; whereas with this, you can just use the same form next to each other. The reality is that the underlying behaviour should act like a synchronous `using`, with the exception that, since you said `await`, there is an await at the end of the block, which is prior consensus. I don’t think having the return value from dispose do anything in either case is right; it’s essentially a bug. If you return a promise from dispose in synchronous code, you are never going to get anything from that value, and if it rejects, the rejection goes unobserved. Having it do something different in an `await using` statement makes it, I think, more confusing, and having it be ignored, as proposed here, is the clearer approach.

USA: There’s nothing else in the queue.

RBN: So I will take that as consensus on #180.

### [PR178](https://github.com/tc39/proposal-explicit-resource-management/pull/178) Move DisposeResources to Evaluation of FunctionStatementList

RBN: So, the next PR. There is an abstract operation called DisposeResources used in a number of places: basically, any time you exit a block scope. It is used in Block evaluation, in for-loop evaluation, and in ForIn/OfBodyEvaluation. Any time that you exit one of those block scopes, it needs to do resource cleanup and evaluate those dispose calls. When this happens at the end of a function body, it currently happens in 4 different places. It happens at the end of EvaluateFunctionBody.
But then it also happens in the abstract closures inside of GeneratorStart, AsyncGeneratorStart, and AsyncBlockStart, when you resume execution at the end of those evaluations.

RBN: There are unfortunately a couple of bugs in there that were discovered in the engine262 implementation. One: the dispose calls happen after the execution context is removed from the stack. Depending on how implementations handle how an execution context, agent, and everything are associated, that is problematic; you might not have the correct realm to respond with exceptions. Therefore, it’s better that this happens earlier. And there is a bug in the specification text in that you can pass GeneratorStart and AsyncGeneratorStart an abstract closure instead of a parse node, which occurs when you are doing CreateListFromIterator and a couple of other AOs. In those cases, when those generators and async generators are constructed with an abstract closure, they don’t set up the surrounding state that you need for the lexical environment, because they are not expecting to use the lexical environment. As a result, disposing of resources at the end will fail, because the lexical environment has not been established. Therefore, what I am proposing is a change to move DisposeResources into the evaluation of the FunctionStatementList parse node. Currently there is no specific callout for that; it falls through evaluation until it eventually gets to StatementList. But FunctionStatementList is shared by function bodies, async function bodies, and generator bodies. Therefore, that is the single place that all of them use which only works with parse nodes, and it is used when those lexical environments have been established. So it seems like a better place to do this.
With the exception of the bug around the execution context and how it affects the association with the realm, it would not otherwise be an observable change; but due to that possibility, I have listed this as a normative PR. So I would like to seek consensus on this change as well.

USA: We have a clarifying question.

NRO: If I wrote some code using the proposal as it was yesterday, does this change somehow affect the behaviour of the code, or is it a spec bug?

RBN: It should not change any code. The question is whether an implementation might have an issue due to trying to run code when the execution context that was associated with those resources is no longer on top of the stack, and what that means. In engine262, it caused a bug in the engine, which is why I needed to make a change in the implementation. Otherwise, it should not be observable to the end user.

NRO: Okay. Thank you. Then I support the change.

USA: We have DE. Would you like to add to that?

DE: Yeah. This change looks good to me. Good to fix bugs. We don’t have to conclude whether it’s normative or not; there are all kinds of disagreements about that.

USA: All right. I hear consensus, Ron.

### [PR175](https://github.com/tc39/proposal-explicit-resource-management/pull/175) Add missing calls to NewDisposeCapability AOs

RBN: This one is, again, not really user-observable, so it might not be considered a normative change, but it is worth addressing. This is essentially a spec bug. The context is that there is an AO called NewDisposeCapability. This sets up the disposal resource stack that gets added to when `using` declarations are evaluated and gets exhausted when the current block scope ends. It is currently set in the [[DisposeCapability]] slot of a declarative Environment Record in the NewDeclarativeEnvironment AO. However, function and module Environment Records inherit from declarative Environment Records in the hierarchy.
However, they have their own AOs for establishing those environments, and those are missing calls to NewDisposeCapability. The result is that sometimes things don’t work in functions. We only create a new declarative environment in certain cases; I think it has to do with when parameters are bound and whether there’s an `arguments` variable in the body, where we create a new lexical scope for the parameters, but we don’t always do that. So it’s a bit of a spec bug. I don’t know that this is going to affect code, but to cover the bases it was worth bringing up in plenary. So, is there support for this? Would anyone object to consensus?

USA: There’s nothing on the queue yet. There’s no opposition either.

DE: +1. The first one seems like an important bug fix, and you found and fixed it.

USA: Yeah. I think given that it’s a bug fix of this nature, it’s safe to say that, without any opposition, you’re good.

### [PR171](https://github.com/tc39/proposal-explicit-resource-management/pull/171) Correctly use hint in ForIn/OfBodyEvaluation

RBN: This normative PR is also a bit of a spec bug, a result of the merge between the async and sync versions of the proposal: the async version of the proposal didn’t have this implemented; the sync one did. So I want to make sure this is called out and clarified. In ForIn/OfBodyEvaluation, when we assign the binding for the loop variable, we make a call to InitializeReferencedBinding, and that call currently fails to account for `await using`; it only covers the normal `let` and `const`. It is supposed to use a hint variable that is assigned at the top of the algorithm steps, but after the merge it failed to account for that. This is observable in that `await using` in a for-of wouldn’t work currently, and this PR is designed to fix that oversight.

NRO: +1. This PR clearly does what we intend the feature to be; it fixes a spec bug.
I think we all agree on the semantics here.

RBN: This should match the agreed-upon semantics. It was an oversight in the specification.

### [PR167](https://github.com/tc39/proposal-explicit-resource-management/pull/167) Add missing `.prototype` property entries for DisposableStack/AsyncDisposableStack

RBN: All right. And the last normative PR that I have, I believe, is PR167. Currently the specification text is missing the introduction of a `prototype` property on the `DisposableStack` and `AsyncDisposableStack` constructors. These are designed and described in the same way all other built-in classes currently are. The expectation is that a `prototype` property would exist, just as it does for other constructors, and this is again a spec oversight to be addressed so these work the way they were intended to work.

RBN: So, is there any opposition to this change, or should I expect consensus?

DE: Yeah. Again, it seems like a good bug fix.

### Open questions

RBN: All right. So this leads to the last part of these slides. This is something that I was discussing with some committee members late last week, and based on a discussion going on in the issue tracker, I wanted time to discuss this. I will note that I am not currently looking for consensus on an outcome; I am trying to get feedback from the committee to determine what direction to go here. The open question is: how can we ensure developers use `using` over `const` for disposables? It’s easy to write `const` when you meant to use `using`. The position we achieved consensus on is that this is something to leave for linters or type systems, in a build step, to guide you towards the use of the `using` declaration over `const` when working with disposables. It has been brought up by committee members that this may not be enough. There is a partial remediation for this.
Consider native file handles, or any type of native resource that can’t itself be garbage collected or observed within JavaScript as being garbage collected – for example, Node.js’s `fs/promises` file handles. The native file handle has its own capabilities to make sure that if a file handle goes out of scope, is not interacted with, and is not reachable, then when it’s garbage collected it will close the handle for you. It is possible to do this in userland with FinalizationRegistry as a way to do cleanup. And it is a good practice for anything that talks to a native resource that, if someone forgets to dispose it, the native resource should have some fallback to be released when it’s garbage collected. In other languages that have this capability, that’s the best practice. In the ecosystem, it’s already implemented this way in many cases. But that’s essentially a partial remediation for native resources. The thing is, this may not necessarily be enough to catch the cases in userland for non-native resources that don’t already have these semantics. + +USA: First up we have a clarifying question by NRO. + +NRO: By “the host should ensure they are released”, should the host actually release them or not, or just throw or report a warning if the user forgot . . . + +RBN: I suppose that the slide is a little bit overstating the expectation here. In most of the documentation from other languages that have this capability, this is described as a best practice. I have a separate PR, which I did not bring up here at the moment, where we are looking to discuss whether or not it should be recommended that any type of cleanup be done behind the scenes, if possible. Normally, this is just a best practice because you don’t want to leak file handles, for example. So right now we don’t currently make any recommendations for this. But this, again, is a remediation that has been discussed before: if these native handles exist and are opened in some way, there should be some mechanism for those to be closed and not leaked.
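The fallback pattern described here can be sketched in userland. This is a hypothetical illustration, not part of the proposal: `closeNativeHandle` is a stand-in for a real native cleanup call, and `DISPOSE` substitutes for `Symbol.dispose` in engines that do not ship it yet.

```javascript
// Hypothetical sketch: explicit disposal is preferred, with
// FinalizationRegistry as a last resort for forgotten handles.
const DISPOSE = Symbol.dispose ?? Symbol("Symbol.dispose");

// Stand-in for a real native close; here it just records what was closed.
const closedHandles = [];
function closeNativeHandle(handle) { closedHandles.push(handle); }

const registry = new FinalizationRegistry(closeNativeHandle);

class FileResource {
  #handle;
  #disposed = false;
  constructor(handle) {
    this.#handle = handle;
    // The unregister token (this) lets explicit disposal cancel the fallback.
    registry.register(this, handle, this);
  }
  [DISPOSE]() {
    if (this.#disposed) return; // disposal is idempotent
    this.#disposed = true;
    registry.unregister(this);  // explicit path wins; no GC fallback needed
    closeNativeHandle(this.#handle);
  }
}

const file = new FileResource(42);
file[DISPOSE]();
file[DISPOSE](); // second call is a no-op
```

The GC path is deliberately a fallback: whether engines ever run the finalization callback is not guaranteed, which is exactly the reliability concern raised later in this discussion.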
+ +USA: We have a couple more topics on the queue. And a little under 20 minutes. So let’s go. + +RBN: I don’t want to spend time on the partial remediation; we can discuss that on the issue tracker. But to go back to this, the question is whether or not this is enough. Generally, many languages that have this capability don’t make a specific demand that you have to use a `using` declaration. In C# you don’t have to use a `using` declaration with something disposable. You may just assign it to a variable. You are doing that because you are either going to imperatively do cleanup, or you are going to create a larger disposable through composition that holds onto more resources until its lifetime is exhausted. Mandating `using` isn’t necessarily the best option in some cases. But it’s also perfectly feasible to forget to use the right declaration, especially without familiarity with the feature. We discussed alternatives, and one we looked into, which was being discussed last week, is a way to roughly emulate what Python does with its enter/exit magic methods in context managers. Python’s approach is one of the languages and features we reference as prior art within the proposal repository. The basic idea is that you can write an object that has a disposal method, and that is perfectly reasonable to use as a resource. But you could optionally opt into a more explicit form of resource management. When the declaration is evaluated or initialized, if the object has a `Symbol.enter` method, that method will give you the actual resource. This means the actual resource that you want to interact with is behind a symbol-named method, making it harder to get to. So if you were to say `const x = y`, `y` is not the thing you want to interact with. It is something that gives you the thing you want to interact with in that case.
So if you really wanted to use this resource in a manner that is consistent with a compositional approach, or you want to more directly or imperatively manage the resource, you would have to explicitly call into `Symbol.enter` to get the result. Mandating this as part of the proposal means that existing host APIs – like Node’s file handle, readable streams, or any DOM-based approach that already exists – would have to add a `Symbol.enter` method that returns `this` so that the existing API keeps working, or else introduce a separate API. Python has a default for this: their abstract context manager has a default `__enter__` that returns `self`. So the approach that we are considering is that, since the default behaviour would be to have something return `this`, we could make that the implicit behaviour if you don’t have `Symbol.enter`. The method itself is entirely optional. If that’s the case, this wouldn’t be a blocker to the proposal as it exists today, but could be a follow-on proposal: as we continue advancing through the stages process and look towards Stage 4, this wouldn’t be a blocker for Stage 4, because we could still advance lightweight dispose and come back to the enter mechanism in a follow-on. However, there is a concern about whether that would be potentially confusing in a declaration form like `using x = y`: `x` is not holding the value of `y`. It holds the value of `y[Symbol.enter]()`. This is something we are discussing in issue #159, about Python context managers and how they fit into JavaScript, which has additional background information. It’s a long thread with a lot of implementation details about alternate proposals; where this came up is more towards the end of that discussion. I would like to say that right now, I am mostly seeking interest in the committee’s appetite for exploring this, but not seeking advancement or consensus on this specific feature at this time.
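The lookup being described can be sketched with a stand-in symbol. `Symbol.enter` does not exist in the language; the local `enter` symbol and `enterResource` helper below are hypothetical illustrations of what `using x = y` would roughly bind under this idea.

```javascript
// Hypothetical: a local symbol standing in for the proposed Symbol.enter.
const enter = Symbol("Symbol.enter");

// Roughly what `using x = y` would evaluate: call y[enter]() if present,
// otherwise fall back to the implicit default of the object itself.
function enterResource(resource) {
  const fn = resource[enter];
  return typeof fn === "function" ? fn.call(resource) : resource;
}

// A context-manager-style object: the wrapper is not the thing you use.
const lockManager = {
  [enter]() {
    return { release() {} }; // hands back the actual resource
  },
};

// A plain disposable: no enter method, so it is its own resource.
const plain = { dispose() {} };

const lock = enterResource(lockManager); // the object returned by [enter]()
const same = enterResource(plain);       // the object itself
```

The second case shows the implicit default discussed above: existing APIs without `Symbol.enter` would keep working unchanged.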
I would like to take a couple of minutes – we would probably finish before the timebox has elapsed – and collect feedback from the committee on this possible approach. + +USA: All right. So the queue has remained from the last point; we can resume that first. + +DE: Yeah. Thanks for going through the queue. I don’t see how these two things are related, but they are both important things to discuss. First, back to the GC thing. I disagree with your recommendation that everything just be disposed of by GC automatically. Probably this makes sense as far as fault tolerance is concerned: to recover from an error where they failed to dispose of the resource, it’s good for the GC to dispose it. But from a developer perspective, it’s important that relying on this be discouraged, because it’s unreliable. So if we make a recommendation, I would prefer it be to log a message indicating the programmer’s error. I think NRO was getting at this in their comment. + +RBN: I brought this issue up and I put together a pull request on the issue tracker. And in the discussion there, other folks from the committee are also not certain this is something we should make a recommendation about. So I am most likely not going to merge that PR. As I mentioned before, in the C# implementation and in the C# documentation for the disposable interface, this is a good practice on their side. And there are built-in libraries, like a safe native handle, that are designed specifically for this case: basically a wrapper for a handle where you indicate how disposal works, so you can pass the handle around and have it explicitly or implicitly disposed, avoiding leaked handles. + +DE: It’s common for programming languages to add these things based on GC and then realize it’s a bad thing to do. So the fact that some programming language has it in its practices doesn’t seem like sufficient evidence to me. I am fine with not having a recommendation.
If we have a recommendation, I would like the opposite polarity. + +DE: Let’s go to the enter and exit slide. This is an interesting design. I am not opposed to this for the design of resource management. But I think that this should be part of the core proposal. If we want to seriously consider adding this, we should demote to Stage 2 and make this part of the initial proposal. We shouldn’t consider such fundamental ideas as an add-on later. It might be compatible to add later. Strictly speaking, it’s bad to add async operations later. Maybe it would be compatible enough. But yeah. This is a design that came up when discussing resource management with YK: he suggested that we have such an explicit enter step. I really do think that is pretty unrelated to whether we have GC clean up resources; regardless, you could allocate a resource and drop it on the floor. + +DE: Also, I want to note that we should be cautious about any comparison with Python, because its use of reference counting makes the lazy pattern of forgetting to use `with` (the equivalent of `using`) more tenable: the reference count more deterministically goes to zero. So just be cautious about the comparisons. But that doesn’t cast doubt on the protocol itself; enter would be a core part of the `using` protocol, and so we should decide whether or not we want to go with it up front. + +RBN: Well, part of the reason why I consider this as a follow-on – and it’s also listed in issue #49 – is that there are two sides to this. Dispose is more of a lightweight approach: a `Symbol.dispose` method doesn’t get any inputs and we don’t care about its output; it’s just something that happens when the block exits. The more comprehensive context manager approach, Python’s exit, is not the same as dispose. It is more powerful, because it gives you the ability to intercept the exception that has been thrown and possibly throw a separate exception or swallow the exception.
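The extra power of Python-style exit can be sketched as follows. The `enter`/`exit` method names and the `withContext` helper are illustrative only, not any proposal's API; the point is that a truthy return from `exit` swallows the exception, which is what makes control flow harder to reason about.

```javascript
// Sketch of Python-like context manager semantics (illustrative names only).
function withContext(cm, body) {
  const resource = cm.enter();
  try {
    return body(resource);
  } catch (err) {
    // A truthy return from exit() swallows the exception: the caller
    // continues as if nothing was thrown, which complicates control flow.
    if (cm.exit(err)) return undefined;
    throw err;
  }
}

const swallowsRangeErrors = {
  enter() { return {}; },
  exit(err) { return err instanceof RangeError; }, // swallow only RangeErrors
};

// A RangeError thrown inside the body never reaches the caller:
const result = withContext(swallowsRangeErrors, () => {
  throw new RangeError("swallowed");
});
```

A static analyzer looking at the `throw` inside the body can no longer conclude that the statement terminates abruptly, which is the concern raised next.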
That’s not something that I am particularly comfortable with, working on a static type system, because it makes control flow much harder to reason over. You can’t know whether or not an exception actually will exit a statement, because now exiting any block could potentially result in exceptions not bubbling out, and instead being swallowed and control flow continuing. I haven’t been comfortable with that and was not necessarily supportive of that as a core part of the proposal. I looked at full context manager support. I felt that was something much more powerful that should come separate from a more lightweight dispose mechanism. And enter and exit were paired together, and that’s why I considered them part of that. I felt that we could augment `using` with these capabilities, but it wasn’t something that I wanted to seek for this proposal, especially when we were cutting other features for this proposal to reach more of a minimum viable proposal that we could get to advance. + +DE: Yeah. Those sound like great reasons to not do that. I am happy with the current state of the proposal. + +RBN: I am also happy with the current state of the proposal. I bring this up only because there was feedback from other committee members who were interested in discussing this. + +USA: All right. There is a reply to this and also a number of new topics, but I have to say, we are at time – we have 5 more minutes for this item. If you wish to continue, the queue is quite big. I propose we record consensus for all the PRs you presented today, and continue with the topic later in the meeting. What do you think, Ron? + +RBN: I think that would be fine. + +USA: Okay. What about we continue with this topic and then save all the remaining topics for the follow-up? + +PFC: I wanted to add to what DE said: this could pose a problem for embeddings, because there may be one final garbage collection when the embedding is being shut down.
So if the disposer throws an exception, the machinery in the embedding that handles or logs the exception may be destroyed, or the disposer callback may refer to resources that the embedding provides that have already been shut down. So that’s quite problematic. I'd prefer we don’t do that. + +USA: All right. So . . . that was that topic. RBN, do you want to conclude for today and resume later this meeting? + +RBN: Yeah. That’s fine. I had nothing else besides this in the slides. + +USA: All right. Great. Thank you. Would you like to say some final conclusions? + +RBN: Sorry. I stopped sharing. So yeah. From my understanding, the consensus was on the PRs that were provided in the slides, those five specific ones. We will continue the open-question discussion when we have some time in overflow. + +RBN: The first was PR #180, ignoring the return value of `Symbol.dispose` and `Symbol.asyncDispose`, as had been requested. PR #178 was moving resources to remove complexity and fix a spec bug. PR #175 was adding the missing calls to the NewDisposeCapability AO. PR #171 is the correct use of the previously determined hint to ensure that the reference binding is correct and accounts for `await using`. And PR #167 was adding the missing `prototype` property for sync and async stacks. + +RPR: Any objections to those? We have consensus. + +RBN: All right. Thank you. + +### Summary + +This is work in progress. An update was given on what has happened since the March 2023 TC39 meeting: what has been completed, what has not, and what the open issues are. + +### Conclusion + +The committee reached consensus on several normative changes (or bug fixes) on explicit resource management: + +Consensus on PRs #180, #178, #175, #171 and #167.
+ +Debates about the appropriate use of GC and Symbol.enter are ongoing and will take place in overflow time. + +## TG3 update and chair appointment + +Presenter: Chris de Almeida (CDA) + +- https://www.ecma-international.org/task-groups/tc39-tg3/ +- [slides](https://drive.google.com/file/d/1MPHGzy4aH_vRnduuuuUucP7xq_clcrK2/) + +CDA: So TG3: an update and, hopefully, convenor confirmation. Next slide, please. So as we discussed at the last meeting, TG3 has not been meeting due to lack of a chair. So we got together recently to discuss resuming meetings, what topics we were interested in covering to begin with, the meeting schedule, and the proposed convenor group. We are still ironing out details for the meeting schedule. The Secure ECMAScript meeting folks graciously offered to sacrifice one of their monthly meetings for this – they meet every week. We will use one of those meeting times every month for TG3. But we also wanted to have a more APAC-friendly time, so we will alternate between the current meeting time at 12 Central and a more APAC-friendly time, at a monthly or bi-weekly frequency. We are still working that out. + +CDA: One small detail: we had to tombstone the Matrix room, because we were unable to get hold of the sole admin. The new TG3 room is already created and can be found in the TC39 space. + +CDA: The proposed convenor group is myself and JHD. I think at this point I will need to drop, and I don’t know if JHD is in the Zoom . . . but I don’t think so. + +[Private discussion about convenors for TG3] + +RPR: The discussion has concluded and we have welcomed JHD and CDA back in. So, CDA, the answer is that, yes, you and JHD are happily the convenors of TG3. Congratulations! + +CDA: All right. Fabulous.
+(applause) + +### Summary + +An update on TG3 has been given: + +- https://www.ecma-international.org/task-groups/tc39-tg3/ +- [slides](https://drive.google.com/file/d/1MPHGzy4aH_vRnduuuuUucP7xq_clcrK2/) + +### Conclusion + +TC39 noted and approved the update and had consensus on JHD and CDA as new convenors of TG3. TC39 wished them successful work. + +## TG4 charter and chair appointment + +Presenter: Jon Kuperman (JKP) + +- [repo](https://github.com/source-map) +- [slides](https://docs.google.com/presentation/d/11Cv2XnTZfd9yBCq1WctKzSwc9Q2ZJkhklOVTbNyUyxU/) + +JKP: Hello. I am Jon Kuperman. I work at Bloomberg. I am proposing a charter for the TG4 source maps task group. This is my first meeting. Last time DE brought this up and we were asked to come back with an official charter and program of work, so that’s what we have done. Just a little bit on the current state of source maps. The specification lives in a Google doc and it’s pretty sparse. There’s quite a lot of ambiguity, which has caused implementation differences at the browser level and at the generator and post-hoc(?) levels. In addition to that, we have had the GitHub repo, where we have been gathering correctness issues – people are unsure what the specification means – and those issues have been piling up a lot. And also, there are quite a few features that source maps lack for performance and expressiveness that companies have developed third-party solutions for. I have got them linked in the slides. Bloomberg has pasta-sourcemaps, and Sentry has another solution to pass function extent information through. + +JKP: This is the scope of the charter. I am not going to read it out loud, but the slides are linked in the agenda. I will pause for a second so people can read it. And then I did share out the slides for this. And then we also have our program of work, of which I probably shouldn’t read the entire thing. But the main points are these.
To focus on correctness: go through the specification, address the correctness issues, and make sure that we have a very tight and well-defined specification for what is a conformant source map. After that, to improve source map expressiveness with the features the community is adding, and also to make sure we work closely with other standards bodies such as W3C on all the work we are doing. + +JKP: Right now there’s a great list of participants. But we're looking for more support from people. Another slide will cover the meetings we are doing, but these are the folks participating so far. We have some exciting proposals that we have started talking about. I wanted to quickly cover three of them to show the scope. One is the definition of a column – source maps refer to line and column numbers. Another is about scope and variable names, and the third is debug IDs. These are three things we have been doing a lot of discussion about and are trying to come up with better updates for the specification on. + +JKP: The first is columns. We figured out that browsers all agree on the line number, but sometimes disagree on the column number: whether the column counts code units or code points, and what this would mean for formats like Wasm. There has been a lot of discussion around this; Armin from Sentry brought information about Unicode characters, what browsers return as column numbers, and where they differ. + +JKP: Function name mapping: I took this from Bloomberg’s pasta-sourcemaps, showing that right now with source maps you have the names for functions like the code on the right, whereas with something like this addition you will see the full function names in the decoded stack. + +JKP: And the last one was debug IDs. Source files get built together by bundlers, and the source map has a hard time linking back. The idea is to add a debug ID to each source map, linking back to the comment at the bottom of the file perhaps, which would make it easier for post-hoc debuggers.
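The column ambiguity mentioned above can be illustrated in a few lines: once a character outside the Basic Multilingual Plane appears, UTF-16 code unit indices and code point indices disagree for everything after it.

```javascript
// "💩" is a single code point but two UTF-16 code units (a surrogate pair).
const line = "a💩b";

// Column of "b" counted in UTF-16 code units (what indexOf reports):
const unitColumn = line.indexOf("b"); // 3

// Column of "b" counted in Unicode code points (iteration is per code point):
const pointColumn = [...line].indexOf("b"); // 2
```

A source map generator and a consumer that pick different conventions will point debuggers at different characters, which is the interoperability problem under discussion.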
+ +JKP: How we are working: everything is done in [a GitHub org](https://github.com/source-map). Repos for the spec, and repos for RFCs and testing. A monthly Zoom call discussing correctness in the spec is the number one focus. We temporarily have an additional monthly call for naming – that is, variable and scope names getting passed through. Both of these are on the TC39 calendar, and I linked to some PRs we have: clarifying text around JSON over HTTP, the precedence between HTTP headers versus inline annotations for source maps, things like that. That’s all I had. Thanks very much. This is requesting consensus on chartering a task group with a scope and program. And for this, do I physically leave the room? + +DE: Yes. So we would like the committee to review the scope and program of this. As well, Jon and I want to be co-convenors of TG4. I guess for both of those propositions . . . + +EAO: (from queue) “any reason not to mention CSS source maps explicitly?” + +DE: I would be happy to add that to the scope. It’s definitely a necessary property of source maps that they remain multi-language, and continue to support CSS in particular as well as WebAssembly. If you could suggest a wording for how to capture this being multi-language, then I would be happy to make that kind of change. + +EAO: Absolutely. Sorry, was the request for now or sometime later? + +DE: Maybe during this week. Because I would like to conclude on the charter, to get this approved by the committee, put on the website, and make an actual TG. Yeah. Last meeting, I was trying to charter this, but later heard that Ecma found it wasn’t chartered specifically enough, because the Ecma bylaws say you need the scope and program of work approved by the technical committee, which I had written down more vaguely. + +EAO: To follow on, how or where should the scope change be proposed? Is there a repo for this? + +DE: Yeah. It’s not any one repo. You can see the charter on the slides.
So maybe you can edit that and then just send me the final text. We could have an overflow item to approve that as well as conveners. We don’t need to step out of the room right now, but we could do that once we have the final charter. Yeah. + +MF: Are we going to bring the repos under TC39 and manage them under all the same processes we use for all the other repos? + +DE: Yeah. I thought we would move the repos to the TC39 org. If not, we should indicate the source map organization is under TC39. Source maps is currently divided into 3 repos. This division makes sense. And as far as adopting all the same processes, I think the processes for how the source maps TG will work, and how it will, you know, end up proposing a standard to the committee, are a little TBD. But I think just for archiving, it makes sense to bring it into that org. Do you have any other thoughts? + +JKP: No. + +CDA: Does the previous slide have the scope? Okay. Thank you. + +DE: Of course. + +CDA: Yeah. Just as a reminder for everybody, what appears on the website for the task group is the scope and the program of work. + +CDA: Okay. I don’t know if there are any hands raised or anything in the room, but there’s nothing on the queue. So I think we can proceed with – + +DE: On the chat, Eemeli proposed that we could just change the scope to say “including CSS and ECMAScript code”. And if we could agree on that change, then we can go to approving this now instead of in an overflow topic. + +DE: How do people feel about that change? + +CDA: We could also consider omitting the parenthesised part entirely. + +DE: I don’t like that. I would like to refer to ECMAScript specifically. On the other hand, arguably the current text already implies it could include more things, and we can plug in the change to specifically mention CSS. Yeah. + +EAO: I am asking for CSS to be included in particular because CSS is not really a programming language.
So it’s easy to look at this and not realize that CSS is included also. + +DE: Yeah. I think it’s clear to participants in the group that CSS needs to be under consideration, and it makes sense to document it in the scope. + +JKP: I totally agree. It’s been coming up, the reminders are healthy, and I like the idea of having it in the scope. + +DE: Okay. Great. So Jon and I will leave the room while you all consider whether to both charter the group and approve us as co-conveners. + +(private meeting) + +Explicit support from EAO, NRO, PCO, LCA, CM, CDA, JWK + +CDA: Well, welcome the new convenors of the freshly chartered task group 4 for source maps. + +(applause) + +CDA: All right. I don’t see anything in the queue. + +### Summary + +JKP presented the draft TG4 charter, including a proposal for the TG4 conveners. + +- [repo](https://github.com/source-map) +- [slides](https://docs.google.com/presentation/d/11Cv2XnTZfd9yBCq1WctKzSwc9Q2ZJkhklOVTbNyUyxU/) + +### Conclusion + +TC39 unanimously decided to accept the proposals: task group 4 for source maps has been chartered with DE and JKP as co-conveners. One change to the presented text: CSS will also be explicitly called out as in-scope, alongside the existing ECMAScript mention. + +## Updates from the CoC committee + +Presenter: Chris de Almeida (CDA) + +- No slides presented + +CDA: Updates from the code of conduct committee. I don’t have slides for this. The only update we have is that we received a report back in March; it was addressed, and we consider the matter now resolved. One snag with that, however, is that per the code of conduct itself, we are meant to resolve any reports of violations in a speedy manner, ideally within a week. And it took us months. The reason it took us months is because we did not have a quorum in meetings. This surfaced a problem that we have, and hopefully it’s in the past.
The membership of the CoC Committee looked numerous, but a large number were not active and did not come to the meetings. We need a functional code of conduct committee to deal with any reports that come in and the other activities that the code of conduct committee is responsible for. + +CDA: To that end, we pruned inactive folks and then put out a call for additional participants. And we are thankful that both TAB and RCA have agreed to join us on the CoC Committee. The way that we handle new folks for the committee is a little different than what we have done for the convenors of the task groups; for example, we are not going to be doing that in real-time here. We ask that if you have any objections to the new folks joining – again, that’s TAB and RCA – please let us know before the end of plenary. You can do that by either contacting, you know, the existing code of conduct email or just reaching out to one of the committee members directly. Those are currently myself, JHD, and MPC. That’s it for the CoC update. I see an item on the TCQ. + +MF: Do we need to get reports from the CoC group on attendance? + +MF: Yeah. I wanted to know if there’s some way we could avoid getting ourselves into this situation again – because you said there were like many months between the report being received and action being taken. There were also plenaries in between, it seems. So if we were receiving reports on whether the CoC group was being effective, we could maybe monitor for that. + +CDA: That’s fair. Yeah. Probably we missed an opportunity in May to mention this. I guess the caveat there is that we hadn’t yet addressed the report, so it might have been premature to update the committee. But I guess what is the . . . specifically the ask? + +DE: I think you have mentioned this to the committee before, that there weren’t enough people in the code of conduct committee.
I appreciate that you mentioned it, did this call for volunteers, and have pruned the list so that it’s not inaccurate anymore, as it once had been. + +CDA: Okay. Michael, did I answer your question? + +MF: I think so. I guess maybe it would be nice if the CoC group had an action item to see how they could prevent the same situation in the future. It may already be resolved, but it would be nice to spend time thinking about that. + +CDA: Sure. I appreciate that. I think part of the reason why it maybe wasn’t as streamlined could be because we get very few reports. This is the first one we have ever gotten since I joined the CoC Committee. So I guess we were sort of taken by surprise that we weren’t able to have quorum, but yeah, we will try to keep this in mind for the future, absolutely. And we have impressed upon the new folks joining that it’s, you know, really important that they are able to attend the biweekly meeting. Thank you. I am not seeing anything else in the queue. + +### Summary and conclusion + +An update on the CoC committee was given. TC39 Plenary noted the update. + +## TC39 Public Calendar update + +Presenter: Chris de Almeida (CDA) + +- No slides presented + +CDA: Next topic will be the TC39 public calendar. + +CDA: All right. So we have been talking about this for quite a while, but at the last plenary we agreed on a path forward. For every meeting that appears on the TC39 private calendar, there are now issues in the reflector for whether or not they should go on the public calendar. I’ve been asked what the guidance is around what goes on the public calendar. As we discussed at the last meeting, there is no hard and fast rule about what belongs there. And there are some different – I don’t know – philosophical views on what goes up there.
Some people take the tack that if it’s not something generally open to the public, then don’t put it on the public calendar. Ultimately, it’s up to the meeting participants. I think some of the low-hanging fruit for the public calendar is things like the outreach meetings. Plenary is a good example of something else we have on the public calendar, even though it’s not a public meeting, per se. + +CDA: So I think the easiest thing for everybody to do would be, at the next meeting that you have, maybe spend a couple of minutes at the beginning or the end deciding, first of all, whether you want it to appear on the public calendar. If you do want it to appear on the public calendar, you need to remove anything in the description, notes, or documents that you wouldn’t want just out there on the public calendar. The other consideration is the invite list: the meeting can be set to either show or not show the invitees, and if you’re going to show the invitee list, you should make sure that everybody who is on the list is okay with having their name and email displayed on the public calendar. + +NRO: Where can we find links to the calendars? + +CDA: I will add that to the Matrix chat. The short answer: the private calendar links are on the reflector, and the other is in the ‘how we work’ repo. I will post the links to these – to the issues containing the details. + +NRO: Okay. So maybe rather than post the links, is it possible to add them to the reflector right now? For the old calendar, I have to go search through to find it. It would be good to have an easy way to find not only the public one, but the private one for us. + +CDA: Sure. I agree. We should absolutely do that. I will take that as an action item. We also need to add it to, you know, the emails we send out for newly onboarded delegates and invited experts. Next we have Shane. + +SFC: (from queue) “who has edit access?” + +CDA: Right now just the chairs: myself, RPR and USA.
I don’t have strong feelings about it; if some other folks would like to have edit access and that’s helpful, I think that’s perfectly okay. + +SFC: The reason I put this question here is because, like, I schedule the meetings for the Temporal Champions as well as TG2 and occasionally some other ones. And I have the private calendar just in my Google calendar, and it’s easy to add events. They are on the private calendar because that’s the place we add them easily. But, like, if there’s going to be a higher bar for adding things – TG2 should be public; it’s a full task group and an official meeting, right? It might belong on the public calendar. But it seems more tricky to get those events added there. So I am raising that as a potential challenge. + +CDA: Yeah. Thank you for the question; this brings up a good point. I can clarify, and this is an important point: we actually don’t manage the public calendar in the public calendar itself at all. The private calendar is actually the system of record. That is the calendar, the one calendar to rule them all. Anything that appears on the public calendar only appears there by way of being invited via the private calendar. The idea is that we don’t want to maintain two calendars and deal with any concurrency issues, like updating the meeting time in one and forgetting to do that in the public calendar. We only manage everything on the private calendar. + +SFC: I see. The model is that the public calendar has, like, an email address to invite to the event. That sounds good. I think I have done that before with other things too. + +CDA: Yeah. Even though, like I said, the chairs are the only people who have edit access to the calendar, all the editing happens by way of the private calendar anyway. So . . . yeah.
Again, all you do to make an item appear on the public calendar is invite the calendar ID to the meeting on the private calendar. Okay. Any other questions or comments? Nothing in the queue.

RPR: I think we are good.

### Summary and conclusion

An update on the practice regarding the TC39 public calendar was given and discussed. TC39 noted the discussion.

## Resizable buffers bug fixes (#120, #126), grow refactor, then maybe for Stage 4

Presenter: Shu-Yu Guo (SYG)

- [proposal](https://github.com/tc39/proposal-resizablearraybuffer)
- [slides](https://docs.google.com/presentation/d/1Q-mm99CchYh2ZqJjz3Jb4BLTJRAqK7El3LQJ1tP5vDE)

SYG: So yeah, this is getting resizable buffers ready for Stage 4. In doing the PR against the main spec, some last-minute bugs have come up that need normative fixes; that’s the important thing. And I think it probably makes sense to wait a meeting to ask for Stage 4, but we can come to that at the end of the presentation.

### [PR120](https://github.com/tc39/proposal-resizablearraybuffer/pull/120) Move detach check after argument coercion in resize

SYG: So, first fix: this is an outstanding one that just dropped off the radar. It was reported by the Moddable folks a while back, but I missed it. Apologies for that. This is #120. The bug is that the resize method takes a single parameter named `newLength`, and there is a detached buffer check before the ToIntegerOrInfinity coercion, which can call arbitrary user code. That means there would also need to be a detach check after the coercion is done. It’s annoying to do a check, then do a coercion, and then immediately do the check again. So my proposed fix, and my preferred fix, is to have a single detached buffer check that is after the coercion.
The general design principle to hold to is to do receiver checks first, then coerce and check arguments in left-to-right order. This is an exception to that rule, but it’s an exception that we have made in the past, specifically for detach checks, to avoid this kind of double-checking. So the principle is: still do receiver checks and then argument coercions and checks left to right, except for detach checks, which we can do once in order to avoid unnecessary rechecking. There are alternatives, as shown in the discussion in #120. For example, we could completely switch the argument checking order: we could say we are going to check all arguments first and then validate the receiver.

SYG: The current proposed fix is more targeted than that. We keep relatively the same order, except that for detach checks we make an exception. And we have precedent for this with another method, `transfer`: we already did this kind of single detach check, as late as possible, for `transfer`. So we propose to do the same for `resize`. Before moving on to the other topics, I would like to get consensus for this one, or take any questions if there are any.

RPR: So far, no questions in the queue. So are there any objections to this change?

DE: Do we have tests for this change?

SYG: There will be tests for this. I am in the middle of converting all the staging tests for resizable buffers, making a PR moving them into the right directories, which is also the reason I said it might not be quite ready for Stage 4 at this meeting.

DE: Okay. Yeah. This change makes sense to me. It seems good.

PST: Just to mention that the issue was found by a fuzz tester.

SYG: Good to know. I saw just this morning, when I opened the PR, a JSC commit referring to the issue. I think this was manifesting as throwing an error. This changes the spec behavior, where it was unclear before.
The spec behavior after the fix would be to always throw a TypeError.

MF: I support this change, and I would like it to set a precedent for similar changes or designs in future methods.

SYG: Thanks. I hope for KG’s spicy presentation later, about stopping coercing things, to set an even stronger precedent, but we will see.

DLM: Explicit +1 for #120.

RPR: Just to confirm, for the notes, that we record that as consensus for #120.

### [PR126](https://github.com/tc39/proposal-resizablearraybuffer/pull/126) Normative: Correct buffer limit checks in `TypedArray.p.copyWithin`

SYG: Next one is #126. Another diligent and detailed reviewer, ABL, found some arithmetic bugs in the loop bounds. Before I show the fix, to build context, the gist of the bug is that now, with resizable buffers, some TypedArrays can be length-tracking. If you make a TypedArray that is backed by a resizable buffer and don’t explicitly give it a length at construction time, the length becomes automatic and is recomputed every time you ask for it, depending on the size of the underlying buffer. Given that TypedArrays can be length-tracking, and given that user code can shrink the underlying resizable buffer – in addition to detaching it – in the same places where user code can be called, like argument coercion, we have to worry about shrinkage as well as detachment. Plus, given that TypedArrays can have byte offsets, we have to reload the length after user code runs and recompute the bounds. There were bugs in the `copyWithin` method on `%TypedArray%.prototype` that forgot to add the offset after reloading the length. That’s the gist of the bug.

SYG: The fix is as follows. Let me try to page this back in. On the top screenshot, on line number 40132, that is the length reload. After user code is called and argument coercion is all done, we reload the length and recompute the bounds to do the copy.
The copy forgot to add the byte offset; in the fixed green line in the diff view here, the byte offset is added, and it is now the limit. The second diff below shows the same kind of mistake: it is not coherent because it forgets to add the byte offset. I believe some of the AOs used here require their inputs to be in bounds, so without adding the offset, things could have been out of bounds and the spec was incoherent. If you implemented this literally, things would crash, I think. But for these TypedArray methods, implementations are not derived literally from the spec text.

SYG: Okay. So before moving on, any concerns with this one?

DLM: Explicit support for #126.

RPR: Great. Are there any objections? No. Okay, I think we have consensus on #126 as well.

SYG: Thank you.

### [PR127](https://github.com/tc39/proposal-resizablearraybuffer/pull/127) Normative: Correct buffer limit checks in `TypedArray.p.slice`

SYG: #127 is kind of similar. This is `%TypedArray%.prototype.slice`, where the context for the bug is very similar: we need to reload the length and recompute the limit so that we don’t slice out of bounds. There was a mistake where I took the min of the wrong thing; the target byte index should have been factored out. And this is that fix. I think it’s also the case that if you implemented this literally, it would crash, but I don’t really remember. Any concerns about this one?

RPR: Any support, or objections, for #127?

DLM: Explicit support for #127.

SYG: That was a straightforward bug fix. I will take the silence as no objections. These are already reflected in the upstream PR against ECMA-262.

RPR: Agreed. We have consensus on #127 with a +1 from Dan Minor (DLM).

### Fix loop bounds in `ArrayBuffer.p.slice`

SYG: Thank you, DLM. There’s no issue number for this one, but it’s in the same vein. This one is in `ArrayBuffer.prototype.slice`.
Currently in the spec draft, you compute the limits and then you call CopyDataBlockBytes. The issue I found is that CopyDataBlockBytes wants all of its inputs to be in bounds, and currently the length recomputation doesn’t necessarily mean that the input to CopyDataBlockBytes will be in bounds; it needs to be clamped, as the suggested fix shows. This is just to take that fix.

SYG: The current spec text is technically incoherent. This can only happen when the underlying ArrayBuffer is shrunk. When the start is equal to the current length, no call to CopyDataBlockBytes would be in bounds, and it should not be called. So this is that fix. It does not have an issue number, but it is reflected in the upstream PR.

SYG: Consensus here before moving on?

RPR: Any objections to this part? No.

SYG: Okay. Sounds good.

### Refactored `SharedArrayBuffer.p.grow`

SYG: The final one is not really a normative change – I think it’s not a normative change, but I don’t want to think too hard about it. Basically, this is more of an FYI to folks reading the spec: in the upstream PR for ECMA-262, the spec text for `SharedArrayBuffer.prototype.grow` was significantly refactored. The current proposal spec text directly manipulates shared memory events in a way that I don’t think reads well. The refactor is the spec text version of this pseudocode, which is the more or less obvious way you would implement grow anyway: you load the length atomically and go into a loop that checks whether you still need to grow – or whether you raced with another grow and no longer need to – and you try to grow with a compare-exchange; when you succeed, you return. So if you are reading the spec and find that grow looks significantly different, keep this pseudocode block in mind: it is what the new spec text is intended to reflect, and how you would implement it anyway. I don’t think there are too many ways to implement this.
And I don’t think this is normative – but don’t press me on that – in that there are different shared memory events than what the current spec draft says, but I think the allowed outcomes are still the same.

SYG: So with that, any questions before I move on? Okay.

RPR: Nothing in the queue.

SYG: Here is the shipping status: shipping in Chrome and Safari. There are a lot of tests already in test262. I think there are some surface API-level tests and about 80 to 90 tests for a bunch of these methods – the Array prototype, TypedArray prototype, and ArrayBuffer prototype methods that interact with resizable buffers – but they are in staging. I was hoping to migrate them out before this meeting, but I ran out of time. The upstream PR is #3116. So I think I will delay asking for Stage 4 until next meeting, because otherwise it would be contingent on these tests being migrated out of staging, and that probably doesn’t make much sense. But we can discuss that quickly right now.

SYG: I am happy to wait until next meeting, but there is unlikely to be anything new to say. So I will ask the more controversial question: do folks have concerns about granting conditional Stage 4 now, given that the test262 tests are not migrated out of staging?

DLM: (from queue) “I think it would be better to wait for next meeting for stage 4”

MF: (from queue) better to wait for Stage 4

SYG: Okay, I see things on the queue saying better to wait for Stage 4. I agree; that’s fine. That also gives more time for reviews of #3116 for these kinds of corner-case arithmetic bugs – it’s a large PR. All right. Thanks for the consensus on the normative fixes. They should already be incorporated in #3116. I will come back next meeting to propose Stage 4.

### Summary and conclusion

Consensus was achieved for all normative fixes.
+ +- #120 - Do a single detach check after coercing argument of ArrayBuffer.prototype.resize +- #126 - Fix loop bounds arithmetic on %TypedArray%.prototype.copyWithin due to shrinkage in argument coercion +- #127 - Fix loop bounds arithmetic on %TypedArray%.prototype.slice due to shrinkage in argument coercion +- Do not do out-of-bounds copy in ArrayBuffer.prototype.slice due to shrinkage in argument coercion + +This proposal is shipping in Chrome and Safari. There are some test262 tests, but some are in `staging/`, and coverage is not 100% for these latest fixes. + +The committee discussed being “conditional Stage 4” (on reviews and tests), but multiple people voiced preference for a more cautious approach of sticking with Stage 3 for now. + +Stage 4 is deferred until a future meeting pending the normative PR on ECMA262 being reviewed and tests being moved out of staging as well as test coverage for all of the normative issues fixed. + +## Array Grouping for Stage 3 + +Presenter: Jordan Harband (JHD) + +- [proposal](https://github.com/tc39/proposal-array-grouping) +- [spec presented](https://tc39.es/proposal-array-grouping/) + +JHD: Alrighty. So . . . as was presented in the previous meeting, the proposal is now `Object.groupBy` and `Map.groupBy`. The first argument is iterable. And they take a callback function to determine now to group the results that returns a key for the object or the Map. + +JHD: The only open question is about the naming of these methods. But and that’s only – that was brought up after the last plenary, by filing an issue, by a delegate, and other than that bikeshed discussion, everyone who said that they would review it has reviewed it, except I am not sure if – if MLS is online or anything from the webkit team was able to take a look. + +MLS: I am here. + +JHD: Cool. Were you able to take a look or someone else able to take a look at the proposal? + +MLS: I don’t think we have looked at this. 
I can check with colleagues and get back to you later in the meeting.

JHD: Okay.

MLS: Okay. That’s fine. Thank you.

JHD: But assuming that that review does go well, then the only remaining issue would be the naming. And I don’t know if you want to pull [issue #57](https://github.com/tc39/proposal-array-grouping/issues/57) up for me.

JHD: I’ll do my best to summarize the issue, but, ACE, if you’re around, feel free to jump in if I miss anything. Essentially, the concern raised is that `Object.groupBy` sounds a little like it operates on an Object argument, as opposed to building one. The issue notes that `Object.create` and `Object.fromEntries` do not operate on an Object argument – they produce an object – but because of the words “create” and “from”, these are the exceptions, and they strongly convey what they’re doing. So some Bloomberg folks came up with some alternatives, like `Grouping.from`, and there’s been some back and forth. I pointed out that `Promise.all` and the other combinators produce a promise, `Array.from` produces an array, and so on. So my personal preference, as well as JRL’s – so the champions’ preference – is to stick with the name `groupBy`. But for me at least, this isn’t something I’m strongly attached to; the functionality is more important than the exact name. So if the committee thinks it’s important enough to change the name at this point, then we can do that. I just wanted to open it to the room, in case anyone had alternative thoughts or a strong argument in either direction.

RPR: Okay, at the moment there’s nothing on the queue, which is surprising for a naming issue. Hopefully good. EAO has +1 for the current groupBy. And ACE?

ACE: Yeah, I do stand by my original point, but all things considered, JRL’s and your argument about the precedent for the name of this function being `groupBy` convinces me that it’s the right thing overall. So please consider it closed from my perspective. We did discuss this within Bloomberg and came to that conclusion.

RPR: So +1 for groupBy from KG and CDA, and from CM and everyone at Agoric, and SYG says it sounds good to him.

JHD: Okay. Well, that’s convenient for me, certainly. All right, that’s great. So then I guess the next thing is, Michael: I was going to ask for conditional Stage 3 on your – you or someone on your team’s – review before the end of the meeting, but if you prefer, I can wait to ask until after you’ve reviewed.

RPR: Yeah, let’s go with a conditional advancement request.

JHD: Okay. Then I’ll ask the room: can we go for conditional Stage 3 for this proposal? The condition being that Michael or someone on his team has successfully reviewed it and any issues brought up have been addressed.

RPR: All right, so this is a conditional question for advancement.

DE: I support conditional advancement, if the only thing missing is reviews, given that we don’t have any issues that we know about.

SYG: A question for Michael: I have reviewed this, and I would actually like to ship it fairly soon. There is some demand for this method. So given that it’s conditional and you haven’t yet reviewed it, would you like some synchronization on our end before I ship it, or are you comfortable with just shipping it?

MLS: Well, remember, we shipped the prior thing and then unshipped it. I think we’re just as motivated as you. If it hasn’t been reviewed, I will review it tonight.

JHD: Perfect, thank you.

RPR: Excellent. So +1 to the conditional Stage 3 from MS and EAO, and also Stage 3 conditional support from CDA.

JHD: Thanks, everyone.
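For readers of the notes, here is a minimal runnable sketch of the `Object.groupBy` semantics under discussion. It is illustrative only: the helper below is hand-written, and it coerces keys with a template literal rather than the spec’s exact ToPropertyKey behavior.

```javascript
// Sketch of Object.groupBy(items, callbackfn): group the items of an
// iterable into a null-prototype object keyed by the callback's result.
function groupBy(iterable, callbackfn) {
  const result = Object.create(null); // per the proposal, a null-prototype object
  let i = 0;
  for (const item of iterable) {
    const key = `${callbackfn(item, i++)}`; // simplified key coercion
    (result[key] ??= []).push(item);
  }
  return result;
}

const grouped = groupBy([1, 2, 3, 4, 5], (n) => (n % 2 === 0 ? "even" : "odd"));
console.log(grouped.odd);  // [ 1, 3, 5 ]
console.log(grouped.even); // [ 2, 4 ]
```

`Map.groupBy` is analogous but returns a Map, which also allows keys that are not coerced to strings.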

RPR: We have consensus on the conditional Stage 3. Is there more to talk about on this topic, JHD?

JHD: No, that’s it. Thank you.

RPR: Okay, that was very quick. All right. Could we have a summary for the note takers, please?

### Summary and conclusion

1. We have consensus to keep the name ‘groupBy’
2. Stage 3 is conditionally approved pending the remaining review from MLS (tonight). No particular issues are anticipated.

## Public calendar continuation

CDA: I just wanted to say, with regard to determining whether your meeting goes on the public calendar or not: just a reminder, nothing is permanent, so you don’t have to agonize over the decision. If you include it on the public calendar now, you can take it off later. If you don’t want to include it now, you can always add it later. That’s all.

## Deferred import evaluation

Presenter: Nicolò Ribaudo (NRO)

- [proposal](https://github.com/tc39/proposal-defer-import-eval/)
- [slides](https://docs.google.com/presentation/d/1rSsVsFsnXQZ8pEGFwAGiVbVqndr4DHEUqTGEM9Au0_4/edit#slide=id.p)
- [spec](https://tc39.es/proposal-defer-import-eval/)

NRO: This is deferred import evaluation, for Stage 2. This proposal was originally started by GB, and over the past few months I’ve been working with Guy on it. Just to recap: the goal is to provide a way of optimizing startup to be as fast as possible, given some very strong constraints we need to respect because of how modules work. So what are the circumstances? We’re mostly looking at how to optimize some large code bases where there are a lot of modules. Module loading is a significant part of the start-up cost, and we want to make this work without forcing everything to become async. We want this to be as easy as possible to maintain. And there is no one-size-fits-all answer: in some cases, you might be okay with just deferring everything using a dynamic import.
In other cases, you are okay with getting somewhat fewer benefits in exchange for better ergonomics.

NRO: As I mentioned, we can already have some sort of lazy imports with dynamic import. Say you have this code where you import some module that has a high initial evaluation cost, but the value from this module is rarely used – it might not actually be used at all, depending on how the program is run – so you might want to defer this start-up cost. What you do is, instead of using the static import (imagine the first line is deleted), you use a dynamic import inside the function where you actually need this value. But then you need to mark your function as `async`, and everything becomes asynchronous. It’s very viral: you need to add `async` everywhere, even where it’s not actually meaningful, just because you want to lazily load some very deep dependency.

NRO: And this proposal, deferred import evaluation, tries to solve this: to improve start-up performance without forcing you to change your API.

NRO: So, what can be deferred? When we talk about modules, we have seen the phases of a single module. First we load the module – this might be asynchronous, because there might be a network request, or it might happen synchronously, because some platforms actually load modules synchronously. Then we have module parsing, which is needed to find all the dependencies and load them as well. And finally, we have what this proposal targets: deferring module evaluation, the last part. We can do all the potentially asynchronous work ahead of time, and do all the parsing needed to collect all the dependencies – because across many modules this can be a lot of work – and defer just the evaluation part.
If you’re familiar with how we have presented the various phases of loading modules in other module-related presentations, this is basically the same, just expressed with those phases. So we’re not deferring everything – does this still bring some significant improvements?

NRO: These slides are from the presentation of the proposal some months ago by GB. YSV originally did some analysis on Firefox-internal JavaScript code, and she found that almost half of the time is spent loading and parsing the modules, and the rest of the time is spent on their initial evaluation. So in this specific case, this would save roughly half of the time. And there are other examples. For example, Babel is based on many plugins: for everything you need to compile, you need a different package, and we need to initialize the plugins, set up some helpers, and do other things while loading the module. This was very expensive, especially considering that people now don’t compile every feature using Babel for (indiscernible). Many of the plugins were not used all the time, and we found that by lazily running all this start-up logic we could improve startup times in many Babel setups. There are also other examples: Jest, years ago, moved to lazily loading some dependencies, so that they were only loaded when actually needed for the tests being run, and they found some real startup improvements.

NRO: Okay, so how are we proposing to achieve this? What is the API we are thinking of? Import statements would have the `import defer` syntax, followed by a namespace import. This follows the syntax direction already established by other proposals that modify the `import` keyword.
And this `import` statement would load the module and collect its dependencies, but would not actually evaluate them until a property on the namespace object is read. So in this case, accessing the property triggers the evaluation of the module at the time of the property access.

NRO: So it’s namespace access only – not named imports – because we don’t really want just accessing a binding to trigger effects. We want to constrain this to property accesses, so we’re constraining the API to only work with module namespaces, so that we are always triggering evaluation via a property read on an object. For example, say we have a module that imports A and B, and B is deferred, and we evaluate this module graph: A is not deferred, so it is evaluated and we see its console log; B is not evaluated, because it is deferred; and then we start evaluating the top-level module. When we access a property from the namespace of the deferred module, this triggers the evaluation of B, we see B evaluated, and finally we keep evaluating the top-level module. And so in our initial example – which we saw how to refactor, today, to use dynamic import to avoid paying the initial start-up cost when it’s maybe not necessary – we can now use the `import defer` syntax instead.

NRO: There is a problem, though, and the problem is that module evaluation is not always synchronous, because we have top-level `await` – and property access needs to be synchronous. There are different solutions to this. One solution would be to just not support top-level `await`, and throw if there is a top-level `await` anywhere. But we don’t really want to split modules into those that are synchronous and those that can be deferred, so what we’re proposing is to eagerly evaluate asynchronous modules even if they’re part of the deferred subgraph.
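To make the refactor NRO describes concrete, here is a sketch contrasting the two versions. The second uses the proposed `import defer` syntax, which no engine ships yet; the module and function names are hypothetical.

```js
// Before: deferring evaluation via dynamic import() forces this function,
// and transitively its callers, to become async.
export async function process(input) {
  const { parse } = await import("./parser.js");
  return parse(input);
}

// After (proposed syntax): ./parser.js is loaded and parsed eagerly, but
// not evaluated until a property of the namespace is first read.
import defer * as parser from "./parser.js";

export function process(input) {
  return parser.parse(input); // first access triggers synchronous evaluation
}
```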

NRO: Let’s say we have this module graph with our entry point at the top; the dashed arrows mark deferred imports, and one module uses top-level `await`. When we start evaluating, we first detect which modules need to be evaluated: there is the eager dependency, but there is also an asynchronous module in the deferred graph, so we need to evaluate that module together with its dependencies. So we evaluate module 1, then 2 and 3, and finally we can evaluate the top-level module. Then let’s say that at some point something triggers the evaluation of the deferred graph. At this point, we find the dependencies of the entry point of this deferred subgraph and start evaluating them. One of them has already been executed, and that’s okay: it can already happen today that, when evaluating a module, one of its dependencies has already been evaluated for other reasons. Then we go ahead and evaluate the top-level module of the subgraph.

NRO: So this is the solution: where `import defer` is used for a module with top-level `await`, something must happen ahead of time. It’s deterministic – you know that only the asynchronous part happens ahead of time – and it still allows you to defer the evaluation of all the synchronous parts.

NRO: In the first slides, I mentioned how we have dynamic `import` to defer loading and parsing, while this proposal is just about deferring evaluation. That’s the only guarantee we can make. But there are some environments and platforms in which this proposal would unlock deferred loading and parsing as well.
This can happen in cases where loading modules is synchronous, and where we can generate some metadata ahead of time so that we know which modules are synchronous, which modules have errors, and which modules can actually be fully deferred. For example, there might be a build step that generates the metadata, and then at runtime we can just query the metadata: if a module can be fully deferred together with its dependencies, we can fully skip loading it. Some examples are internal code in browsers, or cases where the code is already cached; there are setups where, on deploying or pushing a commit, some ahead-of-time analysis of the deployed code is already performed. And there is code compiled to CJS: the way deferral is done there is to actually defer the `require` calls, and that would still match the semantics of this proposal.

NRO: So what are some current language properties, and how does the proposal change them? Right now, modules are guaranteed to execute after their dependencies if there are no cycles. With this proposal, that property still holds, except obviously if the module is explicitly marked as deferred. It’s already the case that some dependencies of a module might have been evaluated before the module that imports them, because they are imported from somewhere else. The way this proposal handles evaluation is possible because there is already no guarantee about the order in which a module and its dependencies are evaluated relative to the rest of the graph. However, it makes it slightly more common to see dependencies that have already been evaluated, in the case where you are deferred and your dependency uses top-level `await`.

NRO: And there is a property that we lose: right now, module code is always evaluated at the top level of an execution stack.
In this example, let’s say we have this `query` function that must not be called during a call to the `update` function. `update` receives a callback, and this is not reentrant: `query` must not be called from the callback. This module is currently safe, because module code is always evaluated with a fresh stack, so it doesn’t need to check that it is not being evaluated during `update`. But if the evaluation of B can be triggered synchronously from somewhere else, we lose this property: B would need to introduce checks so that, if code is running during `update`, it does not call `query`. Thanks to the SES group for discovering this issue. This can already happen in other cases – for example when using CJS – and our expectation is that developers are not relying on this existing language property.

NRO: So that is the proposal as presented so far, but there are some possible extensions we’re thinking about. The most important one is about deferred re-exports. The idea is that you could have modules that defer their re-exports of other modules, and those other modules are only evaluated when actually needed. So in this example in the slides, the `ns.foo` access would only trigger evaluation of A and B, and not actually evaluate C. Or, maybe even with static named imports, this could evaluate A and B ahead of time and completely skip the evaluation of C. This is something that is still very much up in the air – there are no clear semantics yet – but it is something we’re thinking about.

NRO: Okay, so can you already try this somewhere? We have an experimental implementation; you can find a link in the slides, and there are some tests where you can see how the proposal works. And we are already working on an experimental webpack implementation. The goal is not to ship something to web browsers, but to understand how complex it is to implement this proposal.
The implementation doesn’t currently exactly match the semantics; we’re working on it. And lastly, we have a tool that follows the module evaluation step by step, showing how the state of the different modules changes during evaluation. This helped me very much when I started to go through this, because it’s a particularly complex part. And, as I mentioned, we have spec text, so you can check it out if you’re interested. And, yeah, that’s all. Do you want to go to the queue?

CDA: Yeah, we have quite a large queue, so let’s get right into it. First up is Shu.

SYG: Yeah, can I see the profile again? Okay, so the intention here is that the highlighted stuff there is about evaluating the top level?

NRO: Could you repeat that? The audio was not clear.

SYG: Is the intention of the slide to show how much time is spent evaluating a module’s top level?

NRO: Yes. These were some specific modules internal to Firefox that she used during her analysis, and the blue parts are the parts that could potentially be deferred.

SYG: Okay. Okay, I see. All right, I think that clears it up. Thanks.

KG: Yeah, the top-level `await` thing does seem like a problem. I agree with your decision to not make it throw, mostly because you don’t expect adding top-level `await` to a module to be a breaking change. But it seems quite costly that top-level `await`, though not necessarily a breaking change, does cause the graph to suddenly become eagerly evaluated when previously it would not have been. I think that it limits the utility of this feature pretty substantially. I don’t see an alternative, but it makes me less excited about the feature. A lot less excited about the feature.
NRO: So note that top-level `await` doesn’t pollute the whole deferred graph; it only forces the evaluation of the asynchronous module itself together with its dependencies. In this example, the module in red or brown can still be deferred even if it has an asynchronous dependency. So, like, it’s not -- it’s not --

KG: Sorry, I didn’t mean to imply otherwise. I agree with that, but it does cause the module with top-level `await` to be eagerly evaluated, along with its dependencies, and that makes the feature a lot less useful, because you get a lot less deferring. It, I guess, is probably still worth it, but it makes it a lot closer to not being worth it for me.

EAO: Just very briefly: nothing here makes modules with top-level await any more eager than they currently are. They just wouldn’t be able to be deferred, as non-top-level-`await` content would be.

KG: I agree. But the whole point of adding this feature is that we want to defer things, and we actually don’t get to defer things. We get to defer some limited subset of things, and that’s just less good and makes the feature less valuable, because in fact we aren’t deferring the whole graph. We are deferring only potentially a small portion of the graph, so the feature is less useful.

RPR: So I think Kevin’s point is fair that this does reduce the amount of the graph that can be made lazy. In practice, in the system that we have at Bloomberg, we have supported the equivalent of top-level `await` for nearly 10 years, and the number of places it actually gets used is very, very small. It’s a power tool that you only reach for when you need to, because it does have these implications on loading. I think we can see this in the logs.
We can see what happens in the wild, but so far, I’ve not seen evidence that TLA (top-level await) is everywhere, so on that balance of how much this reduces the feature’s value, I believe it’s likely to be a small loss, not a large loss.

DE: Going into that a little bit more, we’ve seen that top-level await, in the few cases we see it, is closer to the leaves of the module graph, which is the case this algorithm handles well: all you have to make eager are those leaves. I think it’s important that we evaluate the benefit empirically, based on how it works in larger programs, and we have seen that it’s very useful in larger programs, in many different environments. So I don’t think that’s going to be invalidated by those programs making tons of additional use of top-level `await` in a way that would block this. But maybe I’m missing something.

SYG: Is there an -- well, incompatibility; I said incompatibility in the queue, but maybe that’s not quite the right word. Given that the existing technique is dynamic import, which is async, which is viral: if you are currently trying to defer some stuff by making it async — by biting the bullet of “I’m going to color my functions” and actually having it virally propagate out — and then this comes along, with the top-level `await` restriction, does this not compose as well as you might hope? Can people then not actually take advantage of the deferrals because they already made some stuff async in the current world? Does that question make sense?

RPR: I think that’s not what we’ve seen at Bloomberg, which is that when you use dynamic import, that fully cuts off -- yeah, separates out the chunks and the portions to be loaded. So it’s never a loss to compose this with traditional dynamic import.
SYG: What I’m saying is: if you’re dynamically importing something that is currently synchronous, and all its dependencies are synchronous, and you say “I want to make some of the dependencies async so I can use them in some async contexts” — then when deferral becomes the thing, do you have to convert those back to sync in order to convert them to a deferral?

DE: You never have to make the thing synchronous, obviously. I think LCA’s answer gets at the question you’re asking also. Oh, Luca had to go, sorry.

RPR: Next we have SFC.

SFC: Yeah, I just wanted to note that my understanding of what was previously called the import reflection proposal with Wasm is one case where I believe we use async modules, and I’m a little concerned about the narrative that all async modules are a power-user feature. My understanding is that async modules are the state of the art for loading Wasm, and that’s definitely a case where being deferred would be desirable. I have another topic later on in the queue as well about this, but I just wanted to flag that.

NRO: So with the current import reflection proposal, source imports of modules are still synchronous; however, once we have the full integration between ESM and Wasm, they will be asynchronous. I don’t know if it’s possible for Wasm to provide some synchronous evaluation capability, but in that case, the integration could be built on top of that, eventually allowing Wasm modules to be deferred too. With the way Wasm is executed right now, yes, it’s asynchronous, and the Wasm part would not be deferred.

DE: So concretely, it really is possible to make Wasm evaluation synchronous. It’s already part of the Wasm API. We’ve made them asynchronous by default where it might take recompilation to instantiate a Wasm module.
However, I think it is important to enable deferred loading for Wasm modules, so I think we can work out how to make this interaction work during Stage 2. The decision to make Wasm modules async was based in part on the implementation in JSC, which always used the baseline compiler before they added an interpreter. The baseline compiler baked in some assumptions that the interpreter doesn’t, so it may be possible to remove this restriction.

KG: In practice, Wasm is not able to be loaded synchronously. Chrome limits it to 4kb. They have not lifted it indefinitely; I think the proposal is 8 megs. Yeah, Wasm in practice can’t do the synchronous thing.

DE: KG, you’re confusing module compilation with module instantiation. Chrome’s limit is all about compilation: you can’t synchronously compile larger than a certain amount. But it’s only a JSC thing.

KG: I’ll take your word for it.

DE: It’s part of the fetching and parsing stage, which this proposal does not aim to defer — not on the web, at least.

CDA: Okay. We have less than 15 minutes for this item, so please be --

SYG: Can I interject? I was going to do a response before we move on to the next topic.

CDA: Please go ahead.

SYG: So, DE, the profile output in the slides shows a line called script emit, which I assume is lazy compilation, and I presume that was folded into top-level module evaluation time in the analysis. So I’m not exactly sure how the champions are thinking about the performance characteristics here. Obviously more time is saved if you are doing lazy compilation — you’re just parsing, saving the offsets, and compiling for the first time when you run it. The discussion around Wasm makes that less clear to me.
Like, what is the minimum amount of stuff that has to be deferred to make it worth it? If you’re expecting a module where you always pre-compile, and then you’re only deferring evaluation, that is smaller than the potential speedups currently shown in the profile.

NRO: Okay. My answer on Wasm was based on the existing spec for how Wasm is loaded and evaluated. But ideally engines would defer as much as can be synchronously deferred. An example of something that could happen: when you have a deferred import, you can still start compiling the module, generating bytecode in a background thread, and then once you actually trigger evaluation of the module, block and wait until the file is compiled and can be executed. Or it would also be possible to defer the full compilation until later and synchronously block a little bit more. It’s a matter of how much engines are comfortable with blocking at execution time.

SYG: Yeah, but that plays into -- the reason Wasm module compilation has this async requirement is — and I don’t personally necessarily agree with this — “don’t block the main thread”. So I don’t think it’s purely a per-engine choice. We probably want some coordination around this. You probably just don’t want really long pause times if you defer some Wasm modules.

DE: So, SYG, I think this is the same confusion that I was trying to address with Kevin’s point. Nobody’s proposing that compilation be changed in terms of how it’s done. The idea is that fetching and parsing remains an asynchronous operation that’s done, blockingly, before anything runs. This allows everything to run in parallel. As you know, it’s important for parsing to be able to run off the main thread as well, so requiring synchronicity there would be suboptimal.
But for JavaScript overall, we’ve seen — we don’t have super strong numbers on this, so maybe that’s why it wasn’t in the presentation — roughly a 50/50 split between the time saved due to fetching and lazy parsing, in environments which (unlike the web, as NRO mentioned) are able to avoid it, and the time saved on evaluation, which takes a significant amount of time. So in environments where you are parsing and compiling eagerly anyway — I don’t know whether this holds up, but lazy parsing makes sense in the Wasm context, and it’s just the (inaudible) in the compiling operation. Yeah.

DLM: Yeah, I’ll be brief. I suppose it’s not very surprising, given that Yulia was involved in the original proposal and there is Firefox in the slides, but this is something we would definitely be happy to see advance to Stage 2. In particular, we already support our own version of lazy module loading that’s used heavily in the front end of Firefox. I asked that team for feedback on this proposal and they were quite favorable. There are obviously other people using something similar, and I think it would be great to see this advance and be able to coordinate on tooling and things like that in the future. Thank you.

CDA: Thanks. We have a little under 10 minutes left. Next is ACE.

ACE: Yeah, so something else about our use at Bloomberg. We have an implementation of this — not using the syntax, but it is available in ESM — and the really amazing thing for us is that this feature already exists in other module systems, and teams already depend on it, e.g. when using CommonJS. One of the things that prevents them from moving to ESM is the lack of this feature, because dynamic import won’t work with their current code. As we’ve discussed, a codebase can be too big to go fully async and color the graph that way.
So it’s been, yeah, just great that we’re having this thing in ESM, allowing code to move to ESM. We’ve implemented this, but we’re only gradually rolling out this feature so we can keep aligning our implementation with this proposal, hoping that it becomes standard. And I don’t think that’s specific to Bloomberg. In the ecosystem, existing bundlers and runtimes already have features like this which allow you to synchronously import, and they all have slightly different semantics; having this standardized in one way will, I think, be a really big win.

CDA: All right. RBN is next.

RBN: Yeah, could you go back to the slide discussing top-level `await`, the two options. The concern that I have with the first option — the option of throwing — is that if you limit import defer to only modules that do not have top-level await, there is no guarantee that any code you write will continue to work when you do any type of package upgrade. Any third-party package you use could decide to start using top-level `await` in an ES module, and then all of your import defers start throwing. And while you would hopefully discover this during development, you could be writing a package that is used as a peer dependency alongside another package that gets installed, and anything in that dependency chain could cause an issue. I think throwing is not really viable. If the idea is just to do a best-effort optimization — to try not to load things until you absolutely need them — then I think eager evaluation of async modules, with best-effort `import defer`, is probably the only option that’s viable.

JHD: I have a couple thoughts here. One is that -- like, I understand why you have to use `import *`, because it’s super weird and magic to have accessing a variable have effects.
But `import *` is something that I’ve generally considered “gross”, for lack of a more precise word — sloppy and implicit — and it prevents a lot of static analysis and makes tree shaking more difficult, things like that. And some of the original design goals of ESM — or a lot of them — seem to have been “let’s make everything as static as possible”, so a lot of decisions were made that we might not have made if we had expected that we would eventually have dynamic import and deferred imports and so on and so forth. So it just kind of feels like it leaves us in an awkward position, and I’m not sure if anyone still shares those original design goals and wants things to be maximally static, or if we’ve just decided that’s not important anymore. Relatedly, there was an attempt to make a proposal for conditional static imports, and it kind of seems like this import deferral would be a way to do that: you do an import defer and then you just conditionally access the property or not. So, yeah, I just kind of wanted to bring this all up. This is not a straightforward, obvious win for me. The benefit being sought is valuable, but I’m not super convinced on the limitations, fitting it into the syntax and, you know, combining it with the mental model of ESM. I just wanted to bring that up.

NRO: So when it comes to static analysis: if you use the namespace object as `ns.property`, bundlers are already able to statically analyze that and tree shake it. The problem is when code starts passing the namespace object around, or when you do computed properties and other more dynamic things. This proposal doesn’t encourage doing those more dynamic things.
If what you need is just the imports, then all your usages of the namespace object will be simple property accesses. And it’s similar to dynamic import — yes, it’s dynamic, but in many cases it’s still amenable to static analysis, because in many cases we just pass a string to dynamic import.

KG: Just briefly. JHD, I agree this does make things a little more dynamic, but to me the most important property is that exports are not dynamic, and that property continues to hold. As long as you don’t use computed properties to access things on the namespace object, it’s not less static in an important way, as far as I’m concerned.

JHD: Yeah, I mean, I agree: if you’re only using dotted properties and you’re not passing around the namespace object, then the dynamism is not really an issue.

CDA: All right, finally JWK.

JWK: In the webpack implementation, we only generate the namespace object if the namespace object is used in an unanalyzable way, for example with computed property access. So, yes, this doesn’t make static analysis harder.

JWK: Also, I want to share that we are already using the webpack implementation of this in our project and found the result is very good. We can easily defer modules that have heavy initialization costs. We also find the namespace restriction a little bit awkward to use, but I’m okay with having this restriction for now.

NRO: Okay. Thank you. So if there is nothing else, I would like to ask if there are any objections to Stage 2.

CDA: Do we have support for Stage 2? We have a +1 for Stage 2 from Mozilla. JWK supports Stage 2, and so do ACE and DE from Bloomberg.

KG: Support, with a caveat. A lot of times we take things going to Stage 2 as essentially promising to do them. I want to be explicit that that is not what’s happening here.
There are still genuine, very real questions about viability around top-level await and the Wasm story and so on, which will need to come to a satisfying conclusion before this advances to Stage 3, and it is not in my mind certain that we will be able to come to a satisfying conclusion. So it is not necessarily the case that this will ever be able to advance further. I’m hopeful that it will, but there are remaining significant questions.

CDA: Okay. We have -- hang on, I’m slightly behind in the queue. We have a +1 for Stage 2 from Chip, and then BSH also has concerns. We are past time, but JWK, you have a clarifying question?

JWK: Yeah. A lot of the discussion has been about top-level await and WebAssembly, and I want to clarify that we don’t have experience with how this proposal interacts with top-level `await` or WebAssembly. Actually, we banned top-level `await` in the current implementation, because the semantics in this slide weren’t there yet while I was implementing it.

CDA: All right. Bradford, did you want to speak? We have a plus one from Bradford, but with concerns.

BSH: We’re over time, so I suppose -- I think some of my concerns at least are the same as Kevin’s.

CDA: Okay, thank you. Shu, finally.

SYG: Yeah, I’m not going to block Stage 2. I want to articulate — I’ve been trying in the Matrix chat — what I want to explore during Stage 2. Which is: I would like to better understand whether there are performance footguns here because of the top-level `await` and Wasm story. The performance story around deferring stuff today is to make it async; if the new performance story to defer things is to in fact *not* make it async, then just as a first-order thing to say to developers, that seems bad, and I want to better understand whether there are concerns there.

CDA: All right. Thank you, everyone. Nicolo, you have Stage 2.
+ +NRO: Thank you, everyone. + +### Summary + +The proposal now uses the `import defer * as ns from "mod"` syntax to import modules without evaluating them, and the evaluation will happen synchronously when accessing properties on the namespace object. To ensure synchronous evaluation while still maintaining compatibility with top-level await, modules containing top-level await are still eagerly evaluated (together with their dependencies). + +The main points of the discussion were: + +- How does this interact with WASM modules? The current WASM-ESM integration marks them as async. However, it may be possible to make their evaluation/instantiation synchronous and thus allow them to be deferred. +- So far the performance recommendation has been to "make things async" using dynamic import(). With this proposal, using `await` at the top-level of a module would prevent it from being deferred, going against that recommendation. How can we reconcile this? +- Forcing the namespace imports syntax may make it less statically analyzable for tools. However, it's likely that current heuristics (e.g., “it works as long as all accesses are . and not []”) are already good enough for this type of static analysis. Forcing the namespace syntax is also a compromise for ergonomics/orthogonality. + +### Conclusion + +Deferred imports reached Stage 2. Before Stage 3, the champions need to investigate how this interacts with WebAssembly modules and how the "use dynamic import (possibly with top-level await) to optimize performance" and "use `import defer` (and avoid top-level await) to optimize performance" stories fit together. 
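The deferred-evaluation behavior summarized above can be approximated in userland today. The sketch below is not the actual spec mechanics — the real proposal uses `import defer * as ns from "mod"` syntax and host module records — but a hypothetical `defer()` helper built on a Proxy illustrates the key observable property: the module's top-level code runs synchronously on the *first property access*, not at import time.

```javascript
// Userland approximation (hypothetical helper, not part of the proposal):
// `defer(evaluateModule)` returns a namespace-like object whose first
// property access synchronously runs the module's top-level code.
function defer(evaluateModule) {
  let exportsCache = null;
  return new Proxy({}, {
    get(_target, key) {
      if (exportsCache === null) {
        // Synchronous evaluation, triggered lazily by property access.
        exportsCache = evaluateModule();
      }
      return exportsCache[key];
    },
  });
}

// Stand-in for a module with an observable top-level side effect.
let evaluated = false;
const ns = defer(() => {
  evaluated = true; // "top-level" side effect of the deferred module
  return { greet: (name) => `hello, ${name}` };
});

console.log(evaluated);        // false — importing did not evaluate the module
console.log(ns.greet("TC39")); // "hello, TC39" — first access evaluated it
console.log(evaluated);        // true
```

A real implementation would also have to handle the top-level-await cases discussed above, which this synchronous sketch cannot express — that is exactly why modules containing top-level await must be evaluated eagerly.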
## Iterator Helpers: small optimisation to avoid String wrapper objects

Presenter: Michael Ficarra (MF)

- [proposal](https://github.com/tc39/proposal-iterator-helpers)
- [PR](https://github.com/tc39/proposal-iterator-helpers/pull/281)
- [slides](https://docs.google.com/presentation/d/1TzXjuzYhp-mNx_tHfl3-_3t9UFWRpkx26aYUtdLrb7A)

MF: Iterator helpers is at Stage 3. We had some implementation feedback from ABL and we’d like to make a change in response to it. As background for this change, know that iterator helpers adds two methods that accept not just iterators, like most of the helpers, but also iterables. These are Iterator.from and Iterator.prototype.flatMap. And Iterator.from and Iterator.prototype.flatMap differ slightly in what they do with one iterable in particular: strings.

MF: Because of that similarity, though, we chose to specify them using a single AO that they share, and the way we did that is for Iterator.from to take any strings it receives and make them objects, because that’s the difference between Iterator.from and flatMap. We didn’t realize at the time that, because of the way we specified it, this causes an observable String object to be created, which you can observe if you do something very strange. So ABL has given implementation feedback that we should instead specify it differently, in a way that still has the same effects but does not create the String object. That is what I have done in this pull request; if you want to see it, it’s #281 on the iterator helpers proposal repo. And that’s my full presentation. It’s a very small change that changes observability only for very obscure code that’s basically looking for the String object, and just changes a String object to a string primitive.

CDA: All right. We have a plus one from KG. No need to speak. We also have a plus one from DM, also no need to speak. Another plus one from DE. No need to speak.
Any other comments or support for this change? Okay. Thank you very much. We have consensus on the optimization.

### Summary and Conclusion

An update and small improvement was proposed by MF: Iterator.from should no longer create an observable String wrapper object when given a string.

This was supported by the meeting.

Consensus to not ToObject strings in Iterator.from.

## Integer and Modulus Math

Presenter: Patrick Soquet (PST)

- [proposal](https://github.com/tc39/proposal-integer-and-modulus-math)
- [slides](https://drive.google.com/file/d/1_Fnqq8q47uHm7Um9dQD0Ti8zB5R0d0Hp/)

PST: Thank you for your patience. So this proposal adds a few static methods to the `Math` object. There are two sides to it: true modulus operations, and a few new integer math operations in the spirit of the existing `Math.imul` operation. I think Peter presented this to the committee two or three years ago. So this is just to update the proposal based on the feedback we received and the experience we have using the feature in XS on microcontrollers. And if it makes sense to the committee, we would like to prepare for Stage 2 later this year, writing spec text and that kind of thing.

PST: So why do we want to propose this? The first reason is completeness. There are no true modulus operations provided by JavaScript.
The modulo operator is in fact a remainder operation. And the other integer operations are common enough — everybody’s doing something like that — and are not directly expressible currently. The second reason is performance: integer math can be faster, and that’s especially the case for us on embedded hardware without a floating point unit. The third reason is ergonomics: using floating point operations is sometimes clumsy when integer operations are intended, and non-integer values can lead to unexpected results. The classical case is indexing into an array by multiplying the array length with Math.random.

PST: So let’s go into details. `Math.mod(x, y)` would return the true IEEE 754 modulus. And here are a few more integer operations: `Math.idiv`, which would do an integer 32-bit division; `Math.imuldiv`, which would do `x` times `y` divided by `z` with a wider intermediate; `imod`, which would do the same thing as `mod` but with Int32; and `irem`, which would do the same thing as the modulo operator but with Int32. All these operations follow the model of the existing Math.imul, meaning that the input arguments are converted to integer values using ToInt32 and the results fit into Int32. Part of the logic is that `Math.imuldiv` with the appropriate arguments should return the same thing as `Math.imul`, and likewise the same as `Math.idiv`, so it remains consistent.

PST: There’s a special case. I will not try to pronounce that number, but you know which one it is: the minimum Int32 value divided by minus 1 cannot be represented as an Int32, because it requires 33 bits. That impacts several of the math operations that we propose — especially idiv, of course, but also imuldiv and imod. So these are the results we propose. They seem to stand up to our usage of the feature so far, but of course, it’s open to discussion.
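The distinction PST draws between a true modulus and JavaScript's `%` remainder operator can be shown in today's JavaScript. The `mod` and `idiv` functions below are userland sketches, not the proposed `Math.mod`/`Math.idiv`:

```javascript
// `%` is a remainder: its result takes the sign of the dividend.
// A true modulus takes the sign of the divisor, so for y > 0 it
// always lands in [0, y). A common userland sketch:
function mod(x, y) {
  return ((x % y) + y) % y;
}

console.log(-5 % 3);     // -2  (remainder: sign of the dividend)
console.log(mod(-5, 3)); //  1  (true modulus: sign of the divisor)

// The Int32 variants follow the Math.imul precedent: arguments go
// through ToInt32 and the result fits in Int32. A sketch of idiv
// using the `|0` truncation trick:
function idiv(x, y) {
  return ((x | 0) / (y | 0)) | 0; // truncating 32-bit division
}
console.log(idiv(7, 2)); // 3
```

This is also the baseline SYG's later `|0` question measures the proposal against: anything expressible this way only wins if the dedicated method is faster or clearer.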
PST: The last one is irandom, which returns an Int32 value. There are three variations depending on the number of arguments. With zero arguments, it’s from 0 to that number; with one argument, it’s between 0 and the passed value minus 1, inclusive; and with x and y, it’s between x and y - 1, inclusive. The implementation matches the behavior of the `getRandomInt` example on MDN, and of course, like Math.random, it’s not intended to be cryptographically secure.

PST: There are alternatives; one of them was suggested by Tab Atkins (TAB). Mostly, the idea is that instead of being limited to Int32, x and y could be anywhere between the minimum and maximum safe integers. It’s more general, but it differs from the Int32 precedent set by Math.imul, and of course it increases implementation complexity a bit. There has also been discussion about this proposal putting together all kinds of things that are unrelated. We can, of course, divide it into several parts: modulus could be one part, integer math could be another part, irandom could be another part, and we could put the different functions into the different parts. We don’t have a strong opinion about that. I suppose the point for the committee is that we are not talking about months of work to implement this; it’s not a big proposal. That’s why we tend to pack all of it into one proposal. And that’s it.

PST: So the question for you is: is the committee still interested in this? And should we proceed with preparing the proposal for Stage 2? And in which form — split into several proposals, something removed, and so on? Yeah, up to you.

WH: Yeah, okay, it does seem like three separate proposals in one. I’m curious: for fixed precision, the unsigned versions tend to be more common and useful than the same-precision signed ones.
So why did you omit the unsigned division and remainder?

PST: The question is whether there would also be unsigned operations?

WH: Yes. For the 32-bit ones, the unsigned ones tend to be more useful.

PST: You’re saying it would be more useful if they were unsigned instead of signed?

WH: Well, both have use cases. For `imul` it does not matter — signed and unsigned are exactly the same modulo 2^32. For division, it does matter.

PST: I’ll take notes and send them to PHE, and he will reply to you, because it’s really not my proposal. But thank you for the feedback.

DLM: So, yeah, we discussed this a little bit internally, and we’re more interested in the items that seem to be new capabilities, so the true modulus and random seem interesting to us. We’re not as sure about the motivation for the other ones, so some evidence of, say, performance gains would help convince us. But it might also be beneficial to split this into three proposals, like you’re talking about.

PST: Okay. Thank you.

SYG: So I would prefer it to be divided into three proposals, along the lines of: modulus, which intuitively seems useful; irandom, which to me intuitively seems useful; and — I agree with DLM that I’m less convinced on the arithmetic methods — those should be explored in a separate proposal.

PST: Okay. And when people say to keep the modulus, is it both or just `Math.mod`? I mean, of the three, is it the first one or the second one?

SYG: What is `irem`? That’s integer remainder?

PST: Yeah, that’s like the modulo operator, but for integers.

SYG: Is there utility for that? I’m convinced by mod and imod.

SFC: Just a comment that Euclidean division and remainder is another operation that’s found in certain standard libraries, including Rust (https://doc.rust-lang.org/std/primitive.i64.html#method.div_euclid). I’ve been using it for calendar operations.
That would be useful to include if you’re adding other convenience operations.

SYG: Yeah, I’ll skip the utility thing we already talked about. It was not clear to me, when reading this, which of the proposed integer arithmetic methods cannot already be exactly expressed with the `|0` trick from asm.js, where you do the floating-point operation and `|0` everything. And if some of them are already…

PST: I think for the other operations it’s mostly about performance on microcontrollers without a floating point unit — it allows the code path to completely avoid using a floating point library for all those operations.

SYG: But that should be possible with the `|0` trick as well. That was the point of asm.js: the engine could compile it down to integer arithmetic if you wrote --

PST: I will put that to PHE. I agree with you myself, so I will follow up.

SYG: And the follow-up to that is that of course `|0` gives a signed Int32, and to echo WH’s point, that suggests that perhaps unsigned operations are in fact more useful, because you cannot express them today.

PST: Okay. Thank you.

WH: In response to your question about how to split the three proposals: the first variant on the slide is what I was suggesting. `Math.mod` is independently useful, so that would be one proposal. The second proposal would be all the things which are specifically limited to Int32 or Uint32. The third proposal would be the random number generators. And I wouldn’t limit the random number generators to Int32/Uint32.

PST: Okay.

SFC: Yeah, I was just wondering why Int32 is being proposed for all these functions — would it be useful to have 64-bit versions of these? Why 32?

PST: Yeah. It would be.

SFC: Okay.

EAO: Mostly this is an observation that up until a week ago, the only issue in the proposal repo was a request to “Elaborate on the use cases for integer math operations” from 2020.
And the thread of that issue doesn’t actually answer the question. So given that the slides came in late, we didn’t even really know whether this would be an announcement withdrawing the proposal or what’s happening here. I would ask that if this is proceeding as one or three proposals, these provide a much stronger justification: why do we need to do this thing, what are the questions it’s answering and the issues that are being solved here. Right now the motivation isn’t really there.

PST: Understood. The idea was to get feedback, and we got it. So thank you.

WH: I don’t understand the previous question about Int64, since you cannot represent 64-bit integers exactly as Numbers.

SFC: Yeah -- other people sort of alluded to this, but it would be great to be more clear about which of these operations are being added here because they are actually more efficient to perform. Is it actually more efficient than if you did the same operation in userland using the existing functionality that you can get from IEEE arithmetic, which we already support? Maybe some of these operations are actually faster if the engine can, for example, take a Number, make it into an I32, do some operation on it, and convert it back to give back to the user; maybe that’s faster than if you kept it in floating point and did everything in floating point. But it’s not clear whether that’s the case, and I think that would be, you know, better motivation, especially for certain operations, if you can show that this operation is two or three times faster than if you tried to do the same operation the current way. I think that would be very helpful context to have.

PFC: I’d like, if you proceed with the integer math part of the proposal, to explore during one of the early stages whether it’s possible to handle the special case of `-(2**31)/-1` in a different way. Because I’m not a big fan of having a function silently return a result that’s not the arithmetically correct one.
I think that makes sense for integer division in the CPU architectures, like you read in the readme, but in my experience JavaScript programmers generally don’t think at that level about integers. So I’d like us to explore if it could throw or do something else or return +2**31 as a regular number.

WH: The answer to PFC’s question is that it does return the correct answer in that case, but it returns the answer `-(2**31)/-1 modulo 2**32`. Keep in mind that all of these things already do modulo `2**32` on their arguments.

PFC: Okay.

CDA: There is nothing else in the queue.

### Summary

This was an update about `Math.imod` and other integer operations. The idea was to get feedback, and TC39 has received useful input on how to proceed.

### Conclusion

TC39 will follow the advice received.

## Promise.withResolvers

Presenter: Peter Klecha (PKA)

- [proposal](https://github.com/tc39/proposal-promise-with-resolvers)
- [slides](https://docs.google.com/presentation/d/1KFShqHVFhVBaqZ3anheUGOwtVDrPWCVeFvmaUpwk3AQ/)

PKA: Okay, so hello. I’m Peter from Bloomberg and I’m presenting `Promise.withResolvers` for Stage 3. Yeah, so the motivation for this proposal is that, you know, we have this promise constructor which works well for many use cases. We pass it a callback, which takes resolve and reject methods as arguments, and then in the body of the callback, we specify when and if these methods should be called; the constructor then returns the promise in question.

PKA: But sometimes developers want to create a promise and get a handle on it before deciding how or when to call its resolvers. Doing this requires the bit of boilerplate that we have in the first line, where we create some outer variables `resolve` and `reject` and then inside the promise constructor assign to those, so that we can get our handle and then proceed to call these in whatever context we want.
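The boilerplate PKA describes, next to a minimal userland sketch of the proposed static method (the `promiseWithResolvers` name here is only for illustration; the real `Promise.withResolvers` also respects subclassing via its receiver):

```javascript
// Today: hoist resolve/reject out of the executor by hand.
let resolve, reject;
const promise = new Promise((res, rej) => { resolve = res; reject = rej; });

// Sketch of the object shape Promise.withResolvers() would return:
function promiseWithResolvers() {
  let resolve, reject;
  const promise = new Promise((res, rej) => { resolve = res; reject = rej; });
  return { promise, resolve, reject };
}
```

The proposed method simply packages the three values from the first pattern into one plain object.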
PKA: This is a wheel that gets reinvented a lot. We found a bunch of examples where this is either a utility function or where it’s repeated again and again inline. The proposal is a simple one. It just says let’s add a static method to the Promise class which does this for us, returning a plain object with the promise as well as the resolve and reject functions as properties.

PKA: The one remaining open issue that was discussed last time was how subclassing and binding behavior should work. What’s been resolved is that if we have a subclass, like, for example, Vow, and then we call `Vow.withResolvers`, the promise property on the object that is returned by that method is in fact an instance of `Vow` and not `Promise`. A related point is that, as with other Promise statics, if there isn’t a receiver when `withResolvers` is called, then we throw a TypeError. That was the behavior that was suggested at the last meeting. I think that received general support. It’s the way that other Promise statics work, so nothing has suggested that we should move off of that, so we’re affirming that behavior.

PKA: I should also mention that, you know, I settled on the name “withResolvers”. There was some discussion, and as I acknowledged in the issue on GitHub and in previous meetings, it’s maybe not the best possible name -- or rather I should say there are some issues with it, it is a bit verbose -- but I never heard a better alternative, so we’re proceeding with `withResolvers`. We have a spec. There it is. We have a polyfill. There it is. We have tests. Here they are, or rather here’s the PR for them. So, yeah, I’d like to open the floor for questions and comments at this point.

CDA: There’s nothing in the queue so far. Now, Shu?

SYG: We discussed this internally. A question that came up was, you know, this -- some of this has been covered in previous meetings as well.
So in the original design of promises, it was litigated that we would not have this form, and one of the reasons was uncaught rejections -- sorry, uncaught exceptions: you want to automatically reject the thing, which this would not do. So I agree with the fact that you will need this kind of capability to be able to exfiltrate the resolvers somehow, and that use case is not going away and it is not an illegitimate use case by any means. But I’m wondering, what is the downside in not having this as a standardized method? Given that it is fairly easily expressible in userland, are we saying as a committee that we no longer buy the original motivation and we are re-litigating this? Are we no longer worried about the footgun, or are we just saying that, like, just because it’s used everywhere, we should add things that are used everywhere?

PKA: Well, I guess in part, yes, that there’s proven to be demand for this, and not adding the method just means we’re asking developers to continue to write this template, and to do so in a way which, you know, also experiences the uncaught exception issue. So in part, yes, this is a response to a demand that appears to be there, which we didn’t know about at the time of the original discussion. But also I think that in general people are a little more comfortable with promises these days; it wasn’t clear maybe how promises would be adopted or used in the ecosystem and what things would look like at this point, so, yeah, I suppose this is a --

DE: Yeah, I think there are two pieces to this. One is: yes, generally we should add things to the standard library that people have to implement over and over again. I think that should be a shared role of the committee, when it’s things at the JavaScript level. The other part is, how does this make sense given the previous design, which specifically avoided this, and I think it’s very legitimate to bring this up.
I think we should consider this a decision based on experience that we are, you know, not making that particular design tradeoff that was previously made. When promises were first created, there was the expectation and I hope that people would feel comfortable using the promise constructor. This has proven to not be true -- well, somewhat. So use of this idiom -- well, at the very least, in the course of this proposal, when I’ve explained to people the original motivation and the fact that the exceptions are caught and turned into a rejection when it’s within the promise constructor and that’s why it’s excluded, most people have been surprised. And kind of baffled by the logic for exclusion in the first place. I think this has not received a large amount of kind of community buy-in, and that’s why it’s so common to bypass the whole mechanism in first place and use this particular idiom, so the path of excluding it hasn’t quite led to the thing that we want. The biggest risk that this whole thing was trying to avoid was that people would try to use functions that return promises, make them sometimes throw an exception eagerly and sometimes do a rejection. Where the hope was that everyone would use a rejection. I think the use of `async await` now, which wasn’t present when we made this original decision, not to include defer, has helped many people switch into this standard pattern of it’s just promise rejections. And, yeah, I’m kind of optimistic that the ecosystem will do the right thing with this, given that they generally already have been doing so. + +CDA: All right, Kevin is next. + +KG: On the promise rejection thing, basically what DE said, the thing where you catch a wrapped exception makes sense if you are returning a promise. But it only makes sense if you are only returning a promise. If you are doing anything else, you don’t want to catch exceptions in the constructor. 
Failing to schedule a task should be a synchronous error rather than an error for the person trying to consume the result of the task. So the promise constructor is fine for what it is, but this new function is for cases where you’re not just returning the promise right away, and so it makes sense that in these cases, you actively don’t want to wrap up the exception.

CDA: Nicolo is next.

SYG: Sorry, I have to run, but can I respond to KG real quick? It’s not -- so it’s not impossible to express the use case, right? It’s about standardizing the convenience. You are certainly able to get the resolvers out today. I agree the use cases are going to continue to exist. There are sometimes cases where you don’t want to catch the exception and turn it into a rejection. But we’re not talking about new expressivity. We’re talking about enshrining the convenience. And I’m somewhat convinced by what DE said, which is -- if I read a little bit into what DE said -- that `async`/`await` is the actual thing that helped people with the rejection footgun issue. With `async`/`await` now in the language, the promise constructor is probably an escape hatch anyway, and it’s fine to no longer really try to build the right cowpaths into the API. Is that a fair characterization, DE?

DE: I think so. I didn’t fully catch that.

SYG: Okay. All right, I think that’s fine with me. Unfortunately, I have to run.

NRO: Yes. So, like, when we first introduced promises -- when I started using promises -- I often needed the promise constructor because I was working with a lot of callback-based APIs and I needed to convert them to promises. And having this constructor was handy, because my callback code was pretty much self-contained anyway and it was easy to just move it inside the promise constructor function.
Well, now, like, almost all the APIs are already promise-based, and the reason I now have to manually create promises is not because I want to convert something that’s self-contained, but because my promise logic needs to be divided into different parts of my code. And so, like, while that was a minority case in the past, now the ecosystem has evolved. For me personally, it’s the only use case for creating promises that I have now.

EAO: I’d like to note appreciation for the long research into the history of the possible names for this, though it does look like at least “unsmoosh” was never considered. The resolution of the discussions on GitHub ending up with “withResolvers” sounds like the right resolution, and we like this thing.

CDA: Thanks. I would still prefer defer, which would be not compatible because some ES6 polyfills will aggressively delete `defer`, but `deferred` I think would still be possible. But I think it’s fine as is, and I would still support “embiggen” as well.

JHD: (from queue) “+1 for Stage 3 with the current name (or “deferred”) and current subclassing semantics”

CM: One thing I like about this proposal is that it feels to me like a better fit to the user’s mental model of promises. When I first started using promises it was in the 1990s, way before JavaScript had them, and I’ve been using them ever since then for various things, and when I first encountered the JavaScript Promise API, my first reaction was like, are you people all on drugs or something? And I very much appreciate this proposal: it presents a better framework for explaining the API to people.
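The exception-handling difference that SYG, KG and DE discuss can be seen in a small illustration (assumed semantics, matching the current Promise constructor behavior):

```javascript
// A throw inside the executor is caught and becomes a rejection:
const p1 = new Promise(() => { throw new Error("boom"); });
p1.catch(err => { /* observer sees a rejection, not a sync throw */ });

// Once the resolvers are extracted, later code that throws raises an
// ordinary synchronous exception and leaves the promise pending:
let resolve;
const p2 = new Promise(res => { resolve = res; });
let sawSyncThrow = false;
try {
  throw new Error("failed to schedule the task"); // e.g. KG's scenario
} catch {
  sawSyncThrow = true; // the caller sees it directly; p2 stays pending
}
```

This is the trade-off the original design litigated: the executor wraps exceptions into rejections, while extracted resolvers deliberately do not.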
CDA: We have a plus one on the queue from Mozilla, plus 1 from IBM as well. I believe we had a ++1 from JHD. Any other explicit support? Christian, plus 1 from Zalari. And I think we can note -- sorry, Dan, were you saying something?

DE: I think we could say that this is conditional consensus, that we would confirm that SYG is okay with consensus before the end of the meeting.

CDA: Sure. I think we can proceed -- yeah, I think we can proceed with Stage 3 on the condition that SYG would not block. We have also a plus 1 from MAH from Agoric. Any other comments? I’m hearing some mumblings from the room. +1 from CHU. Okay.

RPR: No, I don’t think there’s more from the room.

CDA: Okay. Then I believe, with the caveat of SYG notwithstanding, you have Stage 3 for `Promise.withResolvers`.

PKA: Thank you.

### Summary

`Promise.withResolvers` is a new proposed static method on the Promise class which produces a plain object with a `promise` key, which maps to a Promise, and `resolve` and `reject` keys which map to the corresponding resolving functions for the promise. Since this proposal advanced to stage 2, there were two developments: 1) the name `withResolvers` was confirmed (the champion acknowledged it may be an imperfect name but no superior alternative arose) and 2) the subclassing/binding behavior was affirmed as matching other Promise statics: when the receiver is a subclass of Promise, the method will produce instances of the subclass, and if there is no receiver `withResolvers` will throw.

There was general agreement that the ecosystem has shown the motivation for this API and that `withResolvers` is the best name to move forwards with. One question that was posed was what has changed since the last time this proposal was discussed (during ES6).
The answer given is that 1) subsequent developments in JS usage since ES6 have shown the original design to be mistaken in its exception handling behavior, and 2) developer desire for this API has remained strong despite the emergence of more Promise-first APIs.

### Conclusion

The committee approved this proposal for Stage 3.

## 2024 meeting planning

??: All right. I don’t think we have enough time left to sneak in another item. Unless we have a nice little 10 minute something or other, which I’m not convinced we do.

??: I could do a quick admin point, which is that it’s almost time to start planning for next year, and the venues we might go for for in-person TC39 meetings over 2024. So if you would like to volunteer to host a meeting, we will be looking for venues in the continental US, European time zone and APAC. That’s how we’ve divided things so far, so one in each. So if you would like to -- if you have facilities or the ability to host, please come and speak to me at any time. I will also be posting this on the reflector as a formal call for hosts.

??: Do you have a draft of which meetings you want to be in different geographies?

??: Not really. I don’t think there are any constraints at the moment other than we plan -- we continue to plan the same schedule as this year, which is a total of six meetings, ideally once every other month.

??: Odd months.

??: And I think it’s wise to keep the in-person meetings four months apart, if possible. But this is all down to the constraints and basically whether we get people offering, because when people offer to host, sometimes they can’t offer just any month of the year. It’s down to availability of particular rooms and so on. But, yeah, any more questions like that, you can speak to me or we can also discuss in the reflector.

??: So one of the things we were mumbling about here is that not everybody’s name appears to be on the notes for today.
So if you haven’t put your name in the notes at the header, please put your name, your abbreviation and your organization.

??: Just one more item on the upcoming meetings. The one that is, I guess, probably landing in February would be the centennial, number 100. So it would be nice to get maybe that one to be in person, if possible.

??: Yes, we should do something special for the 100th, so if you have the best venue in the world, please volunteer it.

??: It would naturally fall in January, but there’s a little bit of leeway.

??: As I said, the main thing that limits us in hosting is who volunteers to host. That is something that, you know, not everyone has the ability to do. It’s relatively rare. So that’s the reason for asking everyone here today.

??: All right. I think we’re -- we have five minutes left. I don’t think we can sneak in DataView methods within five minutes. Put JHD on the spot. Probably too short even for that one.

??: Yeah, probably too short.

??: Okay. All right, well, we will give you a few extra minutes back, and we will see you all tomorrow.

### Summary

TC39 chairs have started the usual meeting planning exercise for the following year, 2024. It was noted that for in-person meetings a good plan is to hold them four months apart. It was noted that the 100th TC39 meeting is upcoming in early 2024 (January, February); therefore holding an in-person meeting and doing something special would be nice. In general: venue volunteers and hosts are welcome. Please express possible interest to the TC39 Chairs.
diff --git a/meetings/2023-07/july-12.md b/meetings/2023-07/july-12.md
new file mode 100644
index 00000000..24f54013
--- /dev/null
+++ b/meetings/2023-07/july-12.md
@@ -0,0 +1,1479 @@

# 12 July, 2023 Meeting Notes

-----

**Remote and in person attendees:**

| Name | Abbreviation | Organization |
| ---------------------- | ------------ | ----------------- |
| Jesse Alama | JMN | Igalia |
| Bradford C. Smith | BSH | Google |
| Frank Yung-Fong Tang | FYT | Google |
| Waldemar Horwat | WH | Google |
| Michael Saboff | MLS | Apple |
| Samina Husain | SHN | ECMA |
| Istvan Sebestyen | IS | ECMA |
| Ashley Claymore | ACE | Bloomberg |
| Jonathan Kuperman | JKP | Bloomberg |
| Daniel Ehrenberg | DE | Bloomberg |
| Rob Palmer | RPR | Bloomberg |
| Daniel Minor | DLM | Mozilla |
| Kevin Gibbons | KG | F5 |
| Ben Allen | BAN | Igalia |
| Chip Morningstar | CM | Agoric |
| Nicolò Ribaudo | NRO | Igalia |
| Ujjwal Sharma | USA | Igalia |
| Philip Chimento | PFC | Igalia |
| Martin Alvarez-Espinar | MAE | Huawei |
| Luca Casonato | LCA | Deno |
| Peter Klecha | PKA | Bloomberg |
| Michael Ficarra | MF | F5 |
| Linus Groh | LGH | Invited Expert |
| Tom Kopp | TKP | Zalari |
| Shane Carr | SFC | Google |
| Eemeli Aro | EAO | Mozilla |
| Christian Ulbrich | CHU | Zalari |
| Sergey Rubanov | SRV | Invited Expert |
| Chris de Almeida | CDA | IBM |
| Ron Buckton | RBN | Microsoft |
| Guy Bedford | GB | OpenJS Foundation |
| Justin Grant | JGT | Invited Expert |
| Mikhail Barash | MBH | Univ. of Bergen |

## Conversation on note summaries

MLS: So specifically, I want to talk about summaries. This was discussed at the beginning of the meeting yesterday. In TC39 we traditionally take near-verbatim transcripts. We did this manually before we had transcription. But it provides all the details of the discussion.
And each of us has the opportunity to go back and make changes to those. I often do that to make sure that when I say um, ah or whatever, I clarify, or if I wanted to say it differently, that’s what I do. A summary is a little bit different. Most of the other TCs include summaries for each of the topics discussed. Start with what was discussed: we discussed this proposal for Stage 3, slides were presented, and so on. A link to the slides would be good. But the summary needs to include the salient points, the things where there was in-depth discussion back and forth, and if during those salient points of the discussion one method or one approach was chosen over another, that should be noted. And there should be a conclusion. What was the conclusion reached during the discussion: “We agreed to move this to Stage 3. I will take the groupBy, pending review in this case, and that includes the next extension.” So any next steps should be included there as well. It doesn’t have to say who said what. It’s better if it doesn’t. But what was discussed, where was there tension, how was that resolved, and what was the end result. Any questions on coming up with a summary?

DE: How long do you think the summary should be?

MLS: It depends, maybe a few paragraphs at most. For really involved topics with disagreement or some discussion back and forth on several unrelated things, they would need, you know, some statements. It shouldn’t be as short as, you know: “this was discussed, we agree to go to Stage 3.” You know, what issues were discussed or are important for somebody to consider when they review that summary.

DE: Are there some topics that do not need summaries?

MLS: Prior to this meeting, we reviewed the summaries in the minutes, and we agreed that they were correct. You can be down to one sentence. Everything should have at least a single sentence, even ECMA-404: “404 is stable, no changes since the last meeting. Done.”
So thanks.

MLS: Any other questions? Yeah?

ACE: I love summaries. They are great. The thing that is hard is that there’s a tension between capturing them live and holding each other to account -- but then people might think that this holds up the meeting -- versus putting the onus onto volunteers after the meeting chasing people up, which takes a lot of time, trying to message people: “have you done this yet?” And . . . it would be good to decide which one we are going to do and try to stick to it -- whether we are going to do it in the meeting or not -- and have agreement on this. Right now, there’s a tension and I don’t know how to solve it.

MLS: Daniel has asked, “have we captured that?”, and I think that’s a good time to do it. If we do it later, I think we are not going to remember what was discussed, or we miss a point or two, and we don’t want that, obviously. It takes us 30 seconds or a minute to write it down, and we have, you know, our note takers, which are assistants, and if at that time the presenter and the people that made comments go into the notes and write down what needs to be put there for a summary, then we say, okay, we are done, we can move on to the next topic. Yes, we can add one or two minutes to each topic, but it’s not onerous. It’s far better than when we get off the call -- if we were remote only -- and now we have to track people down. What did you mean? Did we capture it? It’s unwieldy at that point.

DE: Should the summary include the key points of the presentation? Or only the key points of the subsequent discussion?

SHN: Yes. First, thank you for doing this. I mean, it’s not easy. You’re all doing a lot of work and we are asking you to do something else. Maybe one thought is: you know what you are going to present before you come here, and you may already have a little frame of some bullets of what you are going to present; fill in the results of the conversation, and cut and paste it in. So it may save a bit of time and make it more effective during the meeting.
Could I project something on the screen, please, if you don’t mind. Yesterday, I reviewed all of the notes. It was excellent reading. 100 pages. The minutes are always good. The summaries, the ones that are there, are always good. I have put it there. I haven’t put names. We haven’t got summaries for some. Some of the summaries are short, as Michael said, one-liners about the document that point to the slides. Perhaps that . . . there are other discussions that have taken place, and I am going to scroll down. Okay. Nothing for the Code of Conduct and the public calendar. Maybe a few words to add there. Perhaps for some of the other discussions, there was detailed discussion and maybe we can write a few bullets. Here were the key points discussed, and it would be helpful for summarizing the objective of what you’re discussing: a list of points and the conclusion and next steps, similar to what Michael was saying. If you want to do it in bullet form, it’s okay. It doesn’t have to be paragraphs. It would be very helpful, so that is my input and example. I did have a chance to go through the notes. Some need more work.

MLS: For normative things: there were pull requests, it was discussed, and we approved it.

DE: So sorry. To understand, should the summary include parts of the presentation? You were talking about writing a summary of the presentation beforehand and copying and pasting it in. I think that’s a good idea. But also, we have heard people say specifically that the summary should not include the presentation, because that would be redundant.

SHN: I don’t mean the entire presentation, but if you present on a subject, you may already write the subject to be discussed, the key points to review, and then there will be a discussion. You know what it’s going to be. I have a frame to start, and it is easier to do. Choose a method that is most convenient.

DE: That sounds good to me. It should include a short summary of the presentation, and the discussion .
. .

MLS: Not the whole presentation, but the important things discussed.

DE: For the people who presented on things that don’t have summaries, would you be up for going to the notes of yesterday and writing those summaries?

SHN: I just, again -- you need to decide whether you want to do it today, during the meeting, or do it off-line and we will review it tomorrow. I leave that to you, based on your agenda and the discussions, and what would make most sense so we don’t lose track of what is going on.

DE: Yeah. Currently, maybe half the summaries are written by the same couple of people who are filling them in later, rather than the champions. The idea is to move it to the people who presented to write the summary.

RPR: Yes. I think that’s always been the intent: for the champions -- the people involved -- to have normally verbally dictated a summary. It’s not the definition of the task to be done, but the incitement and encouragement of those people to then fill it out afterwards, and that’s something that we’ve struggled to achieve in the past.

SFC: On the surface this sounds great. But this sort of synthesis of discussion down to like 3 bullet points is going to lose details. And I fear that it favors the loudest voice in the room. So it’s good as a supplement to the full transcript. It makes sense if, like, the proposal champion writes: here is the main takeaway. That’s fine. But it should be a supplement and not a replacement.

RPR: And EAO?

EAO: Is it a shared responsibility or a dedicated person in each discussion that summarizes this way?

SHN: Many of the other TC meetings in Ecma are much smaller than the group that we have here and the discussions may not be as lengthy. And there is one person that does write it.

MLS: There’s typically a secretary.

SHN: A secretary. They are there in the meeting writing and doing it together with the team. Imagine that the other standards groups are maybe 5 or 10 people.
Much easier to manage. + +EAO: Are the secretaries provided by Ecma – + +SHN: Either me or my colleague. + +EAO: Okay. + +DE: Totally agree with SFC: We need transcripts in addition to the summaries. We should not consider 50 or 100 words to be a hard limit. That was an example. In particular – it’s important to capture this thing about favoring the loudest people in the room, if there’s subtleties to capture, that’s fine to include it, even if it makes it exceed 100 words. + +RPR: So it sounds like SHN, you have your list of people being requested to handle that. And then I guess for the rest of this meeting, we can try to spend 2 or 3 minutes projecting up the notes at the end to write the summary live, we have done that in the last couple of meetings and it seemed to have worked quite well. All right. + +MF: Along the same line as SFC, I feel that trying to summarize the presentation content instead of the discussion is just going to lose nuance. Many people will come to rely on only the point that is in the summary, not the actual full discussion, and I just don’t see much value there. I think that capturing the discussion summary is fine. But I really would prefer not to try to like summarize presentation content. Also, I think that for many of the meta topics, like the editor update and all of the things at the beginning of the agenda, non-proposal and non-discussion items, I think that they pretty much stand on their own in that I don’t really see the value for this. Somebody said yesterday, why have a summary of a summary? The entire content is the summary. That’s how I am feeling about this. It’s not a thing that I won’t go along with, but not a thing I prefer. + +DE: Thanks for raising this, MF. You had raised it yesterday. It’s important to come to a shared conclusion on this as a committee, whether or not we agree to use summaries for these cases. I think these sorts of topics make sense to write a 2 or 3 or 4-bullet point summary on. 
They are summaries that describe things that happened. So I don’t understand the point you’re making.

MF: If I look back to just the summary written yesterday for, like, the 262 update, it was like “the committee approved” or “is happy about the editor update”. It’s something that didn’t need to be said and was applying a judgment to the update. And there’s no need for us to come to any judgment on it. Content was consumed, the end.

DE: Yes, the summary I wrote for your presentation was not good. It’s great if we could write a better one together.

MLS: In reply, I look in the transcriptions to find something, and sometimes we discuss things on one day and come back another day. Summaries make it very helpful for me to find where we talked about something. I find the summary and then go look in the transcription. That’s just me.

RPR: And it sounds like SHN is doing the detail of each point. That would be something that, if SHN and yourself could talk about, to see if a summary is appropriate. Thank you. All right. Good stuff. Thank you for raising this. This will help improve the quality of the notes.

### Summary and conclusion

The discussion summary and conclusions need to include the salient points, agreements reached and next steps.
The summary should not be as short as: “this was discussed, we agree to go to Stage 3.” It should highlight the most essential items so as not to lose track of what is ongoing.
The summary should indicate the issues discussed and important points and provide a clear indication of what to consider when anyone reviews the summary in future.

## Stage 3 update of Intl Locale Info API

Presenter: Frank Yung-Fong Tang (FYT)

- [slides](https://docs.google.com/presentation/d/1mJS1ZHnUr66nq9P4HZUrGzaujVS1nI_Rmpf7SoPIiso/)
- [proposal](https://github.com/tc39/proposal-intl-locale-info)

FYT: Thank you. Hi, everybody. Thank you for coming for today’s presentation. Sorry I cannot be there in Norway.
I am going to talk about an update on the Stage 3 proposal, the Intl Locale Info API. This is not for advancing to any other stage, but to give you an update about what the API currently is, and there is one normative PR that I think needs consensus to agree upon. The motivation for this proposal is that we try to expose locale information, such as week data, hour cycle, and so on and so forth, using the locale. The history of this proposal is that we advanced to Stage 1 in September 2020, to Stage 2 in January 2021, and to Stage 3 in April 2021, and since then I have come here and given some updates. + +FYT: A couple of things changed after Stage 3. One, in December 2021, we changed the order in the list. And also, I think more importantly, we figured out that we had to change the getters to functions, because a lot of that functionality is exposed as an API returning an object. So we decided to change to functions instead of getters after Stage 3, which is unfortunate, since it happened during the Stage 3 period. Not only that, we figured out that those functions should have a `get` prefix. One new issue that I want to bring to your attention is actually related to some effort in another part of Ecma 402: we tried to look across it to see, well, is there anything in the locale extension impacting the functionality? And we figured out one thing that should be considered: Intl.Locale currently does not take the `fw` keyword. The `fw` keyword and its type were defined in UTS #35 to override or provide a preference for the first day of the week. In all other preexisting Ecma APIs, it has no impact on any functionality. In this proposal, because we return week data, it should be able to impact the return value of the week data. Whenever you say with `fw` that the first day of the week is Tuesday, it should be Tuesday. Or Sunday. Or Monday. So that is the background. And the thing is defined in the UTS #35 specification. 
And actually, it was originally introduced into UTS #35 in revision 41 in 2015, which is about eight years ago. + +FYT: So in order to reflect that, we have this PR. What does it mean? It means that for Intl.Locale, we need to add `fw` to the RelevantExtensionKeys internal slot. Currently we have some other keys there, like `ca` for calendar, et cetera, but `fw` was not there. It needs to be added. The internals will have to have a first-day-of-week slot to remember that. In other Ecma APIs, whenever we add something like that, we also allow the value to be read from the options bag. That means we should also read the first day of week from the options bag, where the value could be “mon”, “tue”, “wed”, “thu”, “fri”, “sat”, or “sun”. Those are the choices. They are synchronized with the Unicode locale extension; it’s defined in UTS #35, and it has to be the same. By the way, that option will default to `undefined`: if people don’t define it, it will be undefined. We also have to add a `firstDayOfWeek` getter to return the value, if the locale extension was provided or the options bag had that value. After all these things are added, `getWeekInfo`’s first week – sorry, I think that should be first day, I typed it wrong here – the `getWeekInfo` `firstDay` value will be based on the first-day-of-week internal slot. So this is the required change in [PR #70](https://github.com/tc39/proposal-intl-locale-info/pull/70). Thank you to André from Mozilla for raising this issue. We have discussed this PR in TG2. + +FYT: There were also other things discussed related to this in TG2, but later on we decided, well, we shouldn’t make that change. The issue is that because we are now adding `fw` to this, there are currently three different ways in ECMA-262 and 402 to represent the idea of day of week. 
In the Date API, Monday is represented by 1 and Sunday is represented by 0. In Temporal, the number 1 still represents Monday, but Sunday is 7. And also, as I just mentioned, in Intl.Locale the `fw` string has to use the 3-letter words. So there are three different representations. So what we discussed in TG2 is, well, in `getWeekInfo`, should the `firstDay` and `weekend` information change from 1 to 7 to the 3-letter form? Through discussion, we decided we should keep it and not change it, so it will interoperate better with the Temporal day-of-week representation. That was discussed, and TG2 thinks that shouldn’t be the move; it’s not in the PR that has been discussed. There are several remaining issues, and I am not going to go into detail on them today, because they haven’t been fully discussed in TG2, but next time I will bring them here. One is how the `ca` calendar extension impacts `getWeekInfo`. Currently the language in the proposal is very vague – it just says it will be considered, right? But there are issues, I think, about the order: whether `ca` takes precedence over another extension called `rg`, et cetera. And I have to ask CLDR, and Peter and Mark from Google, about that; I am waiting for their reply. Instead of writing our own spec, maybe we can just follow their specification, or if there is something, maybe we need to repeat it or refer to it. But I think we need to have a conversation with them. The other issue that has been – well, will be – discussed is that currently `getTimeZones`, if there is no region tag, basically returns – I think we return nothing, but there is a suggestion from André from Mozilla to maximize the locale in that case. The other consideration is that the naming of `getTimeZones` could have some forward-compatibility issues. If anyone has interest in these three topics, please join the discussion issues and/or a TG2 call. 
And we will try to have at least some common understanding before bringing it to the plenary next time. + +FYT: The main focus is really requesting consensus from the committee on [PR #70](https://github.com/tc39/proposal-intl-locale-info/pull/70), which is adding `fw`, the first day of week – the scope is this list of five things here. Sorry, this should say first day; I have a typo here. So that’s my request. And also, I would like to say that I hope Firefox can commit to shipping this, and my goal is to target Stage 4 in November. But this is basically what we are asking. Any questions? + +RPR: There is one item in the queue from EAO. + +EAO: Hi, FYT. Sorry, I missed the last TG2 meeting where this was discussed. I am confused whether the proposal is to add “mon”/“tue” string identifiers for weekdays as a new thing to JavaScript outside of the `fw` subtag, or is this proposal continuing to support our current practice of using numbers for weekdays? + +FYT: So the answer is this . . . right. First of all, if you have a locale identifier, let’s say `en-u-fw`, it has to have the 3-letter code. + +EAO: That’s clear and fine. + +FYT: The second thing is that, because of that, we read an option – you can have a first-day-of-week option, and the value has to match that. So you can pass an object with first day of week as a string, one of the 7 values. Right? And because of that, whenever you read that information back, the getter for the Intl.Locale first day of week has to return whatever you passed in. The principle in ECMA-402 is that whatever you put in the options should be the same thing you get back from the getter. Right? So basically, those three things would use the three-letter codes. But what we are not going to change is `getWeekInfo`; its return value stays with 1 to 7. + +EAO: Can you clarify? 
+ +FYT: But the value of that could change: instead of 1, it could be 2, Tuesday. The first day would be 2 – it will not be “tue”, but it will also not be 1, which means Monday. If you have `fw-tue`, or if you have first day of week “tue”, then the `getWeekInfo` `firstDay` will be 2, not 1, because it overrides it. + +EAO: Could you clarify why we need the string “first day of week” option? + +FYT: Right. So the reason is that UTS #35 has that defined in the locale. Right? So in the locale specification, there’s a way to transfer that. There are regions whose culture may have some way to say the first day of the week is Friday; for example, in some Muslim or Jewish countries, maybe the city government or some business has as their first day of week neither Monday nor Sunday. + +EAO: I don’t think . . . can I clarify my question; because I understand that this definition does exist. And for parsing the `u-fw` subtag, I understand why we need to support the string identifiers. What I do not understand is why, when we parse this and represent it as an option value, we cannot continue to use the integer representation of these weekdays that we are already using in different places. Why do those need to be strings rather than the numbers 1 through 7? + +FYT: So you’re saying, in the options bag, why can first day of week not be 1 to 7? Is that your question? + +EAO: Yes. + +FYT: The reason is that the value should match the Unicode extension. That’s currently how we do it for all other options, unless we want to break the precedent. For every other thing we have in the locale – for example `hc` – instead of using `12` in the options bag, we use `'h12'`. Right? So they are also synchronized. The options bag is just yet another way of passing that information. + +EAO: I think what is happening here is that we are breaking precedent either one way or a different way. 
+ +FYT: What precedent are you talking about? + +EAO: Declaring days of the week. Currently we have existing numbers that we are using, and numbers that we include in Temporal, and here we are doing something different. + +SFC: So there are two things going on here. One is that there is the Unicode locale extension, which is defined by UTS #35 to have this set of strings, which are not novel to the ECMAScript standard. It’s the case that everywhere else in Intl.Locale where we consume the extension keywords, we reflect the strings in UTS #35. So by using the integers in this case, we are breaking precedent in the sense that the strings that come from Unicode are no longer the values in the getters and setters. + +SFC: However, this is also the first case where there is a Unicode extension keyword that already sort of has a definition in the rest of the ECMAScript 262 and 402 specifications. Right? So, for example, hour cycle sort of originated in the Unicode keywords, and then we sort of took those strings, and those strings are used where they need to be used in ECMAScript. In this case, Temporal and JavaScript Date have already defined the integer values. There’s one other case that could be considered something we could look at, and that’s time zones, where currently we don’t support the `tz` extension on Unicode locales. But if we were to do that, those would also have a different syntax, because the time zone identifiers in the extension are not the ones that are in, like, Temporal. We don’t currently do this. But if we were to do it, then that is also a situation where this discussion could be had. + +SFC: So we discussed this in TG2 at length a week or two ago, and we decided that this is the type of discussion we should escalate to TG1. 
We came to the initial recommendation that, like, the getters and setters that are specific to the Unicode locale extension should follow the Unicode locale extension, and those not specific to it should use the ECMAScript-standard 1 to 7 numbers. + +SFC: But you know, yeah. This is something that would be good to get feedback on from more people than the two of us over here in the corner. + +DE: Having not thought about this for very long, I like the idea of using numbers because of the correspondence with Temporal. And I think it will be important to, for example, accept the Unicode names as inputs in the options bag. + +DE: But if we’re in resolvedOptions(), I think – or in, you know, LocaleInfo, I think it makes sense for these to be numbers. + +FYT: DE, that’s not the question. The question is whether we use that when the constructor reads the options bag. + +DE: Oh, if we are talking about the constructor reading from the options bag, we want to just accept both. I thought we were also talking about getting it from LocaleInfo. If we are reading it, it’s clear we should accept both. + +FYT: So should we also support zero for Sunday? + +DE: Why not? I don’t know. There’s no ambiguity that would create. + +EAO: Are we considering locale subtags like `u-fw` as only an input that we parse things out of, or is this something we are considering needs to also be output? + +USA: I think both, because you can construct a locale using the constructor and then serialize it to pass it around. + +FYT: Right. + +USA: It would be a string. + +FYT: Right. So the issue is, what will the first day of week getter return? Right? That’s the output. + +USA: I think – I understand DE’s concern. Like, for something like LDML, which is all about locale data, it’s kind of unfortunate that we use these strings that have the sort of English names of the weekday . . . 
because something needs to interface with other systems that also implement this the same way. I think this discussion of, you know, whether we should replace the strings with numbers, should ideally happen in Unicode, because everything else would still stick to what is specified in Unicode. Also, things like whether Sunday or Monday should be 0 – like, there’s no clear answer to that, right? Because it might assume something different based on your context. + +RPR: We only have 6 minutes remaining on this topic, Frank. Carry on with the queue. It’s SFC. + +SFC: Yeah. I wanted to clarify – FYT, if you can go to – yeah, this is the right slide. So, you know, there is input and there is output. On output, the only place where these strings are returned is the `Intl.Locale.prototype.firstDayOfWeek` getter, which is the fourth bullet point on this slide. That’s the only place where the strings are proposed to be returned from this API, and that’s because they correspond to – this is a getter for exactly the Unicode extension keyword. This is not the week info. Bullet point 5 is the week info, and that will return the integer values. As proposed, that’s my understanding of what is being proposed here. So . . . I want to clarify that I don’t think anyone is proposing that we use the strings in the part of the API that we expect to interface with Temporal. Like, that’s definitely something that we value, that we discussed at the TG2 call. But the question is specifically extremely narrow, and that is: for the individual getter and setter on Intl.Locale for this subtag, do we use the numbers or the strings? So it’s a very narrow question. And FYT is proposing we keep 1 to 7 for all the other stuff; this is just about the new getter and setter. + +EAO: I am not going to block any of this. 
I am just really quite uncomfortable with a third definition of weekdays. We should be able to avoid this. It’s unfortunate if we can’t. + +FYT: The third definition is not avoidable, because the locale is already defined by UTS #35. It’s not something we can avoid. + +RPR: We had a +1 for EAO’s take. + +DE: The goal of our APIs in Intl should be to expose Unicode algorithms in a way that makes sense from a JavaScript perspective. If JavaScript defines weekdays as 1 through 7 in Temporal, that’s the thing we should use for output and accept as input, in addition to the Unicode values, which I agree it is necessary to interface with. + +RPR: All right. FYT, I have 2 minutes remaining. + +FYT: So I want to ask for consensus on that – or do you think we should wait for more discussion? Does anyone want to block this PR? + +DE: Let’s have more discussion. If you’re not going for Stage 4 or anything today, there are still things that need to be discussed; let’s continue this in TG2. + +SFC: I want to say that we already spent time discussing this in TG2, and we wanted to come to committee to get a recommendation. I don’t think we will get any further if we keep discussing this. It sounds, based on the people speaking – two people who weren’t in the TG2 call, DE and EAO, in the two corners – that we should use the numbers. Let’s use the numbers for all the output and accept the strings as input, and then move on. + +RPR: Thumbs up in the room from MLS. + +DE: I would be happy with that, but we don’t have a PR to agree consensus on. + +RPR: The question is to FYT: are you in principle happy with that conclusion? + +FYT: I do think I need to think about that. There may be consequences that could cause problems. I mean, this is the first time I have heard people suggest that. I don’t feel comfortable agreeing to that now. + +RPR: And USA has a slight concern. + +USA: Yeah. 
I just wanted to note, on the topic of using the numbers for output, that might harm interoperability, since I suppose the output would be used for that as well. So yeah, we should discuss this further, and also with people from Unicode. + +DE: So was this idea not raised in the TG2 call? + +FYT: It was not. + +DE: Okay. So I would like the next step to be further discussion, even though it would have been great if we could just conclude here. + +FYT: So my understanding is we cannot reach a conclusion on that one, and we will bring the consideration of using numbers as output – and also as input to the options bag – back to TG2. Is that okay? + +RPR: That sounds good. Okay. + +RPR: Okay. Thank you. + +FYT: Thank you. + +RPR: Could one of the note takers project the notes, so we have captured the summary? Frank, if you would like to project the notes . . . + +FYT: I don’t know how to do that. Let me see. + +### Summary and conclusion + +Consensus was not reached on the Intl Locale Info API proposal’s [PR#70](https://github.com/tc39/proposal-intl-locale-info/pull/70). This item needs to go back to TG2 for discussion of whether to use the numbers 1 to 7 as the input (in the options bag) and also as the return value for the first day of week. + +Summary: +The key argument is: +Temporal has already defined 1 to 7, a departure from Date, so we should not have a third way to represent weekdays. + +The proposal is currently at Stage 3, and has been for a while. Besides the current issue, there are three more issues (issue 30, issue 71, issue 73) to be discussed in the coming TG2 calls, hopefully reaching Stage 4 by November. Note that both Chrome and Safari have shipped this, but Mozilla has not. + +## Stage 3 update for Intl.DurationFormat + +Presenter: Ujjwal Sharma (USA) + +- [proposal](https://github.com/tc39/proposal-intl-duration-format/) +- [slides](https://cloud.igalia.com/s/gsytCTdNg9o2WNg) + +USA: All right. Hello, everyone, again. 
I will be giving a quick update – it won’t take a lot of your time – and present two normative PRs we have come up with. If any of you remember, I mentioned during the last meeting, while presenting normative PRs, that it was going to be the last round of normative PRs. If any of you are going to call me out on that, I can explain. + +### [PR#150](https://github.com/tc39/proposal-intl-duration-format/pull/150) Normative: Revert to previous behaviour by setting fallback value for fractionalDigits to *undefined* + +USA: The first one is actually reverting the PR that we got consensus on last time. So the problem is that, while trying to get the whole fractional digits mechanism to work – and discussing further in TG2 – we realized that the problem with going with a specific default, like we were, is that having fractionalDigits as a set value would lead to the resolved options resolving to something that would not be ideal. + +USA: So actually, it is a better solution to keep the value undefined. Now, it’s a new thing – like, we have never resolved something to undefined; we usually always have some default. But in this case, we have strong reasons to believe that keeping the resolved value of fractionalDigits undefined is good, because the undefined behavior means that unless you set a specific number of fractional digits, what you will see is exactly the duration as given. It would not clip the value in any way, which is the preferred default behavior, but was hard to achieve. So this is essentially reverting back to the old behavior, having realized exactly how to deal with it. So there’s an example. + +### [PR#158](https://github.com/tc39/proposal-intl-duration-format/pull/158) normative: align ToIntegerIfIntegral with Temporal + +USA: And the next one corresponds to a normative change that happened in Temporal during the last meeting. So there’s an abstract operation whose name is ToIntegerIfIntegral: it takes a number and, if it is an integral number, converts it to an integer. 
But it’s one of the abstract operations from Temporal that are used for processing durations, because, of course, durations need to be integral within the subunits, and this was changed during the last meeting in Temporal. We need to backport the change; we need a rubber stamp on that. These are the two changes. DurationFormat is looking in good shape; I believe we are very close to calling it a day. But I would like to hear your thoughts about these two changes. Let’s see if we can get consensus. + +### Queue + +SFC: Yeah. You said that DurationFormat is in good shape. I wanted to clarify what the status is of the 15 open issues that are still in the repository. There are five new ones opened by André Bargull about a month ago, and I wanted to know: will any result in normative changes, are they all editorial, or are they part of v2? It’s good to be more clear on what the status is of all the issues. + +USA: I can actually – yeah, open the tracker right now. ABL has been helpful with the feedback, and at least one of the two changes today was in direct response to issues that they raised. + +So, yeah. Most issues, however, are about improving certain things editorially, or, yeah, sort of big implications, like: if Temporal duration imposes certain limits, would that have an impact? So yes, there are a few issues that are open, but I do not believe that any of them would trigger something very existential. For example, the duration limits – I am not sure if we can act on this right now. But yeah, I do acknowledge that there are a couple of issues open, so I could actually go down and close all the issues that don’t really have an effect on the future definition of the proposal. + +SFC: I would be clear on – like, I think issues fit in 3 categories: there are normative changes to discuss, there are editorial changes to do in conjunction with Stage 4, and there are feature requests that are out of scope. 
It’s good to clearly delineate which issues are in which category, and if there are any that need discussion, we should escalate those in TG2. + +USA: There are labels, but I could triage so that it’s clearer what the case is. There are three issues that are tagged normative: two of them are being presented right now, and another is a duplicate. So I do believe at the moment that this is hopefully going to be the last round of normative changes. But yeah, I could clean up the rest of the issues to make sure that’s the case. + +SFC: It’s good to make sure all the issues are labeled and triaged, because, yeah, it appears like some of the issues are labeled and triaged and others are not. It’s good to make sure we have covered all the bases and there’s nothing we are missing. + +DE: Yeah. I like SFC’s idea. And maybe we should think about doing this sort of review of issues for Stage 3 proposals in general. USA, you mentioned something specific, which is that you couldn’t decide yet what to do about potential limits on durations that Temporal might have. In this meeting, we are going to hear Temporal’s proposed conclusions on that. Right? + +PFC: Yes. + +DE: So could you – do you have a proposal prepared for if the committee accepts the proposed duration limits? What would the implications be for DurationFormat? + +USA: I could put up a PR for either case, I suppose. But yeah, I had been meaning to do that after this meeting. + +DE: Great. That’s one further anticipated change. Once you do the triage of the issues, it’s good to get back to us with the anticipated changes. And yeah, this is exactly the kind of thing that would come out of what SFC is asking for. It would be a good thing to include in Stage 3 presentations. + +USA: Right. Thank you. + +PFC: My understanding is that DurationFormat doesn’t do any arithmetic on the quantities. I don’t expect there will be much change needed as a result of the limits on the Temporal quantities, if any at all. + +USA: I agree. 
It’s just a sort of specific issue that was opened to discuss the implications of whatever ends up being the final change. But certainly, it doesn’t have to be a big change to how DurationFormat works. + +DE: I am really confused. Is there a normative PR or not? + +USA: It’s unclear. Like, it’s only a discussion. It’s only an open issue to discuss the implications of that change. + +DE: Okay. You haven’t worked it out yet. Okay. That’s fine. Good to understand that. + +USA: It’s just to, like, track, if the Temporal duration limits change, what the conclusion is, and then to make sure that we discuss this within the context of DurationFormat, but it doesn’t need to have a . . . all right. That’s the queue, I suppose. Thanks for your feedback. + +USA: Does anybody object to the changes I proposed today? + +RPR: There are no objections visible in TCQ or in the room. So I think you have consensus. + +USA: Perfect. Thank you. + +RPR: All right. ACE notes that we are missing a link to the slides in the agenda. + +### Summary + +- Two normative changes ([PR#150](https://github.com/tc39/proposal-intl-duration-format/pull/150), [PR#158](https://github.com/tc39/proposal-intl-duration-format/pull/158)) achieved consensus + - #150: Revert previously approved normative change to how fractionalDigits works in DF. + - #158: Backport a change to the ToIntegerIfIntegral operation from Temporal. +- Still need to spend time reasoning about the implications of any limits imposed on Temporal durations +- Open issues need to be triaged and labeled + +### Conclusion + +The two normative changes that were presented achieved consensus, and we need to spend some more time talking about the implications of whatever conclusion is reached regarding the limits that end up being imposed on Temporal durations. 
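As additional context for the ToIntegerIfIntegral discussion above, its behavior can be approximated in plain JavaScript. This is a non-normative sketch, not spec text; the function name and error message are illustrative:

```javascript
// Non-normative sketch of the ToIntegerIfIntegral abstract operation
// that DurationFormat borrows from Temporal to validate duration fields.
function toIntegerIfIntegral(value) {
  const n = Number(value);
  // Non-integral numbers (including NaN and infinities) are rejected.
  if (!Number.isInteger(n)) {
    throw new RangeError(`${value} is not an integral number`);
  }
  // Normalize -0 to +0.
  return n + 0;
}
```

Duration fields like hours or minutes pass through this kind of check, which is why non-integral subunit values are rejected rather than silently truncated.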
+ +## Base64 + +Presenter: Kevin Gibbons (KG) + +- [proposal](https://github.com/tc39/proposal-arraybuffer-base64) +- [slides](https://docs.google.com/presentation/d/1ng6v9I6-jJSUPB-YNxjnHYFDzaL136lb7FpTKRxHhNM/) + +KG: So this is just an update on the Base64 proposal. The name of it still says ArrayBuffer, but it’s not ArrayBuffers anymore. The proposal is currently at Stage 2. At the last meeting, there was some feedback asking for it to support, in particular, writing to an existing ArrayBuffer. And we have been talking about how to do that on the issue, and we have a rough shape for how to do that and some open questions. So I want to get the committee’s feedback on all of that and hopefully confirm that we are happy with the design as it is here. And then I can come back for Stage 3 later, after writing up the spec text. Right now, there’s no spec text for this, because I didn’t want to do all of the work to specify something without first getting the committee’s approval for the design. + +KG: Recap of the proposal: first there are these one-shot methods. There’s a static method on Uint8Array that takes a Base64 string and gives you a Uint8Array, and there’s a prototype method that gives you a Base64-encoded string. And there are hex versions too. Nothing has changed there. + +KG: In addition, there will be these streaming methods, which require managing a little bit of extra state. So they’re a little annoying to use, but not very complicated. The idea is that you pass these `extra` parts into subsequent calls to the streaming methods, and then do this repeatedly until you are done. And this gives you chunks: in the case of `fromPartialBase64`, Uint8Array chunks from your strings, and in the other case it gives you Base64-encoded string chunks. Right now there are no hex versions – or, in the version of the proposal I presented at the previous meeting, there are no hex versions of these, because there was no obvious reason to have them. 
Base64 is much more complicated than hex, because with Base64 you have this thing where 3 bytes of input correspond to 4 encoded characters, and so you can have state where you have one byte and you need to see 2 more before you can emit the Base64. It doesn’t come up with hex, because the correspondence between input bytes and output characters is more direct. + +KG: There are no changes to the one-shot methods at this time, and 99% of users will end up using those. But we need to work out the more powerful versions. So concretely, if you are using fromPartialBase64 in the version of the proposal I presented previously, this is what it would actually look like in your code: iterate over your chunks of input, and call fromPartialBase64 repeatedly. You would have to pass this `more: true` argument on every call except the last, and then `more: false` on the last one to generate padding. + +KG: With the updated design, the way that we are managing state has changed, so that instead of keeping characters – a substring of the input – we instead keep this pair that keeps track of individual bits from the input. This is nicer because it allows you to output more bytes and keep less stuff in the state. + +KG: You can always completely consume all of the input, as long as your input is correctly padded or is the final chunk and there are no bits coming in subsequent chunks. So you don’t need a final call to say, oh, there’s no more input, because you will always have consumed all of the input that it is possible to consume. + +KG: And this does mean that you don’t get enforcement of correct padding automatically. But it’s trivial to check if you want to. I should mention most Base64 decoders don’t actually check that the padding is correct. So it’s up to the user to do that themselves, if they want to, with this design, which is the final one here. + +KG: And then there’s no change to `toPartialBase64`. 
You still need to pass this parameter with `toPartialBase64` no matter what design we go with, as long as we don’t require individual chunks to be correctly padded, which is a property to preserve. On the final chunk, if you had, for example, only two bytes of input, you need to pass `more: false` so that it knows to generate padding characters, rather than keeping extra state assuming that you will call it again. + +KG: The main change since last time is support for writing into an existing buffer. I am going to show you one thing that could look like. There will be some open questions later, so don’t get too hung up on the details, but the broad strokes are here. So basically – and I realize this is a lot of code to be throwing at you – the idea is that you have this partial-Base64-into method that takes an output and an input parameter, and rather than giving you the result as previously, among the things it gives you back are `read` and `written`, which are counts of characters of input read and bytes of output written. It is up to you to call substring and subarray on the input and output respectively, and to consume the output buffer once it’s full; and then, once you have consumed all the input, it’s up to you to get a subarray of the output array for the number of bytes that it ends up containing. + +KG: None of this is very difficult. This is like the complete code that you would use in real life, except not with `console.log()` – you would do whatever it is you’re doing with each chunk as it comes in. This is broadly the design I am asking for the committee’s feedback on: the read and written, and the general shape of it. One question that there’s debate about is offset parameters. 
Basically, I mentioned the user needed to substring and subarray themselves. A possible design would instead be to take an input offset and output offset that address the part of the buffer the user is working with, so as to not create these additional views [TypedArrays] of input and output, which may be important. I know we had feedback from Jarred Sumner, who is working with JavaScriptCore in Bun, saying that subarray is expensive because it involves making a copy of the backing buffer the first time you do it, because most buffers don’t have more than one view [so they can be allocated inline, whereas ArrayBuffers with multiple TypedArrays pointing to them need to be allocated out of line]. So there’s a slow path for making a second view. So by having explicit offset parameters, we allow not creating these views. On the other hand, it’s probably not that bad. And it makes the API more complicated. Conversely, it’s not like you are forced to use the input and output offset parameters, and the code that I have on the previous slide would work exactly the same even in a world in which they existed; you could simply not pass them. I would like to get the committee’s feedback on that.

KG: The pros and cons, as I mentioned. My preference is to not have these parameters. Keep the API simpler. Take the performance cost of creating subarrays. It’s probably not a big deal, and if it is we can expand the API later.

KG: Next question is whether to have a separate method. So as I presented earlier, it has this additional method with an additional parameter. But of course we could instead have an extra argument to the options bag that would specify where to write the data. I think the writing to an existing buffer case is pretty different. There’s also precedent on the web platform with separate `encode` versus `encodeInto` on TextEncoder. So I think it makes sense to have a separate method rather than just an additional options bag parameter. It does mean there’s more methods. 
So there’s also a question of do we want to have support for doing this for hex? As I mentioned, in the previous version of the proposal there was no reason to have streaming support for hex, but if you’re writing into an existing buffer, you do need streaming support for hex. Maybe it’s useful? The main downside is additional methods. I guess we should do this for hex, just for consistency’s sake. It’s not like it’s that much extra work to specify or implement. And it probably is useful in some cases. And then the last thing is just the names for these things. So `fromPartialBase64` is a gross name. Is `fromChunkedBase64` better? We need an actual name before Stage 3. The “chunked” names are what I am considering the best candidates, but maybe there’s some more reasonable thing here. So if we make all of these changes, this is what the proposal would be. We would have the simple versions, the one-shot versions, and then we would have fromChunked and toChunked methods, and fromChunkedInto methods. Of course there’s no `toChunkedInto` methods because we don’t have a string buffer type in JavaScript. I should note that I am not showing the full API here, because the Base64 methods take an additional alphabet parameter in the options bag specifying whether to use the URL-safe variant or not. Yeah. Let’s go to the queue. That was all I had.

RPR: Waldemar is on the queue.

WH: What is the type of `extra`? One of the slides shows `toPartialBase64` called on a chunk to produce an `extra` and then `toPartialBase64` called again on the `extra`. On the previous slide `extra` was a pair of things. Are the types of `extra` here different?

KG: Yes. The types of `extra` are different. In the `toPartialBase64` case, it’s a Uint8Array because it’s bytes of input. And so you can do this final call. This kind of has to fall out of `toPartialBase64` being a prototype method, in that you need some way of invoking the prototype method with the last piece of input. 
But that’s not relevant to `fromPartialBase64`, because it’s a static method, and also because you don’t have a last piece of input — you just read all of the bytes. Yes, the types are different. The user doesn’t care because they just are round-tripping this back and forth.

WH: Yeah. I just find this to be a bit of a point of confusion. I wonder if we should call these different names. But I am not sure. To understand how this is supposed to work, I have a related question: If you call `toPartialBase64` on the last chunk with `more` set to false and provide an `extra`, then what happens?

KG: What are you calling it on?

WH: Let’s say you have 5 chunks and on the last chunk, you pass `more: false`.

KG: So that works fine. But if you don’t know how many chunks you have up front –

WH: Yeah. It’s more of a clarifying question. If you say `more: false`, then what will you get output as the `extra`?

KG: In general, if you say `more: false`, extra will be either undefined or an empty typed array. I haven’t actually . . . I forgot which of those two it was. I think an empty typed array.

WH: In this thing, the `extra` is just unprocessed data — it has not been emitted because you’re not at an exact multiple of 3?

KG: Yeah. Exactly.

ACE: So yeah. I love this proposal. The offset parameters sound like a good idea. I do like the idea of keeping proposals simple and breaking them up where it makes sense. Subjectively, in my opinion, this doesn’t seem like a large enough change that it would move this out of a simple, minimal API into something much bigger. 
I still consider this in keeping with a simple API, and especially as Jarred said, the workaround to this is potentially expensive in at least one popular engine, so it would seem unfortunate that there could be a period of time where people would have to take a slower path. Even if it’s a small follow-on, and it sounds like it would happen potentially quickly after this, meaning that period of time would be small, that makes me less convinced by a follow-on instead of doing it as part of the original proposal. I wouldn’t block this or anything, but that’s my gut feeling when hearing the presentation.

MF: I wanted to confirm that it would be safe to do the offsets as a follow-on?

KG: Certainly it would be, yes.

MF: My preference is that, without performance data, I would prefer to leave them out and add them later.

KG: A question for the committee: if I leave them out, and implementations go to implement and say they are concerned about this implementation because of the expected pattern, could we come back at Stage 3 and add them, taking implementation feedback? Would anyone be upset with that change happening at Stage 3 — if it’s a result of implementation feedback, I should say?

ACE: My gut feeling is it’s nice when things change in Stage 2 rather than Stage 3 as a general theme. So . . . yeah. So that would be my preference. If we think we have clear semantics, I don’t see why we would risk a change during Stage 3, if we can very clearly avoid it. Gut instincts here. I haven’t thought deeply about it.

RPR: DE has a reply.

DE: Yeah. I generally agree with what ACE is saying. It would be okay for this proposal to go ahead as is, but the goal of making this “bring your own buffer” version was to avoid the copy in the first place. And I am pretty confident it’s possible to construct a benchmark, but whether it’s a performance concern in applications is difficult to really assess. 
But what we could assess is how many engines implement the pattern of using inline ArrayBuffers. I believe V8 does this only for ArrayBuffers that are pretty small, in which case it’s cheap to copy. So I think we should do a quick survey of the cases in which this is done in JSC, and how this works out in some other engines, to understand whether this really breaks zero copy.

KG: I guess I don’t really see the value in looking at other engines here. Like, if it is going to be expensive in JavaScriptCore, that seems sufficient, and I don’t care whether or not it’s expensive in SpiderMonkey at that point.

DE: I think it would be good to check whether JSC has a limit on the length of inline ArrayBuffers as V8 does. Maybe MLS knows this offhand.

MLS: I would have to look into it.

DE: But I don’t think we can take every claim that this has performance overhead completely literally.

KG: Okay. I guess I would request feedback on this from the JavaScriptCore team, then. I know that’s hard to ask. But I am inclined to do the thing that I said earlier, which is to not include these parameters initially. If implementations come back to me at the request for Stage 3, or during the implementation of Stage 3, saying this is in fact going to be expensive, not just speculatively, then we can add them back.

DE: I am happy with that course of action. I encourage everyone to do that before Stage 3. Not just JSC. If it’s noticed after, we can figure it out also.

CM: So given that the streaming API seems to have at least a moderate amount of hair and there seems to be more discussion involved in getting it to converge, whereas the non-streaming API has immediate value and seems straightforward and noncontroversial, I am wondering if the two could be broken into separate proposals, so the non-streaming version could go forward on a fast track while all the details that need to settle for the streaming API work their way through the standards process. 
I don’t know how many iterations of plenary it takes to get this to converge. The non-streaming part is good to go right now.

KG: So the streaming API only exists in the first place because it was an explicit request from the committee. The first version didn’t have it because I don’t care about it.

CM: It seems to us (Agoric) that 95% of the actual use cases are for the non-streaming API – I don’t know that for sure, but that’s our intuition.

KG: Yeah. This was added at the explicit request of the committee – Peter from Moddable I think it was, maybe other people as well, wanted me to do the streaming API as part of the proposal. So seeing as it was added to this proposal at the request of the committee in the first place, I am not inclined to split it up. I might personally like to, but we are already this way because of a committee request, so . .

KG: I am hopeful — it sounds like the decisions I presented are not super controversial, which means the design is finished and I just need to write the specification text.

EAO: Briefly, we talked about this ahead of time and kind of came to the same conclusion as CM just mentioned. We find the motivation for the streaming part of the proposal to be weak, and separating these, for instance, into two proposals would help ensure that the motivation for the streaming part would also be better presented. And it would also allow us to possibly evaluate first implementing the non-streaming parts of the API and seeing if that actually does answer the issues that this is trying to solve in the first place sufficiently.

WH: I am not in favor of streaming the process of adding a feature to the language. This is all one feature, so we should figure out what its API is. I don’t care whether we do streaming or not, but if we do streaming, we should not split it into a separate proposal. 

DLM: It sounds like the champion is skeptical about the value of the streaming version, and we were a little bit skeptical in our internal discussion as well. So I am wondering if, perhaps, the people who are in favor of a streaming version would want to present it as a separate follow-up proposal rather than in this proposal. Thanks.

KG: Is PHE in the room, by any chance?

(plenary): No.

KG: All right. Well, I don’t know what to do in this case. It was a request from him. I guess I will reach out to him outside of the meeting and ask if he’s okay with splitting it out. I do agree with WH — they need to be designed coherently, and consistently with each other. With that said, it sounds like we have a design — modulo the offsets question — and we are happy with the streaming version, such that it could immediately advance to Stage 3 as a separate proposal with no further changes, if people were in favor of doing so in terms of motivation. So while the design does need to be worked out together, we could say the design has been worked out and we are choosing to only advance part of it at this time, and later bring in the second part if the motivation is there. I would be okay with that split. But WH, does that sound reasonable, given that the design is effectively done?

WH: I am a bit dubious because there is a risk that, whenever we do the homework on the streaming API, we will find that we regret how we did the non-streaming API.

KG: That is understandable.

WH: I am not in favor of doing proposals via streaming.

KG: That’s understandable. Okay. Well, I will try to continue to do the work on the streaming API, then. And, perhaps, when I next present it, I can present it in two forms and say: here is either the more pared-down or the more complete version. And we are agreeing that the more complete version is what we will do, if we do it. So we consider the homework to have been done at that point. 
And maybe that can satisfy everyone.

WH: Yeah. I am OK with not having a streaming API at all. All it would take is, if you’re encoding or decoding, make sure you supply multiples of 3 bytes or 4 characters respectively. Apparently the people who requested streaming are not in the room, which makes it hard to decide what to do.

KG: Yeah.

LCA: Yeah. I agree with CM and Mozilla’s point about the usefulness of the streaming API. I think streaming can be useful, but in many cases users cannot use it directly; they either have an iterator or maybe, on the web, a ReadableStream that they want to decode or encode, and using this API will not make their life any easier. Implementing a streaming API on top of the non-streaming one by just doing the “only pass multiples of 3” thing — this could be implemented as a TransformStream on the web. It could be pretty easily implemented with a generator too. Yeah.

KG: I don’t feel like I care all that much about the streaming API. But I feel the need to defend it here. It is useful for streaming cases on the web. Like I have this example here, where this is how you use the streaming API with a TransformStream, and it is much simpler than trying to do it yourself. It’s definitely doable without the streaming API. You would just need to get pretty into the weeds of the details of Base64: keeping track of the 3 bytes of input to 4 characters of output correspondence, and making your views. You need to do concatenation, which is annoying with ArrayBuffers, or with Uint8Arrays. If the first chunk is 8 bytes, then you will have 2 bytes left over and have to carry those to the next call, which is just an annoying thing to do. You have to make a copy of your entire subsequent chunk so you can prepend those two bytes. And that is exactly what the streaming API is for. And it plays nicely with transform streams. I do think it’s useful. If we think that streaming is useful ever, anyway. 

LCA: For most users, the complexity of implementing this transform stream — especially making sure the implementation is fleshed out correctly — is the same level of complexity as going through and implementing the manual version of this. Ultimately, people are going to copy this from Stack Overflow or import it through an npm library. I think this is going to be very rare.

KG: Even if you are just copying it from Stack Overflow or whatever, you run into the problem where you copy every chunk. There’s no way to carry the mod-3 additional bytes to subsequent chunks. Like . . . there’s overhead to do it, not just complexity.

LCA: Sure. Yeah. I think this could – like, there is – let me get back to you on this, on Matrix.

DE: We have clear steps to follow up with Moddable, and perhaps Chrome, to understand what the use cases are. Moddable must not be thinking about HTML streams.

KG: Obviously, Moddable tends to work in constrained environments, environments where you don’t want to have all the data in memory. Certainly the streaming API is useful in any case where you don’t want to create a full copy of all the data in memory.

TKP: Yeah. As people were asking about the streaming API: just last week, I had to implement something like this transform stream to compress large JSON data into a URL-appendable string (query string). I had to put them into messaging services with length restrictions. And implementing this transform stream myself was kind of annoying, and actually I had to learn more about Base64 encoding in detail than I ever wanted to. So yeah, I think the streaming API for Base64 is kind of useful and not a waste of time. And having an offset is also quite nice, because you actually know where you are, so you don’t need to have a shifting window on top of your data. So if we have the streaming API, I would prefer the offsets. 

WH: Having listened to the conversation we just had, I’ve been swayed and am now on the side that streaming is useful to avoid extra copying.

KG: Okay. Next steps. That’s good feedback from both of you. Yeah. I agree that the main benefits of the streaming API are not having to learn all of the details of Base64, and avoiding copies. I would like to make that clearer in the next presentation, regardless of which direction we end up going. And I see that was everything in the queue.

LCA: Yeah. I want to talk about this copy thing one more time. I think there is a path where you can implement this on top of the simple API, the non-streaming API, that avoids doing a copy of the entire buffer every time — the most data that you have to copy is 3 bytes, in a fixed pre-allocated array where you put these extra bytes, and if you get a new chunk you add bytes from the new chunk into the extra. I don’t think the implementation of this is super complicated. I can post one on Matrix in a couple of minutes, if anyone is interested.

KG: The thing that is complicated about that is that you could take this extra parameter that I have here and move it into the one-shot API, but getting the value for that parameter is gross . . . it’s certainly doable. But it’s not trivial. Especially if you don’t already have a bunch of familiarity with the precise details of Base64.

LCA: Maybe we can continue discussing this on an issue.

KG: I should say there is an existing GitHub issue with 100 comments on this exact topic. So maybe just post on the existing one.

LCA: Sure. Okay.

RPR: And EAO with the final topic.

EAO: What TKP presented as a user story sounds like a really decent motivation for having a streaming API. And getting more representations of something like this in the proposal would make it much easier for us to support it as a whole — if not splitting it in two.

KG: Okay. 
Thanks. I will do that.

### Summary

A method for writing to an existing buffer as part of the streaming API was presented, which had a couple of open questions, including whether to have offset parameters. The committee was split on having offset parameters, but expressed no disagreement with the champion’s positions on the other open questions, namely:

- the decision to have separate methods for writing to an existing buffer
- the naming question, with the proposal to use `toChunked`

The committee is not universally convinced to do streaming as part of this proposal.
There is no agreement on the use of the offset parameters.

### Conclusions

No universal agreement on whether the streaming API should be part of the proposal.

Have more discussions and then present a proposal with spec text for both the streaming and one-shot versions, so the committee can make a decision on the streaming versions.
Anticipate Stage 3 for one-shot or streaming version.
Await implementation feedback on whether the offset parameters are necessary for performance.

## Explicit Resource Management (continuation)

Presenter: Ron Buckton (RBN)

- [proposal](https://github.com/tc39/proposal-explicit-resource-management/)
- [slides](https://1drv.ms/p/s!AjgWTO11Fk-Tko8bDqLrnYiAJRBw-Q?e=qImaQa)

RBN: Yesterday, there was an open question about a concern brought up by someone: whether there is a way to amend this proposal to enforce `using` in certain cases, to avoid the possibility of leaking disposables. I have made a couple of minor amendments to the slides based on prior commentary, such as not saying that cleanup is a must or a should, but a may. I also amended the slide to indicate another option — which might be the preferred option if we make a recommendation — which would be to report a warning or notify the developer of the leak rather than implicitly close. And as part of the discussion, I mentioned, I believe yesterday, that Python context managers are a basis for some of that discussion. 
There’s an additional slide in a different color to show it was not part of the original slides I had yesterday. It just describes what that approach looks like and where we would implement it, in case we want to talk about it. We can go to the queue, which I believe the chairs have restored from yesterday.

LCA: Yeah. I wanted to comment on your note from yesterday about GC-based cleanup. I think we should not recommend this. This is in most cases a footgun. We’ve seen this over the years in Deno. For a concrete example, the fetch API: the response body of a fetch call is cleaned up on GC, unless the user consumes the body. And we see very frequently in real applications that users will perform a fetch, look at just the headers, and not read the response body. And then they start consuming a very large amount of memory, because they are not closing this response body and are letting GC deal with it, and it’s slow. And people will run out of connections using HTTP/1, because the response body keeps the handle open until the entire body has been read. Yeah. It’s generally a bad time. GC-based cleanup, I don’t think we should be recommending. Reporting a warning seems more reasonable.

PFC: Okay. Yeah. I agree with that. I would prefer not to recommend GC cleanup. I mentioned some things yesterday about embedding JS into contexts where, during the final GC when the embedding is torn down, it’s problematic to have dispose callbacks occurring. I won’t go into the detail again, but it’s in the notes from yesterday.

RBN: I do want to point out that there is an issue and PR in the repository to introduce a recommendation, which already has some feedback that it’s not a recommendation that we want to pursue in that issue. 
The reason it was brought up was one of the initial concerns in [issue 159](https://github.com/tc39/proposal-explicit-resource-management/issues/159): the problem that things like file descriptors in JS, which are represented as a number value, don’t show up in a heap snapshot or dump as being related to a file handle. So they are hard to find when you are trying to diagnose issues with memory. That’s one of the reasons having some way to hold on to that handle and know it’s actually open was even being considered as part of the recommendation.

DE: As an error-handling backstop, I think I agree that using a FinalizationRegistry to clean up stranded resources is probably good, even as we also want to discourage anyone from depending on it. So, times when the FinalizationRegistry triggers should probably also print warning messages. Generally, I am happy that you brought up this concept of giving recommendations. I think we as a committee have more room to add recommendations. These can even be normative. Many other specifications include “should” text, even for developers, not just implementations. So this could be in our specification document. We can also communicate these in MDN documentation. For this particular case of the FinalizationRegistry API, we put some very prominent warnings to scare people away from using the API at all. I don’t think we gave a detailed recommendation of best practice for using it. So I think we should make more of a habit of doing this, even if I disagree with the polarity of the recommendation here.

RBN: And I will state this proposal does have a slight recommendation in the text already, not unlike some of the text we have with recommendations on how to properly implement an iterator, if you were hand-rolling an iterator. In those cases it will say the implementer — the person implementing this protocol — should do this, but it’s not enforced by the runtime. 
So things like saying dispose should return nothing: you can return whatever you want, but we should essentially not care what you return and ignore it, which is one of the things we discussed yesterday. So they are good recommendations for best practices for how to do these things, but we can’t actually enforce them, so we don’t.

DE: Good work. Let’s keep doing this.

SFC: Yeah. I will just say that I think the FinalizationRegistry solution is fine because – I think there are definitely use cases that I have noted previously where the disposable interface can be available for cases where the developer knows that the object will be short-lived. There could be cases where a disposable tends to be longer-lived, in which case it’s convenient to use the FinalizationRegistry to handle its lifetime. And it’s fairly easy to write a wrapper that, for example, registers the object before returning it to the user. So I think it’s a perfectly valid design to suggest that authors of libraries that make use of the disposable interface also use the FinalizationRegistry, if they feel it’s a valid use case for the primitive that they are releasing.

RBN: This isn’t necessarily something we have to document in the specification. It could be just Q&A on StackOverflow: if I wanted to do this, how do I do it correctly?

PFC: I think when I put this on the queue yesterday, that triggered a bit of discussion in the chat, whether we could enforce through the language that you have to assign a disposable to a `using` variable. And maybe that’s not feasible, to do it through the language instead of through tooling. But what I would like to avoid is that we get into the situation where the path of least resistance is to leak resources. It’s a well-known problem with languages like C. In JS it’s also possible to leak resources, for example by having a closure hold on to them, but it’s not the obvious thing to do. I wonder if you have thoughts on doing this in the language. 
Is it worth having a check in the `[Symbol.dispose]` method for whether you assigned the disposable to a non-`using` variable?

RBN: I would actually be opposed to that for a number of reasons. One, the performance cost of such a check on its own. The other is that there are many, many use cases for disposables that live beyond a `using` block. That’s why disposable stacks exist. In most cases in the ecosystem that use dispose today, or a like concept — look at VS Code, because they make heavy use of disposables — they build up entire object graphs of disposable things through composition that have a long lifetime, with the idea that when I deactivate an extension, all the resources associated with that extension will be disposed. So demanding that a dispose be associated with `using` in the language would actually be counterintuitive to what dispose is supposed to do. `using` is supposed to allow you to lexically scope the lifetime of a resource to a block. But the point of dispose is that it indicates that this object will obey a lifetime that is established by — associated with — something else. So a disposable stack has a long lifetime and aggregates multiple resources that have short lifetimes. They are two parts that are tied together, but again, `using` does not demand dispose, and dispose does not demand `using`.

PFC: I understand what you are saying. I don’t want to do that. Of course, if you didn’t have disposables and you had an open and a close method on some object, and you would leak a resource if you didn’t call close, there’s a possibility of forgetting to call close. I think that `using` makes it slightly easier to forget that because it’s possible to think, "this object is a disposable, so it disposes automatically." It’s not as serious as I was thinking yesterday. But I was uncomfortable that the path of least resistance is the wrong one.

RBN: Languages can be helpful to enforce this. 
That goes to the second slide here, which was the discussion of something along the lines of Python’s context managers. This came up in issue 159, which is linked at the bottom of this slide. The idea being that you would have a mechanism to indicate that the `using` declaration would call this method to get the actual resource, such that if you are writing a whole new API and want to enforce that somebody uses `using`, you would use this `Symbol.enter` to give them what they interact with. It doesn’t force the user to use `using`, but it leads them towards it. If they didn’t want to, they would have to call the `Symbol.enter` method themselves, which is the suggestion from 159. But the problem with that is, if it was something that was mandated as part of the `using` protocol, existing APIs that already have a close method would have to essentially implement an enter method that just returns `this`; that’s what Python does. But in the same vein, the abstract base class that you use for context managers, the abstract context manager, defines that as the default implementation. Therefore, we could feasibly say that by default an object just has a dispose, and if you want to opt into stricter semantics for `using`, then you would add `Symbol.enter` to your class, and that allows the caller to either use `using` to get the value, or opt out of the semantics by calling enter themselves.

PFC: Okay. I heard some negative comments about this yesterday, but I am very positive about this.

RBN: There is a second part to this too, which is if we wanted to go the full Python context manager route — which is something I wanted to avoid — the extra slide I added here. This is a full context manager in Python: when you enter the context you get the resource, but the way you clean up the resource is by calling the exit method on the context itself, not on the resource. 
Exit has extra power that adds too much complexity for the lightweight use case, which is that the exit method in Python can receive the exception, if one existed, that caused the dispose to occur, and it can either replace that exception or swallow the exception by returning true, allowing a context manager to act like a try/catch. That’s a lot of complexity, and room for developers to do the wrong thing, for the simpler case — I want to dispose something, clean up — which is why I stayed away from the Python enter/exit approach. You could have lightweight disposables that have `Symbol.dispose`, and heavyweight ones that have a `Symbol.exit`, and `using` would use exit if it exists; if exit doesn’t exist, it would use dispose with the semantics of this exit example here . . . and the runtime, or the code in here, essentially emulates most of the work that DisposeResources does to aggregate exceptions as it processes multiple disposable objects when exiting a block.

LCA: The point I wanted to bring up is also on this enforceability. In Deno, our test runner checks that tests do not leak a file resource or any other native resource. And I think tools like this — test sanitizers, and tools that allow you to track the number of open resources at a time within a given process — can help a lot here. Especially if those things are integrated with DevTools: showing you the list of all objects that have a dispose symbol that has not been called would be useful to find runaway resources. So maybe part of the solution is dev tooling and not the language itself.

MSF: Yeah. I support looking into enter/asyncEnter as part of the proposal. Similar to — not exactly, but similar to — what Python has.

RBN: I wanted to speak to that, and I am over time. 
I don’t really have an appetite for implementing full context managers, and if the committee is in agreement that full context managers might be a little bit overpowered or overcomplicated for ECMAScript, then I don’t know that we need asyncEnter, which is part of the context manager design where you have an asyncEnter and asyncExit that need to be paired and are always asynchronous. If the simpler approach is that you just have an enter method that says "you should use `using`, or call into this to do anything else", then I don’t think async enter is necessary. And the reason I say that is that it has an impact on some of the API design for AsyncDisposableStack, an additional await in an `await using` statement, that I think would complicate things. If we were going to consider doing this, it is a matter of how far down the rabbit hole we are going with this capability. If it is something as simple as just an enter symbol that is optional, then that is something potentially to pursue as a follow-on, because it wouldn’t affect any existing semantics within the proposal as it exists today. But something bigger might mandate possibly a demotion to Stage 2, to consider the implications of something like async enter on the async side of the proposal. 
+
+MSF: Okay. But it sounds like we can discuss enter and async enter. 
+
+RPR: Thank you. So we are through all the queue now. 
+
+### Summary 
+
+No specific direction was concluded. There may be interest in investigating async enter, or even the full context manager protocol. It is not clear what impact this has on the stage of the proposal based on the feedback; no recommendation can be provided at this time. 
+
+### Conclusion 
+
+The proposal is currently at Stage 3. There are a number of options to consider, and the final direction to take is still not clear. 
+
+Two issues to consider: 
+
+- Issue 1. 
The recommendation to use the garbage collector to clean up resources, to which the committee is opposed. 
+- Issue 2. Use enter, on which there were mixed opinions. 
+
+This will be further investigated, and a formal direction will be brought to the next meeting. 
+
+## Temporal Stage 3 update and normative PRs 
+
+Presenter: Philip Chimento (PFC) 
+
+- [proposal](https://github.com/tc39/proposal-temporal) 
+- [slides](http://ptomato.name/talks/tc39-2023-07/) 
+
+PFC: (Slide 1) Welcome back from lunch, whatever timezone you’re in. This is the Temporal presentation. My name is Philip Chimento. I work for Igalia, in partnership with Bloomberg. (Slide 2) I am here to give a progress update and present some normative requests. We discussed in the previous two plenaries, in March and May, an integer arithmetic change which I am happy to finally be able to present the concrete resolution for. I'll also present two changes that arose from discussions with implementers. I am also happy to report that this means all the discussions we are aware of are settled. Bugs can always be found, but these are the last changes on our radar. And then, of course, the implementation is continuing. We know of implementations in JavaScriptCore, LibJS, SpiderMonkey and V8 in various stages of completion. 
+
+PFC: (Slide 3) First, I wanted to give an update on the progress at IETF. The document is currently in last-call in the IETF. When I wrote this slide, there had been suggestions from the area directors and one complaint, which seems like it was able to be handled with a bit of discussion. So all in all this still seems like it’s on a path towards being published soon. But when that will be, I cannot exactly say; the IETF process has been surprising to me at every twist and turn. I think, depending on how comfortable implementers are at this point, we should go ahead and treat it as settled. 
That is my recommendation, but it depends on the level of comfort that implementations have with the current state of the IETF document. 
+
+PFC: (Slide 4) I will briefly present the normative changes that we would like to make, and then answer questions. If there are pressing questions on each one individually, we can take them after the slides, but otherwise let’s leave the questions for the end. 
+
+PFC: (Slide 5) The long-promised integer math. To recap, the issue with integer math in durations was that we were doing arithmetic in the domain of mathematical values, and when you actually go to implement this, you would sometimes have needed unbounded integer arithmetic in order to implement it properly as described in the specification. This is something that for a long time we figured was okay, but eventually we became convinced that it is actually not good, so we wanted to eliminate it from the proposal, and the best way to do that is to place upper bounds on the individual values in a `Temporal.Duration`, other than the implicit bound of `Number.MAX_VALUE`. The solution that we have come up with, which I am presenting now, is that we don’t change the storage of `Temporal.Duration`. In memory, it’s still one float64-representable integer for each unit. That means, for example, you can still have two distinct durations of 90 minutes and 1 hour, 30 minutes, unless you explicitly convert one to the other. But when we perform calculations with time units, we convert them into a normalized form in nanoseconds, which must fit into a 96-bit integer: the seconds part of the normalized quantity must be less than or equal to `Number.MAX_SAFE_INTEGER`, and the nanoseconds part is not allowed to overflow 1 second. 
+
+PFC: So if you were implementing this, you could do it as one 96-bit integer if your compiler has one. Standard C++ doesn’t have a 96-bit type, so you would have to combine a 64-bit and a 32-bit integer to implement it. 
You could store the seconds part in a 64-bit integer and the nanoseconds part in a 32-bit integer. If you were writing an implementation in JavaScript, you could store both the seconds part and the nanoseconds part as float64s. There are various options, depending on the environment, for how you would implement arithmetic on this quantity. Then at the end of the calculation, we convert back to the storage form, which is one float64-representable integer for each unit. 
+
+PFC: By time units, I mean days, hours, minutes, seconds, milliseconds, microseconds and nanoseconds. The footnote is "why are days considered a time unit?" That’s because we allow calculations with durations without a calendar, and we assume that days are 24 hours in that case. There are actually two ways to treat days: if you make a calculation relative to a ZonedDateTime, a day may not be 24 hours, and in that case it’s handled differently. With 24-hour days, days are time units. Weeks, months, and years you cannot convert without looking at a calendar. (Slide 6) For the calendar units of years, months and weeks, we place an upper bound where the absolute value must be less than the maximum 32-bit unsigned integer. This is in order to be able to use a different algorithm for calculations with calendar units that doesn’t involve calculations in a loop, which was a concern raised by, I think, the SpiderMonkey team during review of the Temporal patches. 
+
+PFC: Since these are signed quantities, why not max int32? Because the sign is common to all the units: you can store it in a separate bit and use an unsigned 32-bit integer for the quantity. So if you like, you can also store these units as unsigned 32-bit integers instead of float64s and save a couple of bytes. 
+
+PFC: This range is well above what you need in order to perform calculations relative to a date, because 2**32 - 1 years is long past the representable range of dates that we allow, which is about a quarter of a million years in either direction from 1970. 
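As an illustration of the normalized form described above (a sketch under assumptions, not the specification's algorithm), the time units can be totaled in nanoseconds with BigInt and then split into the seconds-and-nanoseconds pair an implementation might store in 64 + 32 bits. The `normalizeTime` helper and its bounds check are hypothetical names for illustration only.

```javascript
// Illustrative sketch of the normalized duration form: a seconds part
// bounded by Number.MAX_SAFE_INTEGER and a nanoseconds part that must
// not overflow one second.
const NS_PER_SECOND = 1_000_000_000n;
const MAX_SECONDS = BigInt(Number.MAX_SAFE_INTEGER);

// Total the time units of a duration in nanoseconds, then split into
// the (seconds, nanoseconds) pair an implementation could store as a
// 64-bit + 32-bit integer.
function normalizeTime({ days = 0, hours = 0, minutes = 0, seconds = 0,
                         milliseconds = 0, microseconds = 0, nanoseconds = 0 }) {
  const totalNs =
    BigInt(days) * 86_400n * NS_PER_SECOND +
    BigInt(hours) * 3_600n * NS_PER_SECOND +
    BigInt(minutes) * 60n * NS_PER_SECOND +
    BigInt(seconds) * NS_PER_SECOND +
    BigInt(milliseconds) * 1_000_000n +
    BigInt(microseconds) * 1_000n +
    BigInt(nanoseconds);
  const secPart = totalNs / NS_PER_SECOND; // BigInt division truncates toward zero
  const nsPart = totalNs % NS_PER_SECOND;
  if (secPart > MAX_SECONDS || secPart < -MAX_SECONDS) {
    throw new RangeError("duration time units out of range");
  }
  return { seconds: secPart, nanoseconds: nsPart };
}

// 90 minutes and 1 hour 30 minutes stay distinct in storage, but
// normalize to the same pair:
console.log(normalizeTime({ minutes: 90 }));          // { seconds: 5400n, nanoseconds: 0n }
console.log(normalizeTime({ hours: 1, minutes: 30 })); // { seconds: 5400n, nanoseconds: 0n }
```

This also shows why `Number.MAX_SAFE_INTEGER` seconds plus extra milliseconds is rejected: the totaled seconds part would exceed the bound.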
+ 
+PFC: (Slide 7) In March, we had some discussion about whether to express this in the specification as just regular operations on mathematical values with a note that they fit in 96 bits, or to also explicitly spell out the steps you have to do if you are going to implement this in, say, 64 + 32 bits. In the normative PR that I'm presenting today, we have chosen the former, because there are several ways to implement it. There was some disagreement last time we talked about which was better. So if the committee or the editors decide to explicitly spell out the steps for using 64 + 32 bits, that's an editorial change; we don’t have to use plenary time for it if we prefer to express it differently in the spec. 
+
+PFC: All in all, this had a fairly small effect on the actual code that would be written in the wild using Temporal. It has only a few observable effects, mostly around durations where one component is too large and no longer allowed, but it doesn’t change the behavior of the APIs much at all. I have a code sample here: the top half shows durations that are no longer allowed. You can’t have `Number.MAX_VALUE` seconds anymore; you can have `Number.MAX_SAFE_INTEGER` seconds, but no milliseconds on top of that. The bottom half shows examples of the kinds of durations that are the maximum now accepted. For example, you can have max u32 years, months and weeks, and a number of days, hours, seconds, milliseconds, and nanoseconds that works out to MAX_SAFE_INTEGER.999999999 seconds. And the bottom line is basically the same duration but in different units and negative, so `-Number.MAX_SAFE_INTEGER` seconds and `-999999999` nanoseconds. 
+
+PFC: (Slide 8) The second change is to limit the precision of offset TimeZones. To recap the background here, TimeZones can be either named or UTC-offset. Named time zones are taken from the time zone database; we will hear about that in JGT's Time Zone Canonicalization presentation after this. 
But this is not about those time zones. This is about time zones that are constructed from a UTC offset that is fixed, does not change, and has no daylight saving rules. An example of one of these time zones is "+01:00", a fixed offset of UTC+1. Previously, the proposal allowed you to create these time zones with UTC offsets up to nanoseconds precision, for example this string here with `.00000001`. We overlooked that the IETF string format does not allow this precision of offset strings. We figured it’s better to limit the precision now and relax the constraint later, than to go back and try to make another change to the IETF document. This does not affect named time zones: there are time zones in the IANA database that have an offset that is not whole minutes, and those continue to work as they did. It’s offset time zones, like this little code sample here, which are affected. You are not allowed to put a number of seconds in the UTC offset. 
+
+PFC: (Slide 9) The third normative change, and this probably relates very closely to the presentation that we will hear from KG about "stop coercing things". Some of the test cases that ABL wrote for the Firefox implementation of Temporal uncovered a concern that led us to make a change in the way we coerce inputs to ISO strings. The story is that some numbers, if you convert them exactly to a string, are valid ISO strings. Here is a number that converts to yesterday’s date if you use `toString` on it. Some numbers look like they might, but don’t, convert to valid ISO strings. For example, here is one that starts with a zero: it's an 8-digit octal literal, which in base 10 is a 7-digit number, so not an ISO string. This is an evil trap to fall into. So the change we are proposing is not to use `toString()` on this sort of input, but to require that if you pass a primitive, it’s a string. That also changes the semantics of what error we throw there. 
If you pass a non-string primitive, it’s a TypeError now, rather than the RangeError you previously got when the input was converted to a string that was not allowed. 
+
+PFC: (Slide 10) Here are some code samples of that. This top one is the silliest one. You used to be able to create a calendar from the number 10, and that would end up as the ISO 8601 calendar, because 10 was parsed as a time string for 10 o’clock, and without an explicit calendar annotation it has the ISO calendar. It was silly that this was allowed in the first place. You used to be able to create a TimeZone from -10, because it would be a fixed-offset TimeZone of UTC-10 hours. Here is the date I shared before. Then the bottom one shows an example of the change in the error that we are throwing. If you created a date from the boolean value `true`, it converted to the string `"true"` and threw a `RangeError`, because "true" is a string outside of the set of strings that are accepted. It’s a `TypeError` now, because it’s not a string. 
+
+PFC: (Slide 11) I will take questions on any of these now. 
+
+DLM: First of all, thank you for your continued work on this. We support all of the normative changes. I wanted to say, we fully understand why you are limiting offset time zones to minutes precision, but in our internal review it was pointed out that the iCalendar format supports seconds, so we might want to expand to seconds in the future. 
+
+PFC: Okay. Thanks for bringing that to my attention. I didn’t know that about the iCalendar format. So you’re okay with limiting it right now and expanding it later? 
+
+DLM: Yes. I understand your reasons for limiting it for now. 
+
+SYG: The normative changes look good to me. I didn’t understand the bit about the string literal confusion. How does limiting coercions help prevent that confusion? You can still pass a number that looks like a string, but isn’t. 
+
+PFC: No, if you pass any number at all, we will now throw a `TypeError`. 
+
+SYG: Numbers were never accepted; I understand. I see. 
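The number-to-string trap described above can be demonstrated directly. The example uses the explicit `0o` prefix, since legacy leading-zero octal literals are disallowed in strict-mode code; the specific numbers are illustrations of the general point, not taken from the slides verbatim.

```javascript
// A number whose decimal string looks like a valid ISO date
// (2023-07-10 in basic format):
console.log((20230710).toString()); // "20230710"

// The trap: a literal written in octal has a very different base-10
// string. 0o5000101 looks date-like as written, but its string is a
// 7-digit number, not an ISO string.
console.log((0o5000101).toString()); // "1310785"

// Under the change, such inputs are no longer coerced with toString():
// any non-string primitive is rejected up front with a TypeError,
// instead of being stringified and then judged as an ISO string.
```

This is why requiring string primitives, rather than coercing, removes the possibility of the octal-literal confusion entirely.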
+ 
+PFC: This `"05000101"` is fine as a string, because it is a valid ISO string. 
+
+SYG: It’s not that the new thing accepts numbers or strings. There's just no coercion; it only accepts strings. 
+
+PFC: That’s right. Yeah. 
+
+WH: I have a question about the slide about the TimeZone resolution. It stated that named time zones can have sub-minute precision, giving `Africa/Monrovia` as an example. But `Africa/Monrovia` has a UTC offset of zero. 
+
+PFC: It does currently. But in the 1970s it had an offset of -0:44:30. 
+
+USA: Nothing super serious, but I wanted to add a little bit of context for the exclusion of sub-minute offsets. This was included in the draft informed by Temporal, because the original grammar was basically taken out of the Temporal spec. However, it did come up during IETF review, and people strongly suggested that sub-minute offsets were not a good thing for us to support moving forward, and that it was a good idea to restrict ourselves to minute precision when dealing with offsets. So that is the reason why we excluded it. 
+
+MLS: I wanted to clarify: in slide 10, if the first 3 were strings instead of numbers, they would be accepted? 
+
+PFC: That’s right. 
+
+CDA: Nobody else in the queue at the moment. 
+
+PFC: Okay. Then I would like to move on to request consensus on these three changes. 
+
+CDA: All right. We have some support in the queue: CDA, DLM, and SYG support consensus on the normative changes. We also have a +1 from LGH. 
+
+DE: I support all the changes. I want to take a minute to dig in more to the feedback from Mozilla. The champion group took time debating the different alternatives, including supporting second or subsecond offsets. It’s not as if the technology to support second or subsecond offsets is developing over time; we should just make this decision now, or in a future meeting coming soon. I don’t see why it should be a follow-on proposal. 
+ 
+PFC: I think there’s nothing on our side stopping us from keeping it the way it was, with nanosecond precision, except for the string format. Maybe it’d be contrary to the spirit of trying to get our string format standardized to then immediately add an extension to it that’s not in the standard. 
+
+DE: Yeah. One thing I wanted to note to Mozilla is that Temporal still does allow TimeZones to have sub-minute offsets. This is about the built-in Temporal.TimeZone class, which when parsing from a string only allows minute granularity. So maybe custom time zones are enough to enable the iCalendar case. 
+
+DLM: I would respond to that. That was feedback from Sean Burke, who works on the Thunderbird project on calendars; I can’t speak to the details of how important this was to them. So I can’t really resolve this question at this meeting. I would need more feedback from them about this. 
+
+PFC: I think it is correct that you could use a custom TimeZone to provide a fixed offset that is not aligned to a minute boundary. 
+
+DLM: Just to follow up, I will ask him to get in touch. He certainly had no concerns about supporting the normative changes, but I wanted to bring it to the committee’s attention. 
+
+DE: Okay. Yeah. Thanks for raising this. So I support consensus on these three things, provided that we follow up soon. 
+
+JGT: Yeah. A custom TimeZone can have any offset down to nanoseconds, so for anybody that runs into this limitation, there is a pretty straightforward way to work around it. 
+
+CDA: All right. Seeing nothing else in the queue, I believe we have consensus on the normative changes. 
+
+PFC: I have written a summary and can copy it in myself, including the follow-up on the offset time zone question. 
+
+DE: I think this is a really big milestone. We do not anticipate any further normative changes, and from the IETF perspective, we don’t think that changes on the IETF side need to be waited on further. 
This is a huge milestone, and modulo the one thing to follow up on [second granularity timezones], which I suspect we will quickly find doesn’t need any change, Temporal can be considered kind of "done-ish". I don’t think implementations need to wait on any additional changes, or at least implementations shouldn’t anticipate any additional changes at this point. Hopefully this will be demonstrated at the next meeting when we'll come back with no changes, and if PFC agrees, I think we should capture this as part of the conclusion. 
+
+PFC: I can add that as well. 
+
+### Summary 
+
+- Consensus on making normative changes to: 
+  - Remove arbitrary-precision integer math and calculations in loops (PR [#2612](https://github.com/tc39/proposal-temporal/pull/2612)) 
+  - Limit offset time zones to minutes precision (PR [#2607](https://github.com/tc39/proposal-temporal/pull/2607)) 
+  - Require ISO strings and offset strings to be Strings (PR [#2574](https://github.com/tc39/proposal-temporal/pull/2574)) 
+
+It was agreed to follow up with Sean Burke from the Thunderbird team about use cases for sub-minute-precision UTC offset time zones. (https://github.com/tc39/proposal-temporal/issues/2631) 
+
+### Conclusion 
+
+All known discussions are now settled. At this point, we do not expect that implementations need to anticipate any other normative changes to Temporal, and we do not expect that the remainder of the IETF process will necessitate any changes. Barring bugs found by implementations, we can consider the normative work on the proposal to be done. 
+ 
+## Time Zone Canonicalization for Stage 3 
+
+Presenter: Justin Grant (JGT) 
+
+- [proposal](https://github.com/tc39/proposal-canonical-tz) 
+- [slides](https://docs.google.com/presentation/d/1MVBKAB8U16ynSHmO6Mkt26hT5U-28OjyG9-L-GFdikE/edit#slide=id.g22181d24971_0_41) 
+- [spec](https://tc39.es/proposal-canonical-tz/) 
+
+JGT: Firstly, I want to say: this is an obscure topic, and I am grateful the committee is willing to spend time on it. Thanks for having me. It’s 4:30 in the morning for me, so I am a little slower than I would normally be; I apologize for that in advance, but hopefully I can keep the energy up. So today we are talking about TimeZone canonicalization and hopefully getting to Stage 3. 
+
+JGT: What we will talk about today is just a recap of the proposal. It hasn’t changed very much since Stage 1, but I will go through what happened during Stage 2, try to recap the spec text changes to keep them fresh in everybody’s mind, and ask for Stage 3. 
+
+JGT: As a reminder about the scope of the proposal and what we are doing: it’s based on the IANA Time Zone Database, the source of time zone data for ECMAScript and for everybody else in computing. There’s a database called CLDR, a database of metadata used for localization purposes, including time zone data; CLDR takes updates from the IANA database and includes them in its own data. There is also an API called ICU, which ECMAScript engines call into. This proposal is about time zone identifiers like Europe/Paris and Pacific/Auckland. There are two kinds: primary identifiers, which are the main identifier for a time zone, called a Zone in the TZDB, and non-primary identifiers. A good example is Asia/Calcutta, a link that points to the primary identifier Asia/Kolkata. Not all time zones have non-primary identifiers, but some do, and those are one of the focuses of this proposal. Now, in terms of manipulating time zone identifiers, there are two variations on this. 
One is case normalization. The TimeZone database is a case-insensitive database: if you give it `america/los_angeles`, it will match America/Los_Angeles with a big L and big A. That's important for implementations: you don't want to store the exact string the user provides, so for storage efficiency it’s important to do case normalization. This proposal, though, is about canonicalization. What that is: in 2022, Europe/Kiev was renamed Europe/Kyiv, and figuring out how we handle those kinds of cases, where there are multiple identifiers corresponding to a particular time zone, is the focus. 
+
+JGT: A reminder of this user complaints slide; you have seen it a few times. The current state is not great from the perspective of developers. The common theme is that people, for sensible reasons, prefer to call their countries and cities the names they want, and not the names some colonizers from hundreds of years ago decided to call them. This is actually just a pretty small sampling of what you can find on Google of people complaining about how this works; this proposal will reduce these kinds of complaints in the future. To summarize the problems that exist today: one is that there is divergence between implementations. If I pass this line of code into Firefox, I will get Asia/Kolkata. If I pass it into Chrome or Safari or Node, I get Asia/Calcutta back. That’s because the latter use CLDR and the former uses IANA to decide which IDs are canonical. Another is that ECMAScript will change the values that programmers give it: if you pass in Europe/Kyiv or Europe/Kiev, you might get one or the other back, depending on the implementation and on when you make the call. That’s confusing and annoying when you have automated tests and you don’t want your snapshot data to change; it’s frustrating. And I mentioned before that developers are reasonably upset that their cities are not called what they think they should be called. 
The final thing is that because there are multiple identifiers for the same TimeZone, you can’t use `===` to compare them; you need some code in between to do an accurate comparison. These are all bad, but they will get worse, because Temporal makes these problems more visible: when you serialize a Temporal.ZonedDateTime into a string, it shows you the ID. In the browser console, in debuggers, when converting to JSON, when you store it in a database, in logs. Today you have to dig deep, several call levels down, to get at the identifier, whereas with Temporal it’s going to be right there in people’s faces. We will see these identifiers a lot more, so before Temporal gets wide adoption, it’s great to get this proposal out there so that we can prevent an escalation of the problems we saw on the last slide. 
+
+JGT: So here is what we did during Stage 2. We landed the two editorial PRs refactoring how TimeZone identifiers are dealt with, both in ECMA262 and in the Temporal spec. These are now landed; thanks so much, especially to the 262 editors, for spending so much time with me to ensure that went through. We finished tests, even though tests aren’t required for Stage 3. We had a lot of discussion in the repo; we have probably 20 or 30 issues in there. And we had two TG2 reviews. The summary of the TG2 reviews is that we are not going to expand the scope beyond what was set at Stages 1 and 2. There were ideas on how to expand the scope, but we weren’t really able to get consensus on any of the things that were actionable, and some are not actionable yet. So the plan is to ship the proposal as is, and any other future related changes would be normative PRs in ECMA402. Because it’s time-sensitive, we want to get the changes out before Temporal is widely adopted. There have been no normative spec changes since Stage 2. I'll run through it quickly here. 
+ 
+JGT: As a reminder, this is the same proposed-solution slide that we presented last time. I will run through it and give you a status update on the pieces: API changes that reduce the impact of canonicalization, and changes to how canonicalization changes in the IANA database are handled. The first piece is to stop canonicalizing user input: if I pass in Asia/Calcutta, I get Asia/Calcutta back; if I pass in Asia/Kolkata, I get Asia/Kolkata back. That one change is probably the biggest thing we can do to reduce the level of user complaints, because we’re not changing people’s data anymore. The next is to expose a new public API to compare two different TimeZone identifiers to see if they represent the same TimeZone. For both of these API changes, the spec is complete, and the tests cover the full surface of the API, are passing, and have been reviewed. The spec text hasn’t changed except for minor editorial tweaks since Stage 2. 
+
+JGT: The next piece is to provide guidelines for implementers on how they should deal with future changes to the TimeZone database, especially future renames, because they tend to be the most destabilizing, like Kiev to Kyiv. There’s a recommendation in a note, similar to the note we had at Stage 2; we wordsmithed it a little bit. But there is one piece we had hoped to also include: spec text for how implementations should align on a cross-engine set of canonical IDs. We are not there yet. The reason is that engines generally rely on CLDR and ICU to handle the interaction with the TimeZone database. CLDR has agreed to expose the IANA canonical values out to the world, but they are moving slowly, and given the time frame on which we need this proposal to move, I don’t think we will get that support from CLDR and ICU in time. We are going to continue to encourage folks to support this and continue discussing it, but any spec changes there will have to wait until that support is available. 
Thankfully, because we’re making the API changes, the impact of these kinds of things is reduced. Even though engines will not all act the same, the scope in which they act differently is going to be a lot smaller with this proposal's API changes. 
+
+JGT: With that, if this proposal reaches Stage 3, I will port a bunch of issues out of the proposal repo into ECMA402, because they will become 402 issues going forward. That does not mean we will forget about them; we will continue to pursue them, but not put them in the scope of this proposal, so we can get it out. 
+
+JGT: Here is a reminder of the spec text. There are 5 places, currently in the Temporal spec and in the current ECMA402 spec, where canonicalization of TimeZone identifiers happens. In these places, we will replace the canonicalized value with the original case-normalized ID that the user provided. Here is the Temporal.TimeZone constructor, the internal version. Here, the ZonedDateTime type, when it’s storing its TimeZone slot. Here is toLocaleString. And Intl.DateTimeFormat. Each moves from storing the canonicalized value to the case-normalized value: not the canonical analog, but what the user provided. 
+
+JGT: Because we took canonicalization out, we need to add it back when we compare two time zones together, the triple-equals case. Internally, we avoid that today because everything is canonicalized; once we stop canonicalizing, we need to canonicalize when comparing for equality. There are editorial tweaks to match, and other editorial changes on the Temporal side, but this is the same as you have seen before. 
+
+JGT: Finally, the last API change is to expose a new public API that takes the abstract operation used internally and exposes it to users, so they can compare identifiers and replace their now-buggy use of triple equals with a more viable alternative. 
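The difference between `===` and the proposed comparison can be sketched as follows. This is an illustrative simulation, not the spec steps: the `LINKS` table is a tiny hypothetical sample of the IANA link data, and `primaryIdentifier` and `timeZoneEquals` are invented names standing in for the internal abstract operation and the public API.

```javascript
// A tiny stand-in for the IANA link table: non-primary identifiers
// mapped to the primary identifier they point to.
const LINKS = new Map([
  ["asia/calcutta", "asia/kolkata"],
  ["europe/kiev", "europe/kyiv"],
]);

// Resolve an identifier to its primary form (case-insensitively, as
// discussed in the case-normalization part of the presentation).
function primaryIdentifier(id) {
  const lower = id.toLowerCase();
  return LINKS.get(lower) ?? lower;
}

// The comparison the proposed API performs: resolve both identifiers
// to their primary form before comparing.
function timeZoneEquals(a, b) {
  return primaryIdentifier(a) === primaryIdentifier(b);
}

console.log("Asia/Calcutta" === "Asia/Kolkata");              // false: the buggy comparison
console.log(timeZoneEquals("Asia/Calcutta", "Asia/Kolkata")); // true: same time zone
```

Once user input is no longer canonicalized on the way in, a comparison like this is what restores the ability to ask "are these the same time zone?".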
+ 
+JGT: And the final spec change is a note for implementers to explain what they should do when we get a future renaming change like we saw last year with Europe/Kyiv. As a note, these renames are really rare; they average less than once a year. There was one in 2022, and I think 3 in the 8 years before that. This is not something that happens all the time. But when it happens, Android had a good idea: they added the new name as a non-primary identifier, so they left Europe/Kiev as the primary one. The reason that is helpful is that if you switch over as soon as the TimeZone database changes and start sending Europe/Kyiv out to everybody, your communication partner on the other side might not be updated yet; that’s what happened in 2022. With this approach, you wait. In this case, Android waited two years, which seems a reasonable amount of time; during Stage 3 we will get feedback from implementers on whether that’s the right amount of time. After that, you swap them: Kyiv becomes the primary identifier and Kiev becomes the non-primary one. That way, there is much less chance that your communication partners will say "What is Kyiv? I have never heard of that." This is a recommendation, not something we are going to require, because there are interesting cases, like when the operating system is using the same values as ECMAScript does. But this is the general recommendation. 
+
+JGT: So the status for Stage 3 is that the spec is done, tests are done, and a polyfill is available. There are thumbs-up from 3 of the editors; there is one more editor, and I am hoping they will be able to review the 20 lines plus a few paragraphs. I think we have met the criteria for Stage 3, I hope. Let’s open the queue for questions. 
+
+DE: I am a big +1 to this proposal: the managed change at both the Temporal and Intl levels, and the scoping, just moving on without primary identifiers being totally settled. 
And doing what is possible. So yeah, +1 to Stage 3. 
+
+DLM: The SpiderMonkey team supports this for Stage 3. Great work on this. 
+
+SYG: I want to ask a question about the note. The waiting period thing, is that at the ICU level? Do you know how Android implemented the waiting period? 
+
+JGT: Because Android ships infrequently, they scheduled the change for the next major release, which happened to be two years after the change was made. So I think this is one of those things we will work out during Stage 3. DLM and Anba and I have started to have that conversation, but this is one of those things we have to figure out during Stage 3, because implementers will tell us the right way to do things. Does that work for you? 
+
+SYG: I am somewhat skeptical of having non-normative notes with recommendations about release cadence, if it comes down to that. We will see how it shapes out, I guess. A lot of the decision-making might not be part of the engine itself, but of the broader thing that the engine is part of, and I am not sure how much force having it in 262 can have versus something at the browser level. I am happy to see how it shakes out. 
+
+JGT: Makes sense. Just as a note about the 262 versus 402 split: this is something that the 262 editors and I and Richard spent time talking about, and the tentative split we have now is that anything dealing with the IANA database lives in 402, because it interacts with ICU and that’s the main source, whereas the core dealing with TimeZone identifiers and primary versus non-primary lives in 262. So you can think of it as: 262 is the base that defines the concepts, and 402 is where the rubber meets the road in dealing with the IANA TimeZone database. 
+
+PFC: Yeah. I want to say, I was one of the Stage 3 reviewers for this and I support Stage 3, and also I want to take a moment to congratulate JGT on the great work. 
This all started from a small discrepancy between engines that we noticed early on with Temporal, with how Intl.DateTimeFormat works. I was convinced this was just a browser bug. And slowly, over a year, JGT convinced me it was not and it needed addressing. I think, yeah, this is an example of really good work. Thanks.
+
+JGT: Had I known how much work it would be a year ago, I am not sure I would have done it. But I am glad it's done.
+
+LGH: I support this for Stage 3. Since you mentioned you want to get this out ahead of Temporal, I am curious how it works in practical terms, since it has a dependency on Temporal itself.
+
+JGT: There are those questions in the slide. I definitely didn't mean to imply it should be ahead of Temporal, but rather before Temporal is widely adopted, I would like to make sure it's out. I was going to defer that question . . . but we can talk about it now since we have the queue open. That's the biggest question I have: what is the right way to stage this relative to Temporal? What I don't think makes sense is to try to push this proposal out ahead of Temporal, because so much of it is stacked on top of Temporal. We might change DateTimeFormat in the meantime, but that doesn't make sense. So the main question I would have, as feedback from the committee, assuming we get to Stage 3, is what is the right way to do it? One way is to wait until Temporal gets to Stage 4 and then do it. Or we could say: implementers, hey, you have one thing to worry about, and you should assume that this lands on top of Temporal. You just build them as one thing. I like Option 2. I don't have a strong feeling about how to do that, whether we should literally merge this proposal's spec text into the Temporal text, or put notes in the Temporal spec that say "this has been superseded by this other proposal. Go look here." Somewhere in Option 2 would be how I would like to do it. If you have a suggestion or proposal, I would love to hear it.
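+As background, the canonicalization behavior under discussion can be probed today with a small sketch (the `observedTimeZone` helper name here is ours, for illustration only; it is not an API from the proposal):

```javascript
// Ask the engine which identifier it reports back for a time zone ID.
// Engines historically replace the caller's ID with their canonical
// spelling, and which spelling that is varies by engine and ICU version;
// this proposal removes the divergence by preserving the input ID.
function observedTimeZone(id) {
  return new Intl.DateTimeFormat("en", { timeZone: id })
    .resolvedOptions().timeZone;
}

console.log(observedTimeZone("UTC")); // "UTC"
// May echo "Asia/Kolkata" back, or rewrite it to the legacy spelling
// "Asia/Calcutta", depending on the engine:
console.log(observedTimeZone("Asia/Kolkata"));
```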
+
+LGH: Merging back into Temporal would make sense.
+
+SYG: I have thoughts on the shipping topic.
+
+DE: I vote Option 2. As LGH said, Option 2 sounds nice. I previously advocated for separation of these proposals because I didn't understand the space as well. And now either merging, or implementing and shipping at the same time, seems like a good way to go.
+
+PFC: What I would like to see is that we take the parts that apply to Intl.DateTimeFormat without Temporal and merge those into the 402 spec when this reaches Stage 4. This is a discrepancy that existed even before Temporal. And then the rest we can just merge into the Temporal specification.
+
+JGT: And just as a question: assuming this proposal reaches Stage 3, what are your thoughts on that? Do it tomorrow, or wait for Stage 4?
+
+PFC: That's a good point. I don't know how the committee feels about combining the proposals. If both are Stage 3, we could put parts at Stage 3 and then still have a separate PR for Stage 4, for the parts that apply only to DateTimeFormat. That would be my first intuition. I don't know if that's too complicated or what.
+
+JGT: One potential complexity is that the Temporal spec also makes changes to DateTimeFormat, so there's a little bit of merge weirdness there. Do you have any thoughts: would it make sense to bring the proposal into DateTimeFormat, with Temporal eventually layering on top of that, or should we just make the changes that Temporal is making to DateTimeFormat in the Temporal spec, so it all goes in as one piece? What do you think?
+
+PFC: That seems reasonable, but it might turn out to be too complicated. The motivation for this is that I think it would be good to address the browser discrepancy independently of Temporal. As you noted in the beginning of the presentation, people are upset that their city is called by the wrong name when they see it in DateTimeFormat. That's the motivation.
If it's too complex, I would also be fine with Option 2. Either one of them.
+
+JGT: That makes sense. Why don't we do this: take that as an open issue and discussion – I don't want to take the committee's time for that – and essentially figure out what is right. We are in agreement, at least from what I have heard so far (and object if you want), to merge this proposal into the current Stage 3 Temporal spec, and leave as an open issue how to deal with DateTimeFormat, and discuss in TG2 and see what would work best for implementers. Does that work for you?
+
+DE: I think that's a fine conclusion.
+
+SYG: This addressed my concern. I will say my concern: if this remains separate from Temporal, and there are two separate spec texts and two proposal repos at Stage 3, this is not something to ship ahead of Temporal, because then we would have shipped this without having the equality operator to compare. I want to confirm my understanding of the agreement. If so, merging this spec text into the Temporal spec text addresses my concern. If there are not two separate spec texts, and no way for someone who was not here at the meeting to reasonably understand this as shippable independently, I support Stage 3. If there is that risk, then I would rather this doesn't appear as Stage 3.
+
+JGT: I definitely agree with what SYG said. It's the simplest option – having thought through some of the options, let's merge the proposal spec text into Temporal and not try to do anything ahead of Temporal. There are a bunch of moving parts. We can take that offline and figure out the right way to do it. I am inclined to agree with you that that is the right way to do it.
+
+SYG: The mechanics of how to do it, we can take that offline. I do not want to take the bigger decision to merge into Temporal offline, because that decides whether I support Stage 3. Or like a 2.99 or something where it's like, yes, everything is done, but we do not actually signal that it can ship independently, because it can't.
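+SYG's point about the missing equality operator can be sketched as follows. The `timeZoneEquals` helper below is ours, approximating the proposal's TimeZoneEquals semantics via Intl canonicalization; it is not a real API:

```javascript
// Canonicalize a time zone ID using the engine's Intl data.
function canonicalTimeZone(id) {
  return new Intl.DateTimeFormat("en", { timeZone: id })
    .resolvedOptions().timeZone;
}

// Approximation of TimeZoneEquals: two IDs are equal if they
// canonicalize to the same identifier.
function timeZoneEquals(a, b) {
  return canonicalTimeZone(a) === canonicalTimeZone(b);
}

// Plain string comparison cannot tell that these denote the same zone:
console.log("Asia/Calcutta" === "Asia/Kolkata"); // false
// An equals operation that canonicalizes both sides can:
console.log(timeZoneEquals("Asia/Calcutta", "Asia/Kolkata")); // true
```

+Without something like `Temporal.TimeZone.prototype.equals`, shipping un-canonicalized IDs from Intl.DateTimeFormat would leave developers with only the unreliable string comparison.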
+
+JGT: This should not be independent of Temporal. Does that address your concern?
+
+SYG: I want explicit consensus around the point of merging it into the Temporal spec.
+
+DE: This is a point where PFC made the opposite argument. If an implementation decides that Temporal will take a long time, it would be valid to ship the Intl part of it sooner. But it would also be valid for them to ship Temporal together with this. What's not valid is to ship Temporal without this thing. So I think –
+
+SYG: No. I am saying it's not valid to ship it without – okay. Yeah.
+
+DE: That would be a great option for V8 to adopt, if V8 is looking for what it should do.
+
+JGT: But I think SYG's point, which makes a lot of sense, is: if you stop canonicalizing, users have no way to compare two time zone values, because there is no equality operator. SYG convinced me we should not ship this in DateTimeFormat alone, because it takes away functionality, and that's bad.
+
+DE: I see. I am happy with that conditional conclusion. And I think if we get consensus on 2A, that would make it fully solid.
+
+PFC: +1
+
+SFC: Yeah. I support the proposal, but I also hope that we can reach consensus on the rest of the issues, because they should be in the proposal. I understand that it's important to get the proposal through. So yeah.
+
+JGT: Can I ask for consensus on Stage 3 and Option 2A?
+
+CDA: Do we have explicit support from anyone for Stage 3 with Option 2A?
+
+SYG: +1 from me. Stage 3 with Option 2A.
+
+CDA: All right. You have some support in the queue from KG, from EAO and CDA. Any other support? Any objections? We have one more +1 from USA.
+
+SYG: I have a suggestion for Justin. For Option 2A, to minimize the risk of someone not in the room actually implementing and shipping it, could we have the rendered spec text redirect – even if it makes sense to read independently, have it redirect to the Temporal section instead?
+
+JGT: Yes. I have to figure out the right way to do that.
A meta question: does this proposal exist anymore, or is it now just part of Temporal? Will it ever get to Stage 4 independently?
+
+JHD: The way I had interpreted it – I am asking in the context of how the proposals table is updated – is that we can probably move it to the inactive proposals list with a note saying that it was merged into Temporal at Stage 3, and never put it in the Stage 3 list, and this proposal would no longer be a to-do for those not in the room once the PR is opened to do the merging. As far as the redirects and stuff SYG is asking for, I am happy to help you brainstorm mechanics for that.
+
+JGT: That would be great. Does that sound like a good plan, or are there other things that people would want to see? We are running out of time.
+
+SFC: Only a sliver of this proposal is Temporal; it has long been a standalone proposal for 402. It solves problems on its own merit. I'd be in favor of keeping the proposals separate.
+
+DE: The reason is the missing equals method. Right now they are canonicalized, so you can compare with `===`; you would have this missing capability without Temporal. It's an ugly intermediate state that we shouldn't expose to developers.
+
+JGT: I might try to get like 10 minutes tomorrow, because it sounds like there are still some open issues here. We will follow up offline; it might be worth getting a few minutes tomorrow.
+
+CDA: Yeah. Justin, would you like to dictate a summary for the notes?
+
+### Summary
+
+- Presentation:
+  - Problems to solve:
+    - Implementation divergence
+    - ECMAScript unexpectedly changes programmer input, e.g. Europe/Kyiv ⇨ Europe/Kiev
+    - Developers are upset by obsolete names, e.g. Calcutta, Kiev, Saigon
+    - `===` is unreliable for comparing IDs across engines, platforms, or time
+    - Temporal intensifies these problems by making IDs more discoverable
+      - e.g. Temporal.ZonedDateTime shows ID in console, debugger, logs, JSON, etc.
+  - During Stage 2:
+    - Landed editorial PRs: tc39/ecma262#3035, tc39/proposal-temporal#2573
+    - Finished Test262 tests (and built polyfill to run them)
+    - Implementer discussions in GH issues
+    - 2x TG2 reviews, with outcome of not expanding spec changes beyond Stage 2 text, because further changes didn't have TG2 consensus and/or were dependent on CLDR/ICU changes that aren't ready.
+  - Status of planned work
+    - API changes: spec complete, tests complete
+    - TZDB identifier guidelines for implementers
+      - Handling future changes (esp. renames): spec note complete, no tests needed
+      - Help implementers converge on a cross-engine set of canonical IDs: not complete, waiting on CLDR (won't ship in time for this proposal). Will follow up in ECMA-402 separate from this proposal. We'll move open issues to 402 repo.
+  - Spec changes:
+    - Stop canonicalizing in 5 places in the spec
+    - Add canonicalization to TimeZoneEquals
+    - Add new Temporal.TimeZone.p.equals method
+    - Add a note that recommends a 2-year waiting period before a renamed ID becomes primary (idea borrowed from Android)
+  - Stage 3 status
+    - Spec, tests, polyfill: all complete
+    - Stage 3 reviewers: approved
+    - 3 ECMA-262 editors (SYG, KG, MF): approved
+
+### Conclusion
+
+- Proposal reaches Stage 3 🚀🚀🚀
+- We'll merge this proposal as a normative PR into the Temporal Stage 3 spec so that implementers only have one thing they need to implement. JGT will author this PR ASAP.
+- Implementations should NOT implement this proposal's changes to Intl.DateTimeFormat until Temporal.TimeZone.p.equals() is implemented, because without equals() there's no way to know if two time zones are equivalent. (Also because it'd likely break all Temporal polyfills.)
+- We'll change this proposal's repo and rendered spec text to avoid confusion with Temporal. What specific changes those should be will be worked out with JHD, SFC, Temporal champions, and maybe others.
+- We tentatively agreed that this proposal will be subsumed into Temporal, but SFC had concerns around that. JGT will follow up with him to understand those concerns. We may get extra time tomorrow to work things out further.
+- We'll continue working with TG2, CLDR, and ICU on further changes to align implementations' canonicalization behavior. We'll port issues from the proposal repo to the 402 repo, and will propose normative changes to ECMA-402 as they are unblocked.
+
+## Source Phase Imports for Stage 3
+
+Presenter: Guy Bedford (GB) and Luca Casonato (LCA)
+
+- [proposal](https://github.com/tc39/proposal-source-phase-imports)
+- [slides](https://docs.google.com/presentation/d/11vSrS7-112rb2zJxpBpKnSj4XUyOy-6w54neQSStJ-4/)
+
+GB: Presenting source phase imports again, hopefully for Stage 3. We have had the Stage 3 review PR up for some time. We can give a quick recap of where this proposal has come from, how it has developed, the current decisions, some of the recent decisions that were made at the last meeting, and discussions that followed from that.
+
+GB: So to summarize the syntax: this is the syntax for the proposal. It is both a static syntax and a dynamic syntax for importing modules in their source phase. In particular, this is a new reflection of the loading pipeline into different phases of the module loading process, where getting access to an earlier phase unlocks new capabilities both for JavaScript and for other languages that integrate with JavaScript. The motivation for the proposal initially comes from WebAssembly, where there is an explicit distinction between the source and the instance: you have WebAssembly.Module and WebAssembly.Instance, the latter being a linked and executed instance. And because of that distinction, we want to be able to get hold of a WebAssembly.Module object through the module system, and this cuts out a bunch of existing boilerplate code that exists today.
And that causes a lot of friction in various WebAssembly workflows, bundlers, and code that needs to work across different JavaScript environments. This is the standard way in which one needs to do WebAssembly today. The key part is the instantiation process: how to resolve the WebAssembly binary, and this is a step that exists parallel to the normal JS module system. You have to fetch the binary, pipe it in, and then separately instantiate it; in this case, esbuild has to be instantiated with a particular specifier for the runtime. And the WebAssembly ESM integration on its own does not allow for dynamic WebAssembly instantiation.
+
+GB: So the benefits we get for WebAssembly are fixing the portability issue that we currently have in the WebAssembly ecosystem, where you do these separate steps and bundlers don't understand them easily. Getting access to a WebAssembly.Module and statically analyzing that is difficult. Users might get it wrong and not do it right. With the WASM integration, you want to support sending modules between workers and various virtualization-specific cases. And so this fills a gap in that use case while solving the linking for WebAssembly, and provides a bunch of ergonomic benefits that should significantly improve the portability of these workflows.
+
+GB: We get static error semantics, and an improved security argument as well.
That is because – this is the syntax that esbuild could update to, using the new import source phase syntax – you get the WebAssembly module through the JavaScript module system and can then directly instantiate it with the existing WebAssembly instantiation API, where the esbuild object is a WebAssembly.Module. And there are even potential security benefits that could apply either in browsers or in server-side runtimes: because the code is no longer arbitrary WASM bytes being compiled – we have access to it through the module system – the same types of security policies and reachability policies can also be applied to the WebAssembly code that is executed. So it is another level at which you can obtain a reference to source code that you can then dynamically link, and it opens up virtualization possibilities.
+
+GB: To explain the history of how this proposal got to this position: initially, we were looking to solve this WebAssembly use case, and at the time import assertions did not support unique semantics, so we could not follow the approach of treating this as a new behaviour in the module system whereby you get the WebAssembly module object. As we walked down that road and started having the various discussions around it, we realized there is a lot of benefit in seeing how this can unify with JS virtualization and some of the work on module expressions at the time, where there are similar concepts: access to something to . . . and virtualization of modules. So WebAssembly is another module type virtualized through the JavaScript module system, and out of that we had various discussions around the different module proposals, and we went off the deep end in terms of building this modules epic and the way the things interrelate on these virtualizations. And then import attributes ended up supporting changing semantics.
But having gone through this journey, it was clear that there is a clear representation of the module as going through different phases that we can expose.
+
+GB: And so this became the phases simplification: we were able to see these as part of a separate proposal of different phases in which you can import a module, with this being a particular phase of loading – a new primitive in the module system. And that's the process. I will pass it over to Luca to go into more details on the phasing.
+
+LCA: To recap the phases here: you can think of a module as going through approximately 5 phases, the 5 you see on the screen. Generally, this is how it works. You start with the resolve phase, which takes a specifier . . . and resolves these module requests, on the web via URL resolution. And that may take [inaudible] maps into account. Then we fetch and compile this module – through the network, from disk, or from elsewhere; on the web, the network is used to fetch CPS (?) right here. This is also the part of the phasing that we are hooking into with source imports, exposing the result of this phase to the user. The next phase is attaching evaluation context. This is when a module gets an identity: it gets linked to a specific realm, an import.meta object, those things. Next, we go through the list of imports that the module has, possibly load those modules, and link them together. This is also the phase being worked on through the defer proposal that Nicolò presented yesterday. And then finally we evaluate, and this is the phase that we currently always get when importing: we do all of the behaviors.
+
+LCA: The semantics of this proposal are that source phase imports expose the module's source object. This is a representation of the module source. It is exposed via a new GetModuleSource abstract method on Module Records. The host load hook returns exactly one result per referrer + specifier + attributes. This is unchanged.
Modules that do not expose a source representation yet (we will get back to this in a minute) throw when they are loaded via a source phase import. All module source objects – the values provided from source phase imports – inherit from the same class, to be able to identify them. This AbstractModuleSource class is not exposed on the global scope; it is just that all module sources inherit from this class. In the future we might extend it to add common methods, for example bindings; this is part of the modules epic that we're working towards in the modules call. For now it only has a toStringTag implementation with an internal slot check, like TypedArrays do. This allows for brand checking, and the spec requires all module source objects to inherit from the class. The class is not constructible or callable; users cannot create their own module source objects. Host objects can inherit from AbstractModuleSource, and host classes like WebAssembly.Module that inherit from it can be constructed. But the abstract class itself is not constructible right now.
+
+GB: A discussion that came out of the previous meeting is that, in order to support this for WebAssembly, we need to update the prototype and constructor for WebAssembly.Module to have this new inheritance chain. There is web observability to this change, because we are changing the inheritance, but there is precedent for such changes and we haven't identified any use cases in which there would be exact type checks that would be invalidated in existing WebAssembly code. So we were able to work on the WebAssembly integration and integrate a PR into the proposal. And so there are a few PRs, and the update was also presented at the WebAssembly CG meeting on the 20th of June. There were some really interesting questions asked about the proposals, but there was generally positive feedback and no major implementation concerns were raised. So that's the current status.
We were able to investigate the specification changes and follow up on them in terms of how this can be integrated with the current WebAssembly ESM integration proposal.
+
+GB: So this now puts us in a position in which the WebAssembly integration proposal should be ready for shipping. And the champions recommend that these features can now be shipped together. So we can now ship both the ability to import WebAssembly in the JavaScript module system with normal imports, as well as these new source phase imports. By shipping both of these features combined, we should cater to all of the use cases that are necessary to support WebAssembly on the web. If implementers do not want to implement the full integration at first, source phase imports also provide a potentially simpler initial implementation, because there is less heavy lifting involved in tying the module systems together. So we noted that as a possibility, though we certainly recommend that both are implemented, if possible.
+
+GB: And obviously, we should consider what this representation is going to be for JavaScript. At the moment, according to the spec, if you load JavaScript in the source phase it will throw an error. The intention is to follow up and provide a source phase virtualization for JavaScript, and that's the next crucial part of the efforts we are working on . . . module declarations, compartments. And so we are committing to moving forward with that work. And when we ship this source phase, there is a sense that we are opening the door to starting to ship from these various proposals, and that we should be able to follow up with the others.
+
+GB: Now, there is some risk with that, because it's a big step we are taking. But it's a great opportunity for us to be able to start shipping these proposals, which we have been working on for, I think, coming on two years at this point.
And we have identified a huge amount of cross-cutting concerns and gone through the design discussions from the start, so that we have alignment between the proposals, which we believe can produce some really powerful primitives for JavaScript that should significantly improve virtualization use cases.
+
+LCA: We discussed the proposal already. First up is the dynamic import syntax. In the last meeting we discussed whether to use the `import.source` syntax or a regular dynamic import with an options bag. We agreed during the meeting to use the `import.source` syntax, and the proposal has been updated to use this. Next up, the export source static syntax. The topic came up in the last meeting: export source as a shorthand for importing the source and re-exporting the identifier. As identified by Ron, we believe there may be actual potential use for re-exporting, by allowing a module composed of multiple sources to be defined in a single file, with module expressions and related proposals in the future. We will not pursue this in the current proposal, because the export default from form for regular evaluation imports does not exist either. So we would like to see this explored within the context of the export default from proposal. Finally, a follow-up on aligning the import source static and dynamic syntax. We discussed whether to align the static syntax, which uses a space, with the dynamic syntax, which uses a dot. There is currently no precedent for declarations containing dots. Additionally, export source, which also uses a space, would then have to use a dot in this case, introducing a second declaration containing dots and also introducing export dot something as new syntax. Because of these points we decided against this change, and we are sticking with import space source. I don't think there was major controversy in the last meeting; everybody seemed okay with that.
+
+LCA: On Stage 3 reviews: the Stage 3 reviewer reviews are complete, from NRO, DE, and KKL. Thank you for those.
And editor reviews are mostly done – some editorial fixes and a rebase onto import attributes. So thank you to SYG, KG and MF for those.
+
+LCA: One question came up: whether the phase that we are exposing should be called the source phase. We refer to this as the module source phase, with source as the keyword. There's a question of whether users will think source means unparsed source or parsed source. Right now the representation is parsed, compiled source. Do you want to speak to this?
+
+GB: So I suppose the concern is that source typically refers to the source code, and the concept of a source phase is a new concept we are introducing, which may be confused with the concept of source code. When it comes to the JS representation, the source phase can very much be thought of as a representation of the source code. And insofar as source code can be considered optimizable, perhaps the distinction around parsing isn't as clear to end users as it is to V8 engineers. We may also, for the JS source, be exposing things like looking up the imports and exports, and even having a getter for the source text. It very much would be that kind of representation. We did initially explore the module term for this. But the issue we have with the module term is that there is conflation on that term between the WASM ecosystem and JavaScript: there, module means the compiled module in the source phase. So we probably don't want to use that term, and we haven't heard any proposals for a better term. That's pretty much where we land on it at the moment. But we can certainly hear from SYG shortly.
+
+SYG: Yes, specifically the concern was that source sounds like what you actually already mean by asset, or at least in the future when we have assets. That's what it means: it's just, like, uninterpreted, just-get-the-text kind of thing.
When we bikeshedded the name internally, the two names that had some traction were “handle”, which I think is not good because “handle” can mean anything, and “instantiable”, which I think has some promise. Specifically, source, in this case, is about the thing that you can instantiate into a module, whereas assets are probably never going to be things that are instantiable. So if you import an instantiable – I guess the problem with the name is that it doesn't sound like a phase, it sounds like a thing, but it seems to reflect the intuition better, in our opinion.
+
+LCA: I have a couple comments on this. First, on the point that source sounds like an asset: I think this also depends a bit on how you define an asset, which I think right now is not completely clear. Asset could mean a blob of bytes, and proposals for asset references have been floated around; especially at the beginning of this proposal, we were considering asset references as one of the directions to take this. So, yeah, I see the confusion, but I don't think saying that it sounds like what assets actually are is correct either, because we don't really know what assets are. I agree on “handle”: handle could mean anything. Handle could also mean an asset reference; it does not even imply it has been fetched. It could be a “handle” to the module request. “Instantiable” seems more reasonable. I agree with your point that it doesn't really sound like a phase, but rather like a thing. It also brings up the question of what the JavaScript representation is: is it a module instantiable? Right now the proposed name is module source, which I think makes sense and would fit with the source keyword here, so I think that's something else to take into account.
+
+SYG: Thanks for your response. I think the instantiable name need not be the name of the concrete constructor of the JS module source.
I think it makes perfect sense as, like, where it tops out: an abstract instantiable module sounds like a perfectly good superclass name. But we're not bikeshedding that yet. But it seems that that threads the needle for me, for the source confusion at least. This is not a blocking point, but if there's no violent reaction against instantiable, we prefer that over source.
+
+LCA: I'd like to hear what other folks have to say. Maybe this is just because I've been working on this for a while, but I personally prefer source. It's not a super strong opinion, though.
+
+DE: More people had agreed with ‘source’ in the Matrix chat, by the way.
+
+KG: Yeah, I just instinctively strongly dislike instantiable. It's harder to say, it's harder to spell, it's harder to tell what it means. It's not a word that you will ever have used before in your entire life. I agree source is a little confusing, but instantiable seems worse to me.
+
+LCA: Okay, let me rephrase that question. Is there anybody that strongly dislikes the source name? Is there anybody that's on -- that -- sorry, go ahead.
+
+SYG: Like, other than me, you mean?
+
+LCA: Yeah, other than you.
+
+NRO: I feel like the general reaction on Matrix has been that, like, some people don't really much like source, but, like, it's the best name of all the names that have been proposed.
+
+MLS: I'm not so fond of names having source, but I don't like instantiable either.
+
+CDA: Just a reminder to please use the queue for responding. DE is next.
+
+DE: I'm plus one for keeping the source name, for the same logic NRO used. Should we do a temperature check here? The goal would be to capture potentially distributed discomfort with “source”.
+
+LCA: Okay. Sounds like some folks want a temperature check. Do you need time to set up the temperature check?
+
+CDA: I do not. We can go ahead and do it. Let's just be very clear on what the statement is for the temperature check.
+
+DE: So before we do the temperature check, let's go through the replies, and we'll give a statement of what the temperature check is before it starts.
+
+CDA: I'm not sure we have replies on the topic. Nicolò, was yours a reply?
+
+NRO: That was a reply for the next topic.
+
+CDA: So we don't have anything on the queue for replies. Is there any --
+
+DE: I see. So I want to propose, for the temperature check: do you feel good about using the name “source”? The goal of the temperature check is to capture potentially distributed, but not quite, you know, vetoing, discomfort with the “source” term. If there is very widespread discomfort, we should reconsider and find another term. But if everybody's sort of okay with it, or strongly okay with it – or most people are – then I think we should move forward with it. So put up the emojis again.
+
+SFC: The word source in the syntax?
+
+DE: Yes. So ‘strong positive’ if the name “source” seems awesome to you, ‘unconvinced’ if it seems pretty strange to you, ‘confused’ if you're confused by it. Does that make sense?
+
+CDA: All right, maybe give it till, I don't know, the 39 minute mark, which is another 30 seconds or so. We have only a little over 20 responses so far, and we have quite a few more than that in the room and participating, I think. I'd like to see some more responses, if possible. All right, we're going to screenshot the temperature check.
+
+> 10 strong positive, 4 positive, 2 following, 5 confused, 2 indifferent, 0 unconvinced
+
+DE: So a lot of people answered confused, including people who haven't spoken yet in this discussion. Could people maybe elaborate on their thoughts there?
+
+CM: Do you want to do the queue, or should we just speak up?
+
+LCA: Just speak up.
+
+CM: Yeah, I mean, confused is literally what it is. I'm kind of indifferent in the sense of, well, it's okay, I guess, but I'd prefer a different term. But if you can't come up with anything better, go for it.
DE: Chris, you also put that you’re confused. What do you think?

CDA: I don’t know if I have strong thoughts on this. I don’t know if confused is the right word. It’s more of a -- it’s just the overloading of the term, I think, is what I struggle with. That’s it.

DE: Okay, and Hax, do you have any thoughts here?

Hax: I think I have similar feelings. Previously when I introduced this proposal to others, people always asked if it just gives you a string. But I think it’s not a blocker, so if we can’t find a better name, I think people will learn to love the term, yeah.

DE: Okay, so overall from the confused voters, what do you recommend we do next on this proposal? Thanks for giving your recommendation, Hax. And, yeah, feel free to yell out.

CM: What I’d say is: if somebody comes up with a great suggestion -- my sense is, actively keep your eyes open for a better term. If you can’t come up with one, so it goes.

DE: Thank you. Do you have thoughts on what the next steps should be?

MLS: Well, slide 9 says it’s the compile stage when we’re doing this, so I don’t think this is the right one. But “compile”?

SYG: But it’s maybe not, like, compiled. Sorry, I should get on the queue, but I have a concrete suggestion for next steps.

DE: You’re confused about it also.

SYG: I think let’s open an issue for inspired suggestions, set a hard deadline that’s reasonable -- like, T plus, I don’t know, six days, five days, something like that from the end of this plenary. And if there’s nothing better that everyone, especially the champions, can live with, we keep “source”. And consider that as a condition of Stage 3.

LCA: Yeah, that seems very reasonable. We can do that.

CDA: Okay. We have a little under 10 minutes left. If it’s all right, we can proceed with some other topics in the queue. Does that seem okay?

LCA: Yeah. 
CDA: All right. We had EAO.

EAO: How can we test this without a concrete JavaScript module?

NRO: Import attributes have a similar problem: testing them is difficult because there is this layer between 262 and, like, whatever wraps 262. Specifically for import attributes, the only way to test the specific behavior is to require whatever tool you use to run test262 to provide hooks -- like how we require the host to provide the global `$262` object. And I think the only way to test this proposal would be to require that, or to provide, like, a stub module source we can use in some tests.

EAO: I’m happy to accept an assertion that this is testable without a concrete JS ModuleSource.

LCA: Yeah, it’s testable if we add some assertions to test262 that the host must follow certain protocols. Okay. The queue is empty. So we’d like to ask for Stage 3, conditional on opening an issue for naming suggestions, and I guess we’ll set the deadline to next Friday. That would be one week after the end of the meeting. Sorry, that should be rephrased: conditional Stage 3, pending that we open an issue collecting feedback until next Friday for naming suggestions, and unless we find a better name in that issue by next Friday, we will proceed with the current “source” name.

CDA: Nicolò?

NRO: Yes, so, like, if in this week we find a better name and the keyword changes, is Stage 3 problematic, or is it only problematic if the proposal remains as it is?

LCA: Yeah, I don’t really know what the answer is. I would expect -- I expect we want to ratify this at the next meeting if we have a new name.

SYG: A conditional Stage 3 is that once the condition is met -- sorry, let me rephrase the condition I propose. The condition I propose is that basically this will unconditionally reach Stage 3 after a week. 
There might be a change -- like, it’s not Stage 3 until at least a week has passed, and in that week, a new name could arise and it changes. But whatever it changes to, that becomes the Stage 3 proposal after the issue closes. Because we have a default. It’s not that we don’t have a name. We have a default name, which is “source”, which nobody is objecting to, including myself, even though I’m unhappy with it. I guess that’s not quite a condition -- the condition is just, like, an extra period of time.

DE: Yeah, so I’m next on the queue. I think what you’re saying makes sense -- maybe it was my initial intuition -- but honestly, if we come up with a different name, we should probably run it by the whole committee for review before settling on it, because people may have opinions, or we may find problems with it, from people who are not among the thread respondents. On the other hand, if we keep the same name, I think we can go to Stage 3 without returning to plenary. Overall, we’ve been discussing cases where conditionality might not make sense. There have been a couple times recently where things were proposed for a conditional stage advancement and we rejected that as a committee because the conditions were too complicated. I think this one passes the test for being simple and scoped enough that it makes sense.

CDA: Okay, we have a clarifying question from Jordan.

JHD: Yeah -- you mentioned that through this process, y’all have come up with the idea of phases. So what are the phases? Presumably it’s, like, a list of words of which “source” is one of them. Sorry if there’s a slide that I missed. So you’re calling source the fetch/compile phase?

LCA: Yes, exactly. Source is the fetch/compile phase. I can go through them again. This one is the asset import. This one is the source phase import. This one is the instance import. 
This is the thing that would be returned from a module expression or module declaration. The link phase is defer. And the eval phase has no name.

JHD: Okay. Thank you. It would be great to link this slide or quote it in the issue as well.

LCA: Sure.

CDA: Yeah, I agree. I’m next in the queue, and I think it’s the same as SFC’s item, which I moved up in the queue: agreeing with DE that it strikes me as odd that we would conditionally approve the name change when we don’t know the name. I think that’s something that needs to be brought back to committee. But I think it’s fair to treat the proposal as Stage 3 still, sort of conditionally. SFC, did you want to add anything to that?

SFC: Not really, except I think it’s a process point.

CDA: Okay.

SFC: It’s more like -- I think I like the word “source”, but if there were another word, I think it would be good to, you know, review that. Like, if someone comes up with the word “compiled” as the other proposal, I think we should discuss that term.

CDA: Okay. SYG is in the queue -- did you want to speak on “next Friday” sounding good?

SYG: No, that was just a quick response to them.

CDA: All right, so consensus for the conditional Stage 3? I think you had some words of support already, and then we’ve got CM, plus one for Stage 3 from Agoric. I am also supporting Stage 3. JWK supports Stage 3 with or without renaming. Anybody else want to chime in with support or objection before we move on? EAO, plus 1 for Stage 3. And NRO, plus 1 for Stage 3 as well. All right, we have Stage 3 for source phase imports pending the discussion on the naming.

LCA: Thank you.

### Summary

The significant follow-ups from last meeting:

- Dynamic import syntax has switched to `import.source()`
- `export source default from` will not be tackled as part of this proposal; it is to be investigated in the `export default from` proposal. 
- The static `import source` syntax will not be aligned with the dynamic `import.source` syntax.

There was discussion about the name of the phase keyword (currently source). SYG brought up that some folks were confused by the source keyword, expecting it to return unparsed, uncompiled source code.

To resolve this, we have opened an issue https://github.com/tc39/proposal-source-phase-imports/issues/53 to bikeshed alternative names. This issue is open until Friday, 23-07-2023. If no better keyword is found by then, we stick with `source`. Otherwise we’ll come back to the next meeting with the new name.

Stage 3 has been reached conditionally on the above name bikeshed concluding.

### Conclusion

Stage 3 has been reached conditionally on the above name bikeshed concluding.

## Set methods: deferring callability check / handling negative sizes

Presenter: Kevin Gibbons (KG)

- [proposal](https://github.com/tc39/proposal-set-methods)
- no slides presented
  - https://github.com/tc39/proposal-set-methods/issues/98
  - https://github.com/tc39/proposal-set-methods/issues/84

KG: Okay. So I have a couple of small PRs for set methods, which is at Stage 3, so these are just extremely minor tweaks. The first one is the callability check for the next method. You may recall that at the last meeting, there was a tweak to the iterator helpers proposal, also at Stage 3, to remove the eager callability check for the next method that you looked up on the result of calling Symbol.iterator. The idea was that you are just sort of assuming that the thing that it gives you is well formed, and you will get an error if you ever actually go to call the next method -- but not if you don’t actually end up consuming the iterator. The idea is we’re not trying to be super eager about validating everything up front. 
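The laziness described here can be illustrated with plain JS (a sketch, not the proposal’s spec text): an iterable whose iterator has a non-callable `next` produces a TypeError only once something actually tries to advance it.

```javascript
// Iterable whose iterator object has a non-callable `next`.
// Nothing errors when the iterator is merely obtained; the TypeError
// surfaces only when a consumer tries to call `next`.
const bogus = { [Symbol.iterator]() { return { next: 42 }; } };

const it = bogus[Symbol.iterator](); // fine: nothing has called `next` yet

let error;
try {
  [...bogus]; // spread drives the iterator, so this tries to call `next`
} catch (e) {
  error = e;
}
console.log(error instanceof TypeError); // true
```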
KG: So in iterator helpers we removed this callability check on `next` on the assumption that it would just fail when you actually tried to call it, and that was okay. There is a similar eager callability check on a next method in the set methods proposal, which I filed here. It’s not quite identical, because we are getting it out of the keys method rather than the Symbol.iterator method, but it is quite similar. So SYG points out that for consistency, we should probably remove this callability check. So the first thing I would like to ask for is consensus on removing this line. With nothing on the queue, I’m hearing no objections, and I see I have explicit support from SYG, so I will take that as consensus.

KG: The second thing is that there are other callability checks, in particular for the `has` and `keys` methods that you look up on a set. And MF suggested perhaps we should drop these callability checks. I am not inclined to do so, because I think these are a useful way to get errors that you kind of want to get if you pass something of the wrong type as an argument to `Set.prototype.union` or whatever. Without these checks, if you pass something which happens to have a numeric size property, it will pass, and then the algorithm will run on it, and maybe you will get an error and maybe you won’t. In particular, if you’re doing an intersection where the receiver is the empty set, you actually won’t get an error. I think that’s a bad experience, so I’m inclined to keep these callability checks, seeing as they are on string-keyed properties of an arbitrary object instead of something that’s specifically supposed to vend an iterator. So unless someone feels very strongly that we should drop these callability checks as well, I will keep them. But I did want to raise the issue. I see SYG is on the queue.

SYG: Yeah, let’s keep those. 
I think there’s less of a merits argument to remove the other ones, other than consistency with what we decided, and, yeah, there’s no reason to really remove these.

KG: Okay. JHD?

JHD: Yeah, same thing. I think we should be, in general, checking and eagerly throwing anything as early as possible wherever we can, except for the places where we must not, for consistency or where it doesn’t make sense. So I think we should keep these.

KG: Sounds good. Okay, I will keep those. So that’s my first of the two normative issues with set methods. I guess I’ll just go ahead and move on to the other one.

KG: So this one was raised actually a while ago, and I just completely failed to address it, so I’m bringing it back now. I’d like to highlight this line for you. As you may recall, when you pass an object as an argument to `Set.prototype.union` or whatever, we look up the size, has, and keys properties on it. As just discussed, for `has` and `keys` we ensure they are callable, and for `size` we ensure it is not NaN and then run ToIntegerOrInfinity on it. But we had -- not an assert, but a type description that said that the size was a non-negative integer. The algorithm didn’t actually enforce that, so the spec was incoherent on this point for the treatment of negative integers passed as the size property of an argument to these methods. So there’s the question of how we should handle negative integers. There are basically two options: we can clamp to zero, or we can throw a RangeError. I originally figured we should clamp to zero for consistency, but since then I have thought about whether we would be well served by coercing things in general -- as we will discuss later in the meeting, I think we are basically not. If you pass something with a negative size property, that’s just incoherent, and I think an error is more useful. 
So I am hoping that the committee is amenable to inserting a check after this line -- I guess after this line 6 here -- that says if the size is negative, throw a RangeError exception. And then this description of the type would become correct.

SYG: You said the clamping would be consistent. What is it consistent with?

KG: You know, for some reason, I thought it was consistent with something, but now I no longer think that. The array constructor throws if you pass it a negative value. `Array.prototype.at`, `slice`, et cetera, treat negative values as indexing from the end. Yeah, I don’t think there’s a strong consistency argument here for --

SYG: Yeah, I think indexes are categorically different than sizes. Yeah, cool.

RGN: Yeah, so rejecting negative numbers while maintaining ToIntegerOrInfinity is going to result in, I think, the weird case where a negative fraction is accepted.

KG: In this one there definitely is a consistency argument. All of the methods in the language that take integer arguments, with two exceptions (three with Temporal, but two prior to Temporal), round rather than throwing if you give them non-integral values. So if you do `Array.prototype.at(1.5)`, the argument is actually treated as 1. That is also something that I will discuss in the coercion talk later in the meeting. But you’re right, that’s kind of weird; rounding in general -- or truncating in general -- is extremely weird. I would be open to throwing if it is non-integral, having this be one of the very few places in the language that does that. I think we probably should do that for new methods going forward.

RGN: Okay, I think I also would support that, but mostly I was just looking for clarification here that the current intent is to keep the truncation of ToIntegerOrInfinity even with the introduction of rejecting negative values.

KG: Yes, that’s right. That was the intention, and I’m not asking for anything different right now. 
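The existing behaviors cited above can be checked directly in any engine today:

```javascript
// Integer-taking methods truncate non-integral arguments
// (ToIntegerOrInfinity), so 1.5 is treated as index 1:
console.log([10, 20, 30].at(1.5)); // 20 (same as .at(1))

// Negative indices mean "from the end" for at/slice:
console.log([10, 20, 30].at(-1)); // 30

// But the Array constructor throws a RangeError for a negative length:
let error;
try {
  new Array(-1);
} catch (e) {
  error = e;
}
console.log(error instanceof RangeError); // true
```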
Depending on how the conversation later in the meeting goes, I may come back and suggest tweaks to this and other Stage 3 proposals where we can get away with it.

RGN: Great. Thanks.

KG: Okay, so concretely, inserting a RangeError for negative values between steps 6 and 7 here is my proposal. Sounds like people are in favor; unless anyone is opposed, I will take that as consensus. Great. I will get those PRs landed. Thanks very much.

### Summary

The committee was in favor of removing the callability check for the `next` method for consistency with the rest of the language, but keeping the callability checks for the `has` and `keys` methods, since there is less of a consistency argument and it is a better user experience.

The committee was also in favor of throwing a RangeError for negative sizes, with some discussion of the treatment of fractional arguments to integer-taking methods to happen as part of the later presentation on coercion in general.

## Decimal: Open-ended discussion

Presenter: Jesse Alama (JMN)

- [proposal](https://github.com/tc39/proposal-decimal)
- [slides](https://docs.google.com/presentation/d/19MaO7On6knlweYZUei-d5VyqANzkKeR8QmC13KATvgs/)

JMN: Yeah, so thanks for coming, everyone. My name is JMN, I work at Igalia doing some of this work about decimals in partnership with Bloomberg. This talk will be an open-ended discussion, but it has elements of a Stage 1 update; Decimal is a Stage 1 proposal, but early.

JMN: I had initially intended to present this back in May, but then I got sick. The context is that Decimal has been at Stage 1 for a while. The last update was March 2023. We had a great discussion, but then the queue was full and there was no time. I wanted to continue the discussion in May, but then I got sick, so here we are again. Let’s get your math engines fired up. 
JMN: So the outstanding discussion items -- I tried to remember these as best I could from the queue from March, with a little bit of loaded prompting here. One of the questions is: should we do decimals at all? What should this feature look like? There are other topics, like why not rational numbers or other representations of rational numbers? And then there was also a question in March of whether the problem might be too specialized and whether this should just be a library.

JMN: Just to give you a brief sketch of my thinking before we get into some of these things. I’ll flesh out some of these points later if you haven’t heard about this stuff.

JMN: Just to lay the groundwork here: decimal is not some kind of frozen thing. There is a proposal, or a plan forward, about this stuff. The data model that I had in mind, and that I’d like to discuss here with you all today, is Decimal128 -- a standard for decimal floating point arithmetic using 128 bits. The idea is to use normalized values. So in this world, 1.2 and 1.20 are exactly the same thing, not distinct values. In official IEEE 754, those are actually distinct, though they compare equal, but here they would be equal. The proposal is to add some new syntax: you see that little `m` suffix; the idea would be that that’s a new Decimal128 value. There would be a new decimal class with some static methods for calculations. All the confusing parts of floating point just die; they go away. So plus and minus infinity don’t exist in this understanding, NaN doesn’t exist, and there’s no negative zero -- well, that’s kind of a consequence of our commitment to normalization. There’s no mixing with the methods in the `Math` class; we just throw whenever a `Math` method is given Decimal128 arguments. There’s no operator overloading. Only basic arithmetic is supported, and there’s just one rounding mode. 
So the idea here is to try to keep things simple and to try to keep things fast -- I think all of us can agree that we want to keep our JS engines fast. And the idea here is to also satisfy the needs of developers who want decimal numbers, so there are a lot of use cases being addressed here. All these things are, of course, up for discussion; that’s part of the point of this discussion here today.

JMN: Just some sample code about how this would go. I assume that some of you have been out to restaurants here. You look at your receipt -- it doesn’t have to be here, of course. When you buy items, they typically have some kind of tax on them, so you have a list of items that you buy, there’s a count -- say you got two of this and five of that or whatever -- and there’s some kind of tax applied. If you want to do some calculation here to calculate a bill, you can see that there are decimals scattered throughout here. You see there’s that `1m` there when we add some kind of tax, which you find down at the bottom. That’s what it would look like. That’s just a very simple example. You could talk about all sorts of examples.

JMN: Another great way that decimals would help us would be with some kind of relational or non-relational database. Imagine plugging into Postgres and specifying decimal columns -- something that SQL has had for ages, and which are, of course, real, exact decimals -- and getting them down correctly. You would do your query and you’d get some kind of JS decimal values back.

JMN: There’s a playground that has been developed for some of this stuff. It’s somewhat behind the most up-to-date thinking that I am talking about here today, but nonetheless, I think it’s okay to mention that this thing exists. You can run this and have some fun with it. There may be some bugs there; please report them on GitHub. I’ll take a look.

JMN: There’s a new npm package out there. 
I had some fun trying to do Decimal128 in userland JavaScript; Decimal128 is available there. That actually reflects the most up-to-date thinking about Decimal128 and how it might look. That’s just a module, not a polyfill, so the syntax that I talked about a couple slides ago is not present there. That’s for playing around with what Decimal128 would look like in JS.

JMN: So now let’s get to some of the motivating factors here. The primary domains are business and finance. I think that should be a bit of a no-brainer. Think about all of the things that you deal with on a day-to-day basis where you yourself have to read and write numbers: a lot of that has to do with money, right? Then data exchange -- this is about JS engines sitting between two systems, say a relational database and the browser, or between two other systems. And then another class of use cases comes from science, engineering, and mathematics. The difficulty is that with binary floats, the kind of floats that we know and love (or not) in JavaScript, things can be incorrect in ways that really matter in these contexts.

JMN: So in business and finance, you can get sued if you get a calculation wrong. For instance, I live in Germany, and German banks can sometimes get sued if they get something wrong, even by one cent, in, say, some kind of official bank printout or bank statement. Data exchange -- of course we all want to get that correct. In science, engineering, and math, the same thing, right? The status quo is that decimal numbers need to be handled with strings. If you want to do exact computations with decimal numbers, you have to treat them as strings, and you have to do some kind of digit-by-digit arithmetic on these things.

JMN: Or you either don’t know or don’t care, and you work with binary floats, and data loss just happens, or things are just wrong and you don’t know why. There are various userland libraries out there with varying APIs. 
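The data loss mentioned here is easy to reproduce; both of the following are binary-float (IEEE 754 binary64) artifacts of the kind Decimal aims to avoid:

```javascript
// The classic decimal-fraction artifact: 0.1 and 0.2 have no exact
// binary representation, so their sum is not exactly 0.3.
console.log(0.1 + 0.2); // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false

// Binary64 also carries only ~15–17 significant decimal digits; past
// Number.MAX_SAFE_INTEGER, adjacent integers collapse into one value:
console.log(9007199254740993 === 9007199254740992); // true
```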
But you know what, this is just me pounding the table. Let’s skip that and try to look at some data.

JMN: Igalia has been running a survey to solicit JS developers’ views about decimal numbers and floating point numbers. You can fill it out too: the link is in the slides and the survey is not closed. We have received 73 responses so far, and I want to give you an appreciation of some of these answers. Here is a question that we asked: are there any places in your code where you’re afraid of a rounding error? I talked about normalization earlier, so let’s take an unnormalized version here. You see lots of yeses. This is maybe not so surprising, because of course the purpose of this survey is to get the opinions of people who care about these kinds of things, so of course they’re going to say yes. But to be more serious here: “everywhere we deal with money, which is all over the place, yes, there are many.” “Every time we show a price or discount to the user, which is constantly.” I saw a “no” in the survey data, which is surprising, but also quite uncommon, as you might have guessed.

JMN: What kinds of applications in your organization are dealing with these numbers? Here we have people doing point of sale, purchasing, calculating shipping weights, e-commerce -- you can see that there’s a kind of domain developing here. Back-office processing and reporting. One person said they work a lot with currencies on products and invoices, et cetera, and sometimes notice rounding errors. “I work in a digital transformation lab for investment services, so pretty much all of our applications are working with these kinds of numbers.” Data visualization. Someone said that they have some kind of numerical input with a step-up, step-down interface. Numerical projections -- actuarial projections and calculations. Exchange, precision, and structural calculation -- it’s a bit of a mixed thing, but okay. 
I like this one: “we are building a bank”. I thought that was an incredible use case.

JMN: So, how do you process decimals? Some interesting stuff here. “We have to show up to two decimals in the applications, and we might calculate up to more on the server, present a rounded value, and then in the front end have to add and round again.” That sounds uncomfortable. “In our systems, we calculate pricing, some kind of metered usage for providers of these cloud services; we have to input the cost in micro-cents”, so that’s very fine-grained. “When we display to the user, we display it in a form they’re familiar with; we calculate in JS and display it.” “We transfer the amount on entry and display”, so it sounds like some kind of banking or money thing. VAT -- that’s European value-added tax -- computation. Sums of selected transactions, kind of like the sample code I gave you earlier. Here is another one: “to avoid round trips to the server, we recalculate many money values on the client in JS, to preview what the value will be on saving them, as well as immediately validating some values.” “Users type in item weights in ounces, fractional line item quantities, item costs and prices.” So you can see there’s a nice mix of front end and back end applications here.

JMN: So what kinds of calculations do you do? “Getting it wrong means fines and refunds to clients, so that’s something we’re incredibly cautious about. It’s nice to be able to show something fast in the browser.” Here’s one: “it’s almost always summation, and it’s on the client and the server.” “We use a decimal library on the client and it’s error-prone.” That sounds bad. Here is someone who says addition and subtraction, both client and server side. So far pretty good, but “we have to provide workarounds when we encounter the usual JS decimal representation pitfalls.” Arithmetic calculations are very common. This is done on both the client and the server. 
This typically comes up in calculating an arbitrary percentage of a money value. “We work in manufacturing, and almost all of those come up in some fashion or another -- calculating distances, money -- we’ve got to do it all, and we’ve got to do it using decimals.”

JMN: So that’s actually just a tiny snippet of this data. It’s really fantastic; there are a ton of juicy quotes that I omitted, but I hope you start to get the point just from that sample. You can see that many developers need decimals for accuracy, and most of the cases involve money. They also need to be fast -- generally to support some kind of live user feedback. And we can see that the need goes beyond merely displaying decimal numbers, so it’s not enough to treat these as strings and present the string. Moreover, the data suggests that pretty much everyone would be satisfied with basic arithmetic. There’s a minority that wants exponential functions, logarithms, or the like, but that’s a small minority.

JMN: So the use cases are finance and business, data exchange, and science and engineering. Let’s dig into those a little bit more. Money: handling money exactly is the clearest motivation. Binary floats just won’t do in many cases. What are some calculations that come up in finance? Things like adding together two items -- that’s addition, that’s simple. Removing an item from a total is another one where you need exactness -- that’s subtraction. Multiplying a cost by a tax rate -- we saw that before -- multiplication. How about getting an average? That’s addition and division. Dividing an amount into as-equal-as-possible parts -- that’s a kind of remainder operation. Currency conversion -- that’s also multiplication and division.

JMN: Data exchange. 
This is one where JS engines, whether front end or back end -- say a browser consuming JSON input, or some kind of Node server -- are surrounded by systems that natively support decimals, like relational databases, or other systems written in languages that natively support decimals. There’s a need in that use case for consuming these, of course, possibly doing some light computations or (inaudible), and then passing them on or just displaying the results.

JMN: There’s a whole class of use cases here in science, engineering, and mathematics: working with dimensions, things like, you know, feet, cubic meters, and so on, and converting between them. Here you might need exponentials or logarithms, possibly trigonometric functions -- maybe not, but this comes up a little, infrequently, as I mentioned earlier.

JMN: There are a number of interesting questions about how to represent these things. Even if you buy the idea that decimal numbers are something that you may want, there are a number of different concrete representations that could be chosen to represent them. One of the discussion points that we had back in March is: what about rational numbers? The thinking there is, I think, quite understandable. In the mathematical sense, rationals are of course strictly more expressive than decimals, in the sense that every decimal, as we normally understand it, is a rational number, obviously, right? It’s some integer over a power of 10. And so all use cases for decimals -- again, following this kind of mathematical thinking -- could be handled by rationals. Even more, there are some use cases involving, say, images and video where one has to work with rational numbers; think of the aspect ratios in television, for instance. And then here is one: if we work with exact decimal numbers, then we have this classical -- it’s almost a meme among number nerds -- 0.1 plus 0.2 is equal to 0.3. 
But if you do things like 1 divided by 3 and multiply that by 3, that’s not going to be 1 in the Decimal128 universe. Why would that be okay? Why don’t you use rational numbers, right?

JMN: There are some good reasons not to use rationals. If I could mention this very last point first -- I think it’s actually the most important: they are just a separate data type. They could be added to JavaScript independently of decimal numbers, and some languages indeed have both. But then, to add even more to this argument, one issue is that normalization is an issue. Normalization means finding the greatest common divisor and repeatedly dividing by it until you have some kind of normal-form rational number. There might be some good normalization strategies out there, but I feel like we’re starting to go down a rabbit hole when we explore that kind of issue. Rendering a rational number like 1 over 3 as a decimal string involves -- or may involve -- quite a lot of computation. This is the thing that we all learned in elementary school, long division: to generate the digits of some kind of quotient, we really have to do it digit by digit. Probably the worst thing here, in my view, is that as calculations get more complex, the numerators and denominators get bigger and bigger very, very quickly -- exponentially quickly. To convince yourself of that, just write down what it is to compute A over B plus C over D: uh-oh, you’re starting to multiply numerators and denominators, and when you do that again, now you’re multiplying the result of the multiplication. If you have a computation that involves three steps, these things are getting huge. And if you have some kind of normalization where you reduce everything to a minimal form, then you’re generating the big numbers only to shrink them down again. So it’s not a very lightweight thing. 
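The blow-up described here is easy to see with a minimal, hypothetical (and deliberately unnormalized) BigInt fraction sketch: each addition multiplies the denominators, so sizes grow with every step.

```javascript
// Minimal unnormalized fraction represented as [numerator, denominator].
// a/b + c/d = (a*d + c*b) / (b*d): the denominators multiply on every add.
function addFractions([a, b], [c, d]) {
  return [a * d + c * b, b * d];
}

let sum = [1n, 3n];
for (const f of [[1n, 7n], [2n, 11n], [5n, 13n], [3n, 17n]]) {
  sum = addFractions(sum, f);
}
// After just four additions the denominator is 3*7*11*13*17 = 51051;
// with larger denominators this growth compounds on every step unless
// a GCD normalization pass shrinks the numbers back down.
console.log(sum[1]); // 51051n
```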
Especially when in front of us there’s a very simple alternative in some kind of decimal number, right? And as I mentioned, rationals are just a separate data type. So although I, as a math guy, am very attracted to the idea that rational numbers are, of course, some kind of superset of decimals, you pay a price for that expressiveness. Next slide, please. So, again, just to repeat this from earlier, we’re currently leaning toward, again, it’s not a final decision, but leaning toward Decimal128. Just a bit more detail about that. Decimal128 is a standard, I didn’t make it up. It has been part of IEEE 754 since 2008. It reflects a lot of research on various representations of decimal numbers. Here, as the name suggests, values take up 128 bits. There might be some optimizations possible in some cases, but just naively.

JMN: You can think of 128 bits as my default assumption there. What can you represent there? You can represent up to 34 significant digits, and the exponent, the power of 10 here, the way you shift the decimal point left or right, can vary by about 6,000. And just to convince yourself of whether 34 is needed or whether that’s enough, just ask yourself how many times you have used a number that uses even something close to that many decimal places. I mean, we can talk about (inaudible) bank account, but even that uses, what, nine decimal places, nine significant digits, and we’re talking about 34 here. We can represent all sorts of things using 34 significant digits. I challenge anyone to find me a real use case where they have 20 significant digits. We’re talking about human-readable and human-writable numbers here, so that many digits is probably going to be hard to find. The nice thing about the Decimal128 approach is that it’s fast. Memory requirements are easy to reason about even in the face of complex calculations.
That’s maybe one of the downsides of more liberal approaches, like some kind of BigDecimal approach where you have essentially unlimited precision. The difficulty there is that, analogous to rational numbers, the number of digits you need grows quickly. Decimal128 is relatively straightforward to implement, especially considering that we have only a limited repertoire of functions in mind here: addition, subtraction, and so on. Libraries exist. There’s one by Bloomberg, another one by IBM. And depending on your setting, your C compiler might support this out of the box. There’s work on trying to add this to the C and C++ standards. Decimal128 is not a silver bullet. There will be some weirdness. So, yes, in this setting, 0.1 plus 0.2 really is exactly 0.3, but unfortunately, 1 divided by 3 times 3 isn’t 1. It’s just 34 nines. And, well, this is one of the things where you have to say this is some progress. This gives us what we probably want in many cases, but it’s not a silver bullet. If you really need that to be exactly 1, then you would need something like rational numbers. By the way, other decimal representations also suffer from this problem. Other things like BigDecimal, or some kind of fixed number of digits, or fixed number of decimal-point digits, are also going to suffer from this problem. It’s somehow inherent in working with decimal numbers. Next slide, please. There is another issue that has come up since the March discussion, which I would love to get some input about, which is rounding. There are a few different ways to round numbers out there. What’s interesting is that some languages pick one. It’s usually this kind of bankers’ rounding, rounding ties to the nearest even number: C#, WebAssembly. Actually, the methods in the JS Number class also do this. And then that’s it, there are no options to specify a rounding mode. There’s just this one thing available to you. Whereas other languages support multiple rounding modes.
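Both behaviors just described can be sketched in today’s JavaScript with a toy 34-digit model built on BigInt. This is an illustration only: BigInt division truncates rather than rounding ties to even as IEEE 754 Decimal128 does, but the “34 nines” effect is the same, and the `roundHalfEven` helper is an invented name showing one alternative tie-breaking mode.

```javascript
// Toy fixed-precision decimal: a value is digits / 10^34, held as a BigInt.
// Not real IEEE 754 Decimal128 semantics, just the 34-significant-digit idea.
const SCALE = 10n ** 34n;

const oneThird = SCALE / 3n;   // 0.333...3 (34 threes, truncated)
const product = oneThird * 3n; // 0.999...9 (34 nines)
console.log(product === SCALE); // false: (1/3) * 3 is not 1

// By contrast, 0.1 + 0.2 is exact in any decimal representation:
const tenth = SCALE / 10n;     // exactly 0.1
const twoTenths = SCALE / 5n;  // exactly 0.2
console.log(tenth + twoTenths === (3n * SCALE) / 10n); // true: 0.1 + 0.2 === 0.3

// Tie-breaking modes differ: Math.round sends positive ties up,
// while "bankers' rounding" sends ties to the nearest even integer.
function roundHalfEven(x) {
  const f = Math.floor(x);
  if (x - f !== 0.5) return Math.round(x); // not a tie (positive values)
  return f % 2 === 0 ? f : f + 1;
}
console.log(Math.round(2.5), Math.round(3.5));       // 3 4
console.log(roundHalfEven(2.5), roundHalfEven(3.5)); // 2 4
```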
You can say things like round ties to even, or round up, or round down, these kinds of things. And this is just an open question here: what kind of rounding options should we support? And if we support multiple options, should there be a default, and if so, what should it be? Next slide. So that’s about it. That was a kind of Stage 1 update about what I’ve been up to since March: a recapitulation of some of the main points about the decimal project, the motivations for it, the use cases. But there are some interesting questions, and I guess we do have some time. That’s why we’re here today. I hope there are some of you out in the audience who are interested in this stuff. The question is: do we really want to do some kind of decimal built-in type? And are you okay with this sketch that I made here of having a new Decimal class with some static methods for calculation?

WH: I’m confused about whether this is using IEEE decimal128 or not. You say you’re using decimal128, but at the same time, the results are different from what decimal128 would produce because you took out minus zero, infinities and NaN. So this seems incompatible with the way that math has been standardized. I don’t understand why we’d want to diverge and sign up for a huge amount of work to develop our own math standard library rather than using existing ones.
So this is, I guess, a chance to clean up some of the things that were included in the first place, just to say that they probably don’t match the needs of many of the use cases that we deal with.

WH: Yeah, this would be an enormous mistake. This is not cleaning up. This is introducing our own alternate standard for decimal which is different from what IEEE specified. Furthermore, if we did this, there’d be no way back, because basic arithmetic operations would produce different results from what IEEE decimal does. So the result would be slower, incompatible, and would prevent the use of other IEEE functions such as power or exponentials.

JMN: Yeah, that’s right. Yeah, I mean, we are, I guess, open to the possibility of including those things, things like plus and minus infinity and not-a-number. That could be done. That could be added to what we’re doing here. If I think about the case of data exchange, I wonder, though, if passing along not-a-number helps, or if people really care. If I’m a consumer of data coming from a JS engine, and I get, you know, not-a-number from some kind of calculation, you know, what do I do?

WH: You most likely got it because you divided zero by zero.

JMN: Right.

WH: Or you took the square root of -1. We should not be inventing our own math library. We should be using one of the existing ones. This is my first point.

WH: My second point is that you seem to have coercions in there. I took a look at some of the code on the slides and the coercions are really weird, with coercions going between Decimals and Numbers, but not all Numbers. The example code uses Numbers for item counts and wouldn’t work without those coercions. I’m surprised to see no mention of that in the presentation.

JMN: You mean that the code prefers not to do coercions?

WH: No, the code relies on coercions.

DE: I can jump in as a co-champion here.
I think this was simply an error on the slide, forgetting to use the `m` suffix in some cases. I think the code on the slide should throw an error, just like it would for BigInts, so I agree with WH’s point.

DE: So just to address the previous point, this is Decimal128. I mean, IEEE arithmetic famously has all these different modes, and some of those modes are about throwing exceptions when you would reach infinities or NaN. Maybe negative zero is something to consider. But I completely agree that this has caused problems in the past that I don’t want to introduce here. I disagree with the notion of adding operations like square root or log to Decimal. I think we should be explicitly saying that these are, like, anti-goals, and I’m quite confused about why they should be considered goals. So, yeah, you have also asked for normalization.

WH: I have not asked for normalization. Who asked for normalization?

DE: I believe you did, and everybody else agreed.

WH: I did not.

DE: So you want decimals to have trailing zeros?

WH: I do not want significant trailing zeroes. But that doesn’t require normalization of decimal values; IEEE cohort members are indistinguishable if we don’t include any operations that can distinguish them, which this proposal doesn’t.

JMN: Would it help perhaps to choose a different name, then? Something like “inspired by Decimal128”? I mean, we don’t claim to be literally Decimal128.

DE: Sorry, I don’t think that would address the point. The point is --

WH: No, the problem is that we’re inventing a new standard when there’s a perfectly adequate existing one. We should not be in the business of inventing math standards.

DE: The intention is to not invent a new standard. I think we --

WH: Okay. And my third point is that this has object identity.
Which means that for every decimal calculation, you can ask the question of whether the thing that your function produces is a new Decimal object or a reused existing one. Like, if you take the max of two decimal values, do you get a new Decimal object or an existing one? This will be a giant foot-gun. I guess people may rely on decimal values being `===` in some contexts.

DE: I can answer that. All operations would produce new Decimal objects, max in particular, though I don’t think this proposal actually includes a max operation. The claim that it’s a foot-gun, I’m somewhat sympathetic to that. You know, the previous version of this proposal did include operators, both for arithmetic operations as well as for comparison operations. This version does not, and that’s largely based on feedback from potential implementers, who have told us that they don’t want to do operator overloading again. Some of them don’t want to repeat what happened with BigInt. So this proposal is trying to be conservative and minimal in omitting those. But I think this is something that we’re open to reconsider.

WH: And yet you have a new syntax for literals, which is --

DE: For literals, it’s just a lot lighter weight to add new syntax, and there is the extensible numeric literals proposal, which we could pick up again. This doesn’t cause the operator overloading issues --

WH: It makes it very attractive to ask if a decimal is `===` to a literal.

DE: Users may try to compare decimals to literals with `===`, as that works in other languages.

DE: We could also compare with what was raised with IEEE. Ultimately, I’m not disagreeing with you. I see that there are significant ergonomic benefits to supporting operator overloading.

WH: Anyway, I have major reservations about some of these decisions here. Let’s go on down the queue.

MLS: This proposal is a non-starter for me if this isn’t using a standard format (e.g. IEEE 754 Decimal128).

SFC: Yeah, I’ve raised at multiple past meetings that trailing zeros are very important from an internationalization perspective, so I just wanted to raise that again.

DE: Okay. I just want to mention, on the context of decimals: in a presentation that API and I gave a few years ago, we outlined how almost all other programming languages and database systems include some notion of decimal, and I think part of what we can notice from that experience is that programmers don’t actually complain about the details of decimal semantics. IEEE decimal is not supported in many systems, and I don’t think we should be inventing our own decimal semantics. My takeaway is that we have a lot of flexibility as to the choices that we can make, because there’s just a lack of anybody complaining about the inconsistency here, somehow. And strong support in the ecosystem demonstrates that it’s a widely shared need, when this isn’t the easiest thing to add to a system. Oh, my next topic, the rationale against rationals. The other thing about this is that decimals relate, as was mentioned, to formatting. It only makes sense to do the operation of printing out in decimal form if the range of things is restricted to decimals. But further, decimals have this operation of rounding, which comes up all the time in financial applications. It’s just an inherent operation that you do when formatting and calculating, that comes up frequently, and it just doesn’t quite make logical sense on rationals. So that’s why I think they’re distinct data types, even if we leave aside the efficiency of time and space, where rationals have more overhead.

EAO: So given that this is not considering operator overloading, I’m really struggling to see why this needs to get baked into the language rather than just being a really good library for the users who need this sort of functionality. The motivation that’s been presented here previously, at least to me, doesn’t really tell the story of why it needs to be in the language rather than just being a library.

JMN: Yeah, you’re right. I think overall this certainly could be, and of course it does exist as, a library, or even multiple libraries out there that do these kinds of things. Certainly speed is a big factor there. In the survey data, we saw some people saying that they need to do some computations in the browser, so this needs to be very fast. For data exchange, you might imagine that this needs to be quite fast, especially if we’re talking about, you know, financial applications.

DE: I disagree that performance is a main concern here. I think the bigger concern is, as was mentioned previously, interchange. We want to have a standard way to pass around decimal values, both between different components within JavaScript, as well as between programs that are on different servers. This is why Bloomberg is working on this proposal: because we use decimal all over the place to represent money quantities. It risks bugs if these end up being converted to JavaScript Numbers; it’s important to convert things to strings instead, so, you know, that’s the best practice that usually happens, but it’s just pretty difficult to hold everything to that. And it leads to a risky situation, risking bugs. So this is why it makes sense to have it built into the platform: because it can be used for interchange, both across libraries and across systems, and because its existence can reduce bugs further.
Something like Temporal, you could argue that Temporal should also just be a library, but we’ve decided, I think for these same kinds of reasons, that it makes sense for it to be built in, because it’s widely used. Of course we already had a built-in date data type, but if we didn’t, we should have been adding one anyway. It makes sense for us as a committee to add things to solve shared problems, and I think the survey results show that this is such a shared problem.

MLS: Talking about data exchange, we’re probably not going to extend JSON to support this. So we have to come up with some other customized way to exchange data?

DE: Well, the JSON source access proposal was built partly to solve this. It solves it just as well, or as poorly, for BigInt as it was designed to for decimal. But we can unfortunately not change JSON. JSON has a lot of things it’s lacking. Many RPC systems use either abstractions over JSON, which could be extended to support this, or buffer protocols that can be extended. In Bloomberg, we have an RPC system which does have a decimal type in its schema, and I think our system is not exceptional. So decimal is not irrelevant even if JSON is immutable.

DLM: I’m wondering why not use strings for interchange. I see your argument about how people cannot use them reliably, but I don’t see how adding a new decimal type guarantees that people will use it reliably. And to the point MLS just made, if we use strings for interchange, they can then also be used in JSON.

DE: That’s in fact what people should do for JSON.

JMN: I think there was also some discussion about doing calculations. I think what you’re describing is the simplest case, where we take some kind of input and just pass it on as a string. If we take some input and then have to do some calculations with it, then potentially we might be shooting ourselves in the foot if we get something wrong there.
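The hazard here is concrete: under JavaScript’s arithmetic operators, a decimal string silently coerces to a binary double, and the rounding error only surfaces for some values. A minimal illustration:

```javascript
// A decimal amount arrives as a string, e.g. parsed from JSON.
const balance = "0.3";

// The subtraction operator silently coerces the string to a binary double:
const afterFee = balance - 0.1;
console.log(afterFee);         // 0.19999999999999998, not 0.2
console.log(afterFee === 0.2); // false
```

The safe-but-fragile discipline is to keep such values as strings (or wrapped in a decimal-aware library type) until something that understands decimal arithmetic handles them, which is the class of bugs a built-in Decimal would rule out.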

DE: Yeah, the thing is, when you have a string in JavaScript that represents a numeric quantity, it’s extremely tempting to just convert it to a Number. You can pass it to all these different operators that take Numbers, not plus, but minus, say, and put strings in there instead, and it will, you know, kind of work, but it will give you rounding errors. That’s why it’s extremely dangerous to not have a separate data type. So, you know, you may use strings when putting it in JSON, but in memory, ideally you should wrap it in something else so you avoid this class of bugs.

SYG: Yeah, I agree with MLS’s point about JSON. I think in practice, adding new data types in the name of interchange, in the hope that we will migrate to using them, just really has not borne out, precisely because of JSON; such a large number of applications are JSON transformers. And I think what we see in practice over and over again is that developers will do things that make it easier and simpler to work with JSON, which means, for example, keep using objects instead of Maps and Sets, keep using Numbers instead of BigInts. I think that is just what happens in practice, and JSON being eternal is what it is. I think the interchange hope is really quite misplaced.

DE: I mean, you use things in RPC systems at Google that are not based on JSON; you have open-source protocol buffers. Although some applications won’t update, others can. Given that we’ve been adding other things to the standard library that are not represented in JSON, I’m really confused by this argument.

SYG: What are you confused by?

DE: Like, why did we add Temporal when Temporal doesn’t have a representation in JSON?

SYG: Because Temporal is independently useful without being used for interchange.

DE: So we went through examples from the survey data where people expressed that there was utility from this. Like, locally.

SYG: Perhaps they hoped that it would be used for interchange. I’ll take the other side of the bet that it won’t be.

CHU: Well, coming back to the original question of why it needs to be added to the language: coming from a user perspective, I can only (inaudible) for the service. And it’s not only financial applications. We need it for such simple things, like setting up a slider, and I agree it’s not about performance, but it’s about the balances, and it’s about the fact that right now we have to resort to third-party libraries, and we have to, like, evaluate them. And if this is something that is built into the language, there’s a great benefit. It takes away this burden of selecting something and, yeah, as we discussed, there’s input.

DE: So do you find use cases in the front end, or only in the back end, or both?

CHU: We had a very simple use case in the front end, so I think it’s not only something that Bloomberg is interested in; as has been mentioned, this is needed everywhere. So we had a use case where we needed a scale for it: we have a simple UI component, some slider, and we can put a scale on it. And then you just pass it and you need to, like -- if you -- basically (inaudible). So that was our use case, and I can totally relate to the things mentioned.

DLM: I guess I’m next. I just wanted to say that I was very surprised in the survey results that so many people were doing financial calculations on the client side. It just doesn’t seem like that great an idea to me, and I’m not sure it’s something that we really should be encouraging.

DE: That’s a legitimate point. I’ve heard different things from different people on this. Some like to send all their calculations to the back end, for more kind of security, because even a preview could be interpreted as something significant. But we’ve also heard that a lot happens on the front end.
I think that the front end case that you just mentioned was pretty legitimate.

CHU: It’s not about, like, doing calculations on a project or activity. It’s very common that you do something, you know, like to send the (inaudible). And don’t get decimal wrong: it’s not those huge, big financial calculations. As I said, it’s about a slider, like selecting some kind of size of machinery. It’s not about financial calculations.

DE: RBN, you made an interesting point in the chat. Do you want to speak up here?

RBN: In a way, I kind of echoed Bradford’s statement in the chat as well, about the statement before that you shouldn’t do financial calculations on the front end in the client but in the back end: well, the back end could be Node.js and the front end could be Electron.

JMN: That’s something we’ve also consistently seen in the survey data. We’ve seen multiple cases of front end and back end use, so I think it’s hard to draw a line here. The use cases go both ways.

CDA: We have Dan Minor next in the queue, but we have only a couple minutes left for this item. We’re not going to be able to get through all the items in the queue. SYG?

SYG: Which item was this? Oh, yes, the main one. Okay, I’m leaning towards not doing this in the language, but I want to emphasize the word “leaning”. I find Waldemar’s correctness foot-gun argument fairly convincing, in that we’re not doing operator overloading, and V8 remains of the position that we would not like to have decimal as a primitive. And given that constraint, the literal syntax poses a significant correctness foot-gun, for the reasons that WH has brought up. We would probably not block a non-primitive built-in type without the literal syntax.
The utility of that is diminished, but if the developers who say there is demand for this and would use it can live with that, then I think that is a reasonable path forward. But if instead the preferred path forward is to go back to a primitive, or to keep this correctness foot-gun in, then my feeling is leaning towards not doing this in the language and continuing to use user libraries.

DE: Could you elaborate on your position on primitives? And could we also hear from other implementers how they feel about adding a primitive type?

CDA: So just a real quick point of order: we are out of time. We do have time for a continuation tomorrow in the morning, ideally, but I think we might have some time in the afternoon as well. But we are out of time for this item. We need to move on.

JMN: Could someone capture the queue?

CDA: I’m doing that right now.

JMN: Thank you.

SYG: If we want to do this tomorrow in the morning, by the way, I won’t be here in the morning.

CDA: We can discuss the continuation asynchronously.

### Summary

Key Points:

- A detailed sketch of a path forward for Decimal was offered.
- The speaker solicited feedback on the data model (Decimal128), rounding modes, and generally, whether we want to do Decimal at all, and if so, in what form.

### Conclusion

- The issue of normalization vs. non-normalized Decimal128 was raised. It appears that we do not have consensus on this issue.
- The issue of fidelity to official IEEE 754 Decimal128 was raised.
- We discussed whether the modest proposal made here is good enough for JS developers.

## Meta-review of Stage 3 proposals

Presenter: Peter Klecha (PKA)

- [slides](https://docs.google.com/presentation/d/17LEF7f7vU53cOawMphJwOnG59R_Au5bnJhIdLYn30cM)

PKA: Yeah, so hello, everybody.
What DE and I would like to do today is just briefly refresh the committee on the status of our Stage 3 proposals en masse, with a particular eye toward identifying any possible bottlenecks or stalls in various proposals, or even possibly just identifying new ways forward. Concretely, we would like to eventually do this every so often, maybe every three or six meetings, where we identify next steps for proposals, possibly including encouraging renewed effort in writing tests, renewed effort in implementation work, identifying issues to fix in the proposal, maybe even adding new champions. I think we saw that to great effect with the grouping proposal. And maybe even, in some cases, revisiting proposal stage. But to be very clear, you know, we’re not here to present anything in particular about these proposals. The point here is to prompt the committee to discuss things and to prompt champions to bring any issues that may exist to light.

PKA: So we’ve heard from most of our Stage 3 proposals pretty recently. Six of them were presented at this meeting. I should say six of the proposals that started the meeting at Stage 3 were presented, and another five have been presented in the past calendar year, so there’s really no need to say anything more about these. Although I will just note that I marked some with an asterisk that, at least according to the GitHub readme, are in need of tests. And I want to highlight this as an opportunity for anybody who may be new to the committee and looking for an opportunity to participate: this is a good chance to help get something across the finish line. So anybody who is in that situation, reach out to the champions of these proposals.

### JSON Modules

PKA: With that, I’ll move on to the five proposals that we haven’t heard from in this past calendar year. The first is JSON modules, among whose champions is DE. This proposal has tests and has seen implementation work.
It was last presented in January of 2021, so I now want to ask DE if he’d like to share anything briefly about this proposal.

DE: Yeah. Even though we haven’t brought it to plenary recently, I think JSON modules are moving along well. The big thing blocking shipping in browsers beyond Chromium was the ongoing discussion about import assertions, or import attributes now. Now that this is settled, I would encourage everyone to implement and ship this proposal. We have some tests; I’m not sure, maybe some of them need to be updated for the import attributes change, both in test262 as well as web-platform-tests. So please let me know if you have any concerns, but other than that, I think it’s doing well at Stage 3.

PKA: Are there any thoughts or questions or comments for Dan from the committee, not to relitigate any details of the proposal, but about the proposal’s status?

DLM: I was just going to say this is still on our roadmap for SpiderMonkey. It was something that a contributor had been working on, and we’re trying to pick it up ourselves. We can add more information on the status later.

NRO: Yeah, for JSON modules, there is still one unsolved point, which is about how CSP and the fetch headers work for fetching the modules. They were the whole reason why we were discussing assertions, and this is being sorted out now in HTML, but it’s not finished yet.

DE: Right. Thank you so much for that correction. This will be one of the first times that the web platform is fetching JSON directly, so I don’t think there’s an established JSON destination type for fetch yet. Yeah.

### Legacy Regex Features

PKA: If there’s nothing else, we’ll move on to the next proposal: legacy regex features in JavaScript, whose champions are MM and CPE. This proposal does have tests, but I’m not sure about the implementation status. It hasn’t been presented for quite some time.
I’m not sure if either MM or CPE are with us. If they are, I’d like to prompt them to --

CM: They are not.

PKA: Okay. So, DE, I know you’ve been most recently in touch with Mark. Are you able to summarize his recent thinking?

DE: I was in touch with MM? MM was in favor of us discussing this in committee.

DE: What Mark told us was that at this point, SES itself has addressed the kind of security issues that led to them specifically proposing some of the details about the way that these legacy regex features intersect with SES. However, regardless of whether we stick with those details or not, I think this is a really worthwhile proposal, because it describes the semantics of a corner of JavaScript that engines have to ship to be web compatible, and that the different JavaScript engines have slightly different implementations of. I think this is excellent work from CPE, and I hope it can be picked up and that we can somehow arrive at fully interoperable implementations without these differences.

MLS: I’m not sure that this is as important for web compatibility, since all the engines have slightly different semantics and implement some or other of the global values. And there’s been no movement for six years. There doesn’t seem to be a big clamor for this. All browser implementations are probably 80% compatible, but the Venn diagram of all implementations is probably closer to 50 or 60%.

DE: While that’s true, and I think it’s more like 98 to 99% than 50 or 60%, this committee has previously worked on standardizing lots of things down to the last detail, and so have other standards bodies in the web platform. And I think that’s inherently valuable, even if these cases are not currently causing everyone huge issues.

MLS: I believe that many of these could hurt the performance of various engines, because of what they actually do.

DE: In that case, it would be good to understand the performance requirements more, and how the current proposal hurts performance, so that we can iterate on it, at least towards matching behavior between engines. I think mismatches between engines in cases like this tend to be kind of random, and not actually based on differing performance requirements of different engines.

MLS: Well, I’m not too interested in this proposal. Let’s put it that way.

DE: That’s fair.

MLS: Because of the implications that it has, especially in the higher tiers of our engine.

DE: Could you elaborate?

MLS: Most of this proposal is adding values on the global RegExp object, usually related to what happened with the last match. We do matching, inline matching, at our highest tiers. We would now have to populate those values while we’re doing that kind of matching.

DE: I’m confused. You already support this API? You just have minor differences? There are some things that this contains that you don’t support?

MLS: Yes. Every implementation has parts of this proposal that they support. We cover many parts of this proposal, but we don’t cover everything. And it’s work to add them, and they could affect our performance, because now we’re populating things. Currently none of our customers care about this proposal.

DE: I see. This was an issue that I wasn’t aware of. I thought the only change here was to the sort of existing properties. So I think --

MLS: The proposal is basically adding properties to the global RegExp.

DE: We all agree that the API is bad. But I think this leads to a very concrete, good thing that we could do next: pruning the ones that are not supported in all the browsers and seeing if they can be unshipped from the browsers that do ship them. That’s a concrete thing we can do in this space, right?
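For context, the legacy statics under discussion are properties on the `RegExp` constructor itself, populated as a side effect of every successful match. They have long shipped in major engines despite being unspecified before this proposal, which is also why highly optimized matching tiers have to keep updating them:

```javascript
// Legacy match side-channels on the RegExp constructor (long-shipped,
// historically unspecified). Every successful match mutates them.
"price: 42 EUR".match(/(\d+) (\w+)/);

console.log(RegExp.$1);          // "42"      (first capture group)
console.log(RegExp.$2);          // "EUR"     (second capture group)
console.log(RegExp.lastMatch);   // "42 EUR"
console.log(RegExp.leftContext); // "price: "
```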
DLM: A follow-on to what MLS said: it’s something that we discuss every time we do internal reviews of Stage 3 proposals for implementation, but it always kind of floats down to the bottom, because while we acknowledge there are incompatibilities between the engines, they’ve been there for a long time, so it never seems like a very pressing thing for us to implement.

DE: Is there a comment from any other implementations? We could withdraw this proposal. We could say, look, we just don’t care about interoperability here; this is just a weird legacy thing that we’re not going to fix, unlike the other weird legacy things that we’ve worked on fixing. But we should probably do that rather than leave it as a Stage 3 proposal which nobody plans on implementing.

DLM: That sounds like something to discuss in the future. Maybe since the champions are not here, we can’t have that discussion.

DE: What do you think about this proposal, SYG?

SYG: I don’t have thoughts on this proposal. I don’t really know anything about it.

DE: Should we withdraw it in this case?

DE: Okay. Does anybody want to work on this proposal going forward?

CM: I think it would probably be worth getting in touch with CPE. Mark’s connection to this is sort of tertiary. He is basically sponsoring CPE to do this work. I think touching base with the champion might be a good place to start.

DE: Great. I’m not sure if I have CPE’s email address. Do you think you could start that thread?

CM: Yeah, I could ping MM.

DE: Great. Thank you.

### RegExp Modifiers

PKA: Great. Thank you, everybody, for that discussion. Our next proposal to discuss is RegExp modifiers; the champion is RBN. According to the README, there are no tests, and I’m not sure about implementation status. This was presented a year ago. Is there anything you would like to say about this proposal?

RBN: This is in my backlog currently. I switched focus to getting `using` in place.

PKA: Great.
Thank you. Does anybody have any thoughts or questions for RBN about this proposal?

DE: Question for RBN. Would you be interested in working with more collaborators from committee on this, if you’ve had limited time to work on it?

RBN: Possibly. There’s not really much that needs to be done for it; it’s just a small amount of work, because it’s not terribly complex from a feature implementation perspective. It’s just more a matter of how much time I have had. So if there are folks that are interested in assisting, I would be more than happy to accept some additional contributions.

DE: That’s great. What do implementers think about this proposal? Are you working on it? Do you think it’s not worth it, or anything?

MLS: We looked into it. I’ve looked at what we need to do in our engine, and it’s possible, but it’s not a high priority given the other things that we have.

DE: All right. Was that a thumbs up from DLM? No, it was not. Do you have anything to report from Mozilla?

DLM: I just looked it up, and, yeah, we haven’t been tracking this closely at all. I have no updates.

SYG: For V8, it’s on the queue. I agree with MLS that it is also not high priority. I don’t have regex expertise, so I can’t speak to the work involved here or whether it’s possible or not possible. It’s on the queue for the regex folks to take a look.

DE: Okay. I think somehow this demonstrates a mismatch in priorities between, you know, subsets of the committee and what the committee had consensus on as a whole, because we agreed to Stage 3 for this, which kind of expresses that we all feel this is motivated. I think we should keep an eye on whether folks still think it’s motivated, given the prioritization.

SYG: Can I respond to that? I think historically, when things reach Stage 3, there is an agreement that it is motivated, but that does not say anything about the relative prioritization for each specific engine’s backlog.
Like, that -- I think given it was June 2022 and it’s now July 2023, that’s not terribly long. Are you suggesting that this one year and one month means that the implementers are signaling that we’re not going to do it? Because that’s not the sense I have at all. It’s just that it’s on the backlog.

DE: Probably I’m over-indexing from the unexpected hostility towards the previous proposal, and just overreacting. So, yeah, no, I agree that that’s not a huge signal.

SYG: Like, we could start saying something about relative prioritization, but that’s a different discussion that we ought to have. We just don’t have that norm right now. Things hit Stage 3 and then it’s up to each implementation to prioritize what it implements and ships, and in which order.

DE: Yeah, I think it would be useful to share different organizations’ thoughts on the priority of things. But I agree, that’s not the discussion right now.

PFC: I wanted to point out that there are partial tests for this, but they are not merged into test262 main yet because they need some work on the test generation code. So that would be a good place for somebody to get involved if they’re interested in fixing a bug and making that test PR mergeable. I think the test PR covers the new syntax, but not the behavior yet.

SYG: And I just want to say that the fact that regular expression proposals are generally slower is just because there’s, like, a single-digit number of people with JS regular expression engine expertise in the world. So I think it’s usually a staffing question, not really even a relative prioritization question; or rather, the relative prioritization is just determined by staffing rather than an actual formed opinion on how important we think something is.

DE: Good. That’s a relief. So this proposal can continue moving on.
And if anybody is interested in contributing test262 tests on the behavior of regex modifiers, that will be very much welcome.

### Duplicate named capture groups

PKA: Okay. Thanks, everybody, for that productive discussion. Our next proposal is duplicate named capture groups; the champion is Kevin Gibbons. There are tests and there is implementation work. This was last presented also about a year ago. KG, would you like to share a brief update about this proposal?

MF: KG went to bed but left me with a message. He is in California. He said that the tests have been available for like a year and he is waiting on implementations, so his message to implementers is: please, please, please implement this.

DE: Yes, and thank you to, presumably, MLS for implementing and shipping this. Is that accurate?

MLS: Yeah. And we’ll ship it probably in the next several months. It’s available in STP now, it’s available in seeds, and I’m just fixing a bug today.

DE: Oh, great. So this all seems on track.

MLS: I’d like to see some of the other engines.

DLM: I’m working on implementation right now.

MLS: I guess two of the four experts are in the room.

### ShadowRealm

PKA: That’s great to hear. And, yeah, any other thoughts or questions about this one? Seems like we’re moving along. Great. The final proposal is ShadowRealm, with many champions. There’s some work on tests, there’s some work towards implementation. This was last presented in December. Is there a champion present for this proposal who would like to share an update?

CM: Yeah, but I don’t think any -- I know MM is not here. We haven’t heard from (inaudible) for a while. (inaudible).

CM: MM and I spoke about this prior to my coming here. His understanding is that at this point we’re waiting on some web integration work. I’m not sure what that means. But that’s what he said.
DE: Yeah, I can give an update on the web integration work, but we can get a comment from DLM as well, since Mozilla provided really helpful feedback here. So we’ve been blocked on the web integration work for, I think, more than a year now. The champions were previously working with Igalia on it; that contract ended, and no one has picked it up since then. So there were multiple issues. One was that it was unclear which APIs were intended to be exposed on the web; there was kind of a lack of a full rationale for that. And then more recently, Mozilla raised some uncertainty about whether we had fully added the set of APIs, and definitely what we lack is tests. We only test the existence in IDL of things in ShadowRealms, the existence of the properties, but we need to also test the functionality. Because in practice, although there’s one point in the specification that says, oh, yeah, every place you look for the global object, just look for this other thing, that’s easier said than done and has to be replicated in different ways across the code base. We’ll need both the tests and the audit of the specification to move forward. I think Leo said in his communication with us that they’re going to look into it at Salesforce, so I think we should revisit this in the September meeting, whether that is to have an update on the progress or to consider a demotion to Stage 2 if the project remains unstaffed.

DLM: That was a great summary of our concerns. Thank you. And the only thing I’d add is that we consider implementation blocked until these concerns are resolved.

DE: Yeah, and I would note that the three browsers do have implementations of the core ShadowRealm logic, which is not a trivial thing. So it would be quite unfortunate if we could not work out this last bit.
SYG: Yeah, just that V8 has discussed the same concerns with SpiderMonkey and the SES folks. We agree with SpiderMonkey’s concerns and we also consider ourselves blocked until those steps are done.

DE: So is anybody interested in getting involved here? I would be happy to mentor them or anything. Okay, I guess we’ll leave it up to the Salesforce people, or if anybody wants to volunteer offline, you are completely welcome to. Just get in touch with me or the champions.

PKA: Thanks, everybody. I hope this was useful. I think some productive things were said and learned. I don’t know if there are any questions or comments at a meta level about this presentation: about repeating it, or about potentially doing a similar thing for Stage 2 proposals, of which there are many more. So obviously not all in plenary time necessarily, but doing a similar kind of review in hopes of prompting forward or backward progress on those. Open to any questions or comments.

DLM: I just wanted to say thank you for doing this. I thought it was very helpful, and it was nice to be able to discuss this with the other implementers at the same time. So I think it’s particularly helpful for Stage 3 proposals, but it could be interesting for Stage 2 as well.

PFC: I want to mention that some of the proposals that do have test262 tests, like JSON.parse source access and resizable ArrayBuffers, and Temporal to a certain extent, all have tests in the staging folder of test262. So another way for more people to get involved would be to help with the effort of porting those to the main tree of test262, since that is a requirement for Stage 4.
DE: I want to raise something from the chat: we do have a lot of proposals in Stage 2 and 1 that haven’t been discussed in a long time and may just be kind of unowned, and I hope that discussions like this, whether in plenary or not, can be a good way for us to decide what to do with them next. I’m really interested in any feedback that you have, offline also, about how to approach this problem.

CDA: Okay. There’s nothing else in the queue. We are right at time. So thank you, everyone. Correct me if I’m wrong, I don’t think we need a summary for this item?

DE: I think we need a summary. We have five different things we discussed and came to conclusions for each of them.

PKA: I’m happy to write that offline.

### Summary

Six (6) Stage 3 proposals were presented during this meeting:

- Temporal
- Resizable/growable ArrayBuffers
- Set methods
- Sync iterator helpers
- Import attributes
- (Async) Explicit resource management

Two (2) proposals went to Stage 3 during the meeting:

- Array grouping
- Promise.withResolvers

Five (5) Stage 3 proposals have been presented this calendar year:

- Array.fromAsync
- Float16 on TypedArrays
- Decorators
- Decorator Metadata
- JSON.parse source text access

Five (5) other Stage 3 proposals were discussed in greater detail:

- JSON Modules: This proposal is active, and awaits some work on HTML integration
- Legacy RegExp Features in JavaScript: Implementers have lost interest in implementing this proposal, and possibly the champion group as well. CM was asked to reach out to the champions' group.
- RegExp modifiers: This proposal is active and on the champions' backlog, as well as the backlog of implementers.
- Duplicate named capture groups: This is awaiting implementation. It will ship soon in Safari and implementation work is now ongoing at Mozilla.
- ShadowRealm: Implementers are blocked on HTML integration.
Champions were not available to share their updates on that work.

diff --git a/meetings/2023-07/july-13.md b/meetings/2023-07/july-13.md
new file mode 100644
index 00000000..e16128f2
--- /dev/null
+++ b/meetings/2023-07/july-13.md
@@ -0,0 +1,1243 @@

# 13 July, 2023 Meeting Notes

-----

**Remote and in person attendees:**

| Name | Abbreviation | Organization |
| -------------------- | ------------ | ----------------- |
| Jesse Alama | JMN | Igalia |
| Ashley Claymore | ACE | Bloomberg |
| Michael Saboff | MLS | Apple |
| Samina Husain | SHN | ECMA |
| Istvan Sebestyen | IS | ECMA |
| Waldemar Horwat | WH | Google |
| Shane F. Carr | SFC | Google |
| Linus Groh | LGH | Invited Expert |
| Jonathan Kuperman | JKP | Bloomberg |
| Nicolò Ribaudo | NRO | Igalia |
| Chris de Almeida | CDA | IBM |
| Daniel Minor | DLM | Mozilla |
| Daniel Rosenwasser | DRR | Microsoft |
| Mikhail Barash | MBH | Univ. of Bergen |

## Using WebAssembly as a polyfill for ECMAScript proposals

Presenter: Shane F. Carr (SFC)

- [slides](https://docs.google.com/presentation/d/1MKceo1Pn1PvuMz1WkzGwIpbT5qRNZVZRxY3rgcPJOKI/)

SFC: Yes. Perfect. I will go ahead and get started then: Wasm modules as polyfills and libraries. I am SFC. Most of you know me, maybe not everyone. I work at Google on the i18n engineering team and have been participating in TC39 for five years. Yeah. So I will give a presentation here about some of the problems that I saw as part of my work in this area.

SFC: This is a little flowchart regarding the current state of what my team is working toward with regards to portable i18n; that's "internationalization": I, followed by 18 letters, followed by N at the end, which we abbreviate i18n. ICU4X is a library we have been working on. It’s written in Rust. And one of the goals is that we can deploy it to a large set of platforms or surfaces. The way we do this is by using a C FFI for languages that compile to native, like the ones shown on the left.
As well as Wasm to deploy to the web platform, as shown on the right. So with this system, we are able to deploy to a large set of languages. This is one of the ones we are working with as an example: it’s native on both the left and the right. JavaScript is also on both sides. You know, so there’s a lot of opportunity here where we implement this once and deploy everywhere. It’s the type of algorithm you would like to implement just once, because there are a lot of inconsistencies in how the algorithms are performed. We also have this through the web platform, the ECMA-402 Intl object. So why do we need this if people can use Intl directly? One request we get a lot is that, for example, feature coverage needs to work across browser versions: if not all browsers are up to date with the latest version of the spec, you need a polyfill. The second is that people want increased locale support.

SFC: So for example, a certain browser might support 50 languages, but a certain application may want to support 70, 80 or 100 languages. You can do a polyfill for the languages not available in the platform. A third use case is wanting consistent behavior between web, server, and mobile. So, for example, in the case of Flutter, running on the Dart runtime, there is both a mobile version and a web version which have differences in behavior, but you would like to have consistent behavior. If you prerender on the server, once you deploy, you may want the same behaviour on both of those surfaces. The fourth is that teams have lots of constraints, like testing, or, you know, UX designers. We push people as much as we can toward using the browser, and most people do, but there are also use cases for an Intl polyfill.

SFC: So this is what a web app using an Intl polyfill looks like: ICU4X compiled to Wasm on the left, and ECMA-402 on the right.
Now, with the browser engine being implemented with ICU4X, the web app is able to access that behaviour via the ECMA-402 surface built into the browser, which is great.

SFC: So I will preface this next section of my presentation by saying this mostly works and it mostly gets the job done, but the reason that I am giving this presentation here is because it’s not as easy to implement as it should be. The type of model that I am showing you here on the screen is the type of thing that I think should be very easy to deploy on the web platform, one of the default ways to deploy a polyfill. The current state of the art means that there are a few more bumps in the road than there should be. I am going to go over what the bumps in the road are, and what we can do as a group to make this better for people building libraries with Wasm. One challenge — not really a challenge, but an observation I have made — is that Wasm seems to be popular as an application framework: when you build something in Wasm, it’s a big monolithic application, or a large component of the application that is sort of self-contained, where maybe it needs to render things to a canvas, but other than that, that is the extent of the interaction between the Wasm and the rest of the web platform. So, for example, you might want to deploy an online Photoshop, and it is built in Wasm, and it works, and that’s great. ICU4X, by contrast, is a library for an application primarily written in TypeScript or JavaScript. Since Wasm is seen as an application framework, using it for a library is not the focus, but libraries are a part of the web ecosystem.

SFC: The second challenge is memory management. JavaScript of course is garbage collected, but Rust and C++ use linear memory, and linear memory never shrinks. The GC can go and run, but reducing the size of the Wasm memory is never going to happen.
So garbage collected languages, if using Wasm-GC, get around this problem, but that doesn’t help us in Rust and C++. I have a demo. I will move over here. It’s super simple. I can show you the code. I have got this Wasm module. It’s called bigbox.wasm and it has a single function in it. The function is called `createBigBox()`. When I call this function, I pass in a length, and it allocates an empty buffer of that size in linear memory. That’s all it does. There is another function, `freeBigBox()`; you call it with the pointer, and it frees it from the linear memory. So that is all that the Wasm does; it has those two functions. On the JS side, I have a function called createBigBox that takes in the length, calls the big box function, puts the result in a FinalizationRegistry, and the cleanup callback prints out a message so we can see that it actually got deleted. Note that when I call `createBigBox()`, I am dropping it right away. There is nothing retaining the memory here. The object that I put in the FinalizationRegistry is dropped at the end of the function; nothing is retaining it. We have to wait for the FinalizationRegistry to go. Let me go back to the demo. So I want to click this button. I clicked it, and it allocated one megabyte of memory. We have time, so I will do it again. Yeah. I can click this several times. Nothing is really happening. Go ahead and press here. It finally collected them all at once. In this case, I allocated about ten of those objects, and then they finally got collected, which is great. Now I can allocate another ten, clicking this button a few times. Allocating more, right. You can click this a few more times. It’s fun to click. I am going to try collecting memory. And it collected the objects, but it’s not freeing up any memory in the JavaScript space. If I open the web inspector, it will show you that the amount of memory being consumed by this application is still the same.
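The create/free wiring in the demo can be sketched as follows. This is a minimal sketch, not the demo's actual code: `createBigBox`/`freeBigBox` are stubbed with a plain JS object standing in for the real bigbox.wasm exports, so only the FinalizationRegistry wiring is shown for real.

```javascript
// Stand-ins for the bigbox.wasm exports: allocate/free a buffer in
// (simulated) linear memory, identified by an integer "pointer".
const linearMemory = new Map();
let nextPtr = 1;
const wasmExports = {
  createBigBox(length) {
    const ptr = nextPtr++;
    linearMemory.set(ptr, new Uint8Array(length));
    return ptr;
  },
  freeBigBox(ptr) {
    linearMemory.delete(ptr);
  },
};

// When the tiny JS wrapper object is collected, free the big allocation.
// The GC only sees the small wrapper, not the megabytes behind the pointer,
// which is why collection can lag far behind allocation.
const registry = new FinalizationRegistry((ptr) => {
  wasmExports.freeBigBox(ptr);
  console.log(`deleted box at ${ptr}`);
});

function createBigBox(length) {
  const ptr = wasmExports.createBigBox(length);
  const box = { ptr };
  registry.register(box, ptr);
  return box; // if the caller drops this, the memory waits on the GC
}
```

Nothing forces the registry callback to run promptly, which is exactly the behavior the demo shows: repeated clicks allocate megabytes that linger until the GC eventually fires.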
Now, I can also hit this button, and once I hit it enough times it will collect the ArrayBuffers. Sometimes this runs better than others. It seems to not be clicking. There it goes. I created a bunch of ArrayBuffers and collected those. The difference: when the GC collects the ArrayBuffers, it’s freeing the memory. With Wasm it’s not. So this is not a super great situation to be in. Let’s go back to the slides. The first challenge was application focus; the second was memory management. The third challenge is binary size.

SFC: Wasm needs to ship its own standard library. Now, ICU4X works hard to reduce the binary size, which is very small, but it’s not going to be as small as JavaScript ever is, because in JavaScript we have maps and arrays and free memory allocation and GCs and everything. With Rust we ship all of that, like the allocator, ourselves. We are very small, I want to be clear: our demo is on the order of 15 kilobytes non-gzipped and less than 10 kilobytes gzipped. It’s small, but not tiny small. We have done a good job at reducing the binary size for ICU4X. We are much smaller than ICU4C compiled to Wasm; I have a graph, but ICU4X is basically 100 times smaller because of the focus we put on this. It’s not going to ever get down to zero, though. You know, as developers we like when things are zero; we want the standard library requirements to be completely zero. So that is a challenge.

SFC: Challenge 4, which I have talked about before in this committee, is async loading and modules. Wasm of course requires an async call to compile or instantiate a module, so everything that depends on it is also async. Normally with JavaScript, if you write a library in JavaScript only, all the constructors and everything are regular JavaScript functions and sync; they don’t need to be async. But with Wasm, everything automatically has to be async.
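This constraint is why Wasm-backed libraries typically expose an async factory. Below is a minimal, self-contained sketch; the `WasmAdder` class and the hand-assembled module bytes (a tiny module exporting `add(i32, i32)`) are illustrative, not ICU4X code. A real polyfill would instead load its .wasm file, e.g. via `WebAssembly.instantiateStreaming(fetch(url))`.

```javascript
// Tiny hand-assembled Wasm module exporting add(a, b), kept inline so the
// sketch needs no .wasm file on disk.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                               // one function of type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // body: local0 + local1
]);

class WasmAdder {
  #exports;
  constructor(exports) {
    this.#exports = exports; // effectively private; use WasmAdder.create()
  }
  static async create(bytes) {
    // Compilation/instantiation is the only step that has to be awaited.
    const { instance } = await WebAssembly.instantiate(bytes);
    return new WasmAdder(instance.exports);
  }
  add(a, b) {
    return this.#exports.add(a, b); // synchronous after construction
  }
}

// usage: const adder = await WasmAdder.create(wasmBytes);
```

The trade-off is the one described above: the one `await` in `create()` propagates outward, so either the caller's setup path becomes async, or the app defers startup until the Wasm is ready.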
So when building an application with a Wasm library, you sort of have to pick between the lesser of several evils for how to deal with this. One is to wrap the Wasm in async JS constructors. That’s like, if you want to create a number formatter, like Intl.NumberFormat but created in Wasm, it has an async factory: you have to await the new number format. That’s one way to do it. The second way is that you can use async modules, which are fine and great. They have problems that many of the people in this committee have talked about: they kind of work, and it would be great if they worked better. And they kind of solve the problem. But once you have an async module in your module graph, you have to start thinking about async modules everywhere. It’s an integration point: I want to use this, oh, but if I do, I have an async module; maybe I don’t do it now, because they are hard to integrate. That’s not a great situation to be in.

SFC: A third solution is that you can use the non-async WebAssembly constructors, which have big red warnings in the spec that say don’t use these, in order to get the sync behaviour. It’s not entirely clear to me, at least if you already have the WebAssembly module and want an instance, how bad that actually is. It’s not clear to me why we sort of insist on that being an async function. Maybe we can get some more clarity during the discussion on that. The fourth is to just globally defer code execution until the Wasm is ready. A lot of applications already have a piece of code which waits until, you know, DOM ready; a lot of applications don’t execute JavaScript until the DOM ready event. So you can wait not only on DOM ready, but also on Wasm ready. You can do something sort of like that, and then assume that the Wasm is already loaded. That is another way to get around this problem.

SFC: Yeah. So another challenge is ergonomics.
So it’s no secret that Wasm–JS interop is not as good as it could be. There are proposals in the pipeline to make this better. You know, the import reflection/source phase imports proposals we are considering in TC39 will help by making it possible to import a Wasm module using a regular import statement. It will still be up to you to instantiate it, but it will be in your regular imports instead of some other weird piece of code that you have to use.

SFC: Another, on the Wasm side, is interface types, which appears to have been renamed to the component model. This will allow WebAssembly and JS to basically share the type system, which will be really nice. And the one that I am really eager for and waiting for is Wasm ESM, full ECMAScript Wasm integration. Once this happens, it will make these ergonomics a lot cleaner. It seems like this is still a few years out, because we have to agree on source phase imports first and things like that before we can get to the point of actually implementing Wasm ESM. I am looking forward to when that happens.

SFC: Another problem with ergonomics: you can make tooling to work around these ergonomics problems, but right now you pretty much need to use tooling, because if you try to interop between JS and Wasm directly, it’s very, very difficult to do it correctly. Which basically means you must use tooling in order to get around these interop problems. For example, you need to use emscripten to generate bindings for you. If you don’t use emscripten, you have a lot of work to do yourself. It’s not great that it requires a lot of tooling and boilerplate to make it work correctly. It works, but it would be nice if it worked better.

SFC: So how does ICU4X approach this problem? How does it make the best of the world the way it is, to deploy Rust code as a Wasm module for ECMAScript clients?
We don’t use emscripten, because we want this low-level control, and since this is such an important feature for us, we decided that we have the time to invest in finding the right way to solve this problem. We wrote our own tool called Diplomat that generates binding code for us: it generates binding code for how to do FFI calls in Python and Dart and C++, etc., and it also generates the JavaScript bindings to WebAssembly.

SFC: I am going to reload this page. I know it’s cached, but that little spinner thing that comes up at the beginning waits until the Wasm is instantiated and then lets me use the page. I am just blocking the page until the Wasm is ready. I have the Wasm open here now. As an example, I can format a number, right? And it’s formatting for me, using this all in Wasm, client side, of course. I can change the locale. I can make the number bigger, and it changes to using the 3-2-2 grouping separators. I can turn off the grouping separators. Back to English. This has all the locales. Yeah. I can use the Hanidec numbering system here. Turn back on the grouping separator. Yes. Some cool things, formatting this number. Go to the next tab. I can do some date-time formatting examples. I do this to give you an idea of how this works. Yes, it does actually work most of the time. So I can format dates in different styles here. This is the current date and time. I can change the calendar system. We have more calendar systems that are now implemented; we haven’t put them into the Wasm demo yet, but I can do this in the Ethiopian calendar, in the Japanese calendar. Yeah. So I can do this. I can change the language. Japanese in the Japanese calendar is cool. The third demo is the segmenter; this is the popular feature right now. We actually have a full client-side segmenter that is implemented in a Wasm file, which a lot of people want.
So, this is a string of Japanese text, and these little red dots are the word boundaries. Some of the words are one ideograph and some are two, and you can see how that segmentation works here.

SFC: I am going to check the inspector. Everything is happy. I just opened the web inspector. I don’t know why you can’t see it. It looks like everything is happy. Sometimes the Wasm file runs out of memory and then we get errors in the console, but today it’s happy, so that’s good.

SFC: Let me switch back to the slides. I assume people are not afraid of code, so I will show you some of the code behind what I just showed you. This is, for example, the decimal formatter. We currently have a module, and the module exports a class called ICU4XFixedDecimalFormatter, which depends on the Wasm instance, which exports various functions. This class, ICU4XFixedDecimalFormatter, has a constructor, which is private. The main way to use it is the function called create. So you give it a data provider and a locale. So I will tell you about what this interface is. The main thing that is interesting is the boundary between the Wasm and JavaScript. We call WASM.diplomat_alloc. It uses this to allocate a buffer of length 5 and alignment 4, I believe is what this means. And this is basically the buffer for the constructor’s return value. Then we call WASM.ICU4XFixedDecimalFormatter_create, and this takes a reference to the buffer we created, which receives the return value of that function. We can’t return values from Wasm unless they’re integers; right now we can only return integers. So normally the function returns an integer, which is like a pointer into the memory. In this case, we pass in this buffer, so it puts all the return values into this buffer for us. And then, yeah, we also pass in pointers to the data provider and the locale, which are required arguments.
This `.underlying` is the pointer within the memory of this object. You pass in two object pointers, basically: the data provider and the locale. I can talk more about what those arguments are, if people are interested; they are passed as pointers. Then we have `is_okay`. This looks into the return value: the first field of the return value is like a boolean flag, saying whether it was a successful return or an incomplete return. If (`is_okay`), the other field of the diplomat buffer is going to have the value of the pointer that we need. So then we go ahead and wrap it into an ICU4XFixedDecimalFormatter: call the private constructor, pass in the underlying pointer we pulled out of the return buffer. And then I deleted the code using `edges`, which is how we handle our other objects; I deleted it because it was getting too long. I can show you the full code, if you want. Yeah. If it’s an owned object, which is the usual case, it has `true`, I believe, is what we pass in here. Then we register it into the FinalizationRegistry with the destroy function: when the GC runs, it calls the destroy function for us. Either way, whether it was successful or not, we free the return buffer, which held the return value of the constructor.
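The out-param convention just described — since a Wasm call can only return a single integer, the constructor writes a small result struct (ok flag plus pointer) into a caller-allocated buffer — can be sketched as below. The linear memory and the create function are simulated here with an ArrayBuffer, and the layout (byte 0 = ok flag, bytes 4–7 = pointer) is illustrative, not Diplomat's actual ABI.

```javascript
// Simulated Wasm linear memory and a bump allocator in the spirit of
// diplomat_alloc (names and layout are hypothetical).
const memory = new ArrayBuffer(64 * 1024);
const view = new DataView(memory);
let heapTop = 8;

function diplomatAlloc(size, align) {
  heapTop = Math.ceil(heapTop / align) * align; // round up to alignment
  const ptr = heapTop;
  heapTop += size;
  return ptr;
}

// Simulated Wasm export: writes (ok, objectPtr) into the result buffer
// instead of returning a value directly.
function fakeCreate(resultPtr) {
  view.setUint8(resultPtr, 1);               // byte 0: success flag
  view.setUint32(resultPtr + 4, 4096, true); // bytes 4-7: "pointer" to object
}

function create() {
  const resultPtr = diplomatAlloc(8, 4);     // caller-allocated result struct
  fakeCreate(resultPtr);
  const isOk = view.getUint8(resultPtr) === 1;
  const objPtr = view.getUint32(resultPtr + 4, true);
  // (a real binding would free the result buffer here in both branches)
  if (!isOk) throw new Error("create failed");
  return objPtr; // wrap this in a JS class holding `.underlying`
}
```

The JS side then reads the struct fields out of linear memory with a DataView, exactly the `is_okay`-then-pointer dance from the walkthrough.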
Or I can talk to you later about it: it writes out to a buffer, converting the string from UTF-8 to UTF-16. A lot of this code is code that, if you use Emscripten or wasm-bindgen, is generated for you. But since we are the library, we wanted to implement this ourselves and see if we can do it and what the pain points are. This is code we implemented ourselves, using our tool, Diplomat. The last slide here is about explicit resource management. + +SFC: The last – close to the end of the slides. Explicit resource management. ICU4X uses the FinalizationRegistry, which is not the best solution for this because, you know, Wasm memory grows in one direction. So one cool solution we can use – thanks to Ron Buckton for explicit resource management – in addition to doing the stuff advertised for file handles, it does a good job with Wasm objects. The idea is that the memory model of C++ and Rust is largely that you have one owner for an object, and once the owner is done using it, you free it right away. You don’t wait for a GC to do it. So one thing we can do with explicit resource management is take that model and use it in JavaScript. When you have a `using` declaration for a variable, instead of assigning the variable to a `let` or a `const`, then you basically have this model where you create an object and then you destroy the object at the end of the enclosing scope, which is like the Rust and C++ memory model. And we can now do that in JavaScript. So one thing that I would like to do with these objects is to implement the disposable interface: basically, when you create an ICU4X decimal formatter, it will implement `Symbol.dispose`, which when called frees the memory from the buffer right away. We don’t have to wait for the FinalizationRegistry to decide it’s a good time to do it. We do it right away, which is consistent with how it works on the Rust side, which is cool.
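A minimal sketch of the pattern SFC describes, with hypothetical names (`WasmHandle`; the `freedPtrs` array stands in for a Wasm `destroy` export). The eager free via `Symbol.dispose` is what a `using` declaration would trigger at the end of its scope, and the FinalizationRegistry stays as a GC-time fallback that is unregistered on dispose so memory is never freed twice:

```javascript
// Polyfill the well-known symbol on engines that don't ship it yet.
Symbol.dispose ??= Symbol("Symbol.dispose");

const freedPtrs = []; // stands in for calling wasm.some_type_destroy(ptr)
const registry = new FinalizationRegistry((ptr) => freedPtrs.push(ptr));

class WasmHandle {
  #ptr;
  constructor(ptr) {
    this.#ptr = ptr;
    registry.register(this, ptr, this); // GC fallback, keyed by the wrapper itself
  }
  [Symbol.dispose]() {
    if (this.#ptr === null) return;    // already freed: make double-dispose a no-op
    registry.unregister(this);         // prevent the fallback from freeing again
    freedPtrs.push(this.#ptr);         // eager free of the Wasm linear memory
    this.#ptr = null;
  }
}

const handle = new WasmHandle(42);
handle[Symbol.dispose](); // what `using handle = new WasmHandle(42)` would do at scope exit
handle[Symbol.dispose](); // second call is a no-op
```

This is a sketch under stated assumptions, not ICU4X's actual binding code; the real wrappers also track ownership and edges as described above.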
+ +SFC: Now, if people don’t use `using`, or forget to use it, we still have the FinalizationRegistry like we did before. And when the FinalizationRegistry runs, we will check if `Symbol.dispose` was already invoked, to avoid a double free. This solves problems, but it is not a perfect solution. One question I get a lot is: what if an instance gets saved on a field of another type? I might have an object that owns several Wasm objects, and they all want to get destroyed at the same time. Does that mean I need to implement disposable on the wrapper objects too? So it’s a little bit of a rough edge. It’s not a perfect solution. But yeah. + +SFC: So these are some suggested discussion topics. One is: do we consider this a priority use case? I certainly hope so. It seems like it has been considered as a use case, but it has not been in the dialogue as much as I would like it to be. Two is: shall we recommend the disposable pattern? If you’re using a Wasm library built with a compiled language like Rust or C++ – not a GC language, but a language that uses the linear memory – is this a good pattern? Should we recommend people use it? And the third is: what are we thinking in terms of the short, medium and long-term plans for Wasm module loading? I have heard different answers to this question – what is the long-term vision for Wasm module loading? Async? Sync? What is going to be the situation with interface types? I feel like I have seen a lot of people with slightly different angles on this question. And I think this would be a good time to discuss what we see as that 3-year or 5-year vision for what we want Wasm module loading to look like in the system.
The reason I bring this to the group is that this is something that I feel we as TC39 should consider when designing these types of proposals, and take more seriously as a use case. And I wanted to bring it to the floor because the things that we currently have implemented here make this not as clean as it really should be. + +SFC: So I reserved 60 minutes: 30 minutes for the presentation and 30 minutes for discussion. I would like to hear some thoughts from some more people besides just me here. So yeah. I can turn it over to the chairs to run the queue. Thank you. + +USA: Thank you, SFC. First up in the queue we have DE. + +DE: For Wasm ESM integration, you noted this idea that, in terms of what should be implemented in what order, source phase imports should come first, then interface types, and then Wasm ESM integration for instantiated modules. Earlier in the meeting, LCA and GB laid out another possible ordering where we go ahead with both kinds of Wasm ESM integration now, even though it means that when you call a function that’s exported by Wasm, you have to do it in a somewhat obscure way. I think it’s possible to make a directly usable Wasm module for ESM usage by using reference types and such, along with importing certain utility functions implemented in a JS ESM module. This could then give you a Wasm module that can be used directly with a high-level JavaScript interface. Do you think that’s a reasonable approach? Should JS engines/web browsers enable Wasm/ESM integration for instantiated modules now, or take this more cautious approach? + +SFC: Yeah. It’s a good question. I think these 3 bullet points are not intended to necessarily be in order. These are 3 somewhat related but separate work paths, working toward solving this problem in a nice ergonomic way.
I don’t think they necessarily need to be in that order, or that we need to wait for interface types before being able to load an instantiated module that we generate wrapper code for. As I showed on the other slide, you still have to generate that code, and it would be really nice to get to a point where we don’t need to generate this wrapper code as much as we do currently. But in terms of the baby steps we need to take to get to that end goal, source phase imports is definitely something we need first. And then if we can solve the async module instantiation things, that is also a good step in that direction. And I believe that’s what Wasm ESM is trying to do. In other words, yes, I believe the answer is yes. + +DE: This async module instantiation thing is interesting and also relates to what we were discussing with the deferred module imports. That is something we should consider iterating on with Wasm ESM integration. Also, I made this suggestion that maybe there’s a set of JavaScript functionality that could be imported from Wasm such that you don’t have to generate code. Based on your experience with Diplomat, is that possible? + +SFC: Can you repeat that question? + +DE: Rather than generating code, would it be possible to import a fixed set of functions that are implemented in JavaScript, for your binding logic? To pass parameters to, rather than generating code? + +SFC: We have some parts of this in the Diplomat runtime here. We have a JavaScript runtime. It’s like 500 lines of code. It does some of the UTF-8 conversions and those things. It interacts with this writable stuff, with reading the pointer off the return value of this function. So we do have a JavaScript runtime that helps us with some of these low-level operations for interacting with Wasm. And the amount of codegen we do here is not terribly much.
It’s 55 lines, basically. Maybe there are another couple of pieces here and there to pull out into a shared library, but I don’t see how we can pull out more than that. I feel like we still need some amount of generated wrapper code. + +RBN: Yeah. I wanted to point out that you had a question about whether the type should implement disposable, and about ownership semantics. Part of what `DisposableStack` provides is that idea of ownership semantics over lifetimes: it allows you to associate one or more disposable values that might be scoped to something, but then transfer that scope to another class by moving things out of a disposable stack into a new one that gets stored on the object, and that object – which would be this wrapper you are talking about – would implement dispose, to then be used in other places and have its lifetime managed. There’s also an example of that on the proposal repo in the README, if you are interested. + +SFC: Thanks for that. I have looked at `DisposableStack`. This is definitely a problem that has solutions, which is great. As you said, we still need to think about implementing disposable on wrapper objects, but that’s just a cost I think people will have to pay. The fact that we have `DisposableStack` in the Stage 2 proposal I think is great, and it definitely makes it possible and fairly ergonomic to solve this problem. Thank you for that. + +RBN: If you have any questions about implementation, feel free to ping me on Matrix, and I would be happy to talk about it. + +SFC: Yeah. Thank you. I sure will. + +USA: We have a reply to that from Nicolò. + +NRO: Yeah. When using classes with disposables, [inaudible] to eliminate having to manually link your dispose function to the resources.
We just had to use a decorator for the content thing and a global to link everything together, and it fits very nicely together. + +SFC: That’s a good point. Decorators. That’s not one I have looked into. I will definitely look at that. + +USA: Next up we have LCA. + +LCA: Yeah. I wanted to comment more on the Wasm ESM integration. I think the goal for source imports was to make it possible to support both the use case where you need complicated instantiation of Wasm modules, and the currently less common use case of being able to natively integrate with JavaScript. If the Wasm is built specifically to be imported from JavaScript, then I think the current Wasm ESM integration can provide a lot for you, without source imports. And I think that it’s valuable to ship both of these, because if we do not ship the non-source-phase version of Wasm ESM integration, there’s no reason for anybody to write Wasm modules that target JavaScript, and no reason to innovate on how we do bindings between Wasm and JavaScript in an ergonomic way, because there will always have to be the wrapper. So what I would like to see from engines is more support for this Wasm ESM proposal – happy to talk to you about this, how you can implement this, what order you would like to implement these things in. I would obviously love to see source phase imports, but I would very much like to see the full integration shipping in engines soon too. + +USA: All right. We have a supporting reply from DE for that. + +SFC: Actually, on this topic, I wonder if any of the implementers in the room have thoughts on this? We do have SYG on the call, but he said that we want to implement source phase imports before Wasm ESM, because there’s a series of dependencies there. That’s what I heard from SYG in terms of implementation ordering.
But do other engines have a similar perspective, or are other engines considering going straight to implementing Wasm ESM? + +DE: Igalia implemented Wasm ESM integration in an engine before source imports were invented – I can’t remember if it was SpiderMonkey or JSC. Instance imports have no dependency on source imports. Previously, when considering Wasm ESM integration, there was a discussion whether it should be changed to source imports all the time, and I think this is why JSC held back on shipping. MLS, any thoughts/updates on this? + +MLS: No, we are obviously invested in Wasm but I need to catch up with that team. + +DE: Okay. Great. + +RPR: Just to check up on the status of the spec for Wasm ESM integration – I don’t think that’s a TC39 spec, that’s on the Wasm side. Is that considered to be the equivalent of Stage 3, meaning the only thing remaining to be done is implementation? + +LCA: Yes. The spec was ready to be implemented a year ago. Due to not having clarity on whether this is a direction we want to take, the spec was since updated for source phase imports. The Wasm ESM integration spec fits both, the source phase is part of it, and it is ready to ship. + +DE: The only thing remaining is NRO’s patch on integration, or fetch integration generally, about sending the right destination. This is in progress and I think probably that is a blocker for shipping, but it should be done pretty soon. From the design space point of view, we now have the full picture. + +PFC: I am not too knowledgeable on this, but I noticed there was some discussion in the chat yesterday about whether Wasm modules are a power user feature. It would be good to come to an agreement in the committee whether we think that’s the case or not, because it could affect prioritization. I think there’s one point of view that said most JavaScript programmers are not going to use Wasm modules when programming in JavaScript.
Then there is another point of view: that may be true for JavaScript, but if you want to have any other language interoperate with JavaScript, you need Wasm modules – so it’s not a power user feature, but a core feature for using other programming languages on the web. So I thought it might be good to surface that, because it could affect how we make decisions in the committee. + +DE: I want to hear LCA’s point of view on whether source phase imports are likely to be generated by tools or used directly as a power user feature. When using source phase imports directly, you have to instantiate the Wasm module with imports. That’s somewhat more set-up to do; somewhat lower-level. Ordinary [module instance] imports, on the other hand, are fully linked for you. Once we get the ecosystem ready – whether through a pattern like I described earlier, or ideally, eventually, through interface types or the WebAssembly component model which may come in the future – I think that should be the version that’s used by most developers. This is why I felt strongly that we don’t switch from instance to source import semantics for normal import statements. LCA, how do you feel? + +LCA: They have slightly different use cases. Maybe they can both be power user and non-power-user things, I think. It is obviously significantly easier to not do manual linking. But some modules do not require manual linking, or linking at all; they are fully self-contained. In most cases, this is definitely less complicated and less work to understand. But yeah, it is also less feature-full. Tooling will generate source phase imports for the time being, just because a lot of existing tooling is built around requiring specific imports. They are specific to the compile input. The support code is generated together with the Wasm itself, and these are not generic.
So there’s a lot – you see this specifically in Go, and in Rust’s wasm-bindgen: these are heavy on support on the JavaScript side to make this work, and they will only work well with source phase imports for the time being. As we ship Wasm ESM integration, hopefully these tools will have incentives to try to make the space better here. + +DE: (from queue) “Correct. I was talking about source imports being rare, easy and that’s all.” + +RPR: I want to say that even if this kind of usage is rare, this whole topic is about using WebAssembly as a library, which means somewhere in your NPM dependency tree there is a lower level using WebAssembly. Given the way it works, if we do our job right this ends up getting used in a significant number of apps, even if not used directly. + +DE: You mentioned the `FinalizationRegistry` callback accounting for whether the `Symbol.dispose` function had been called. I was hoping that it wouldn’t be necessary to store an extra bit, and instead the `Symbol.dispose` method could call `FinalizationRegistry.prototype.unregister`. Would that work? + +SFC: Yeah. Very likely. I haven’t actually gone and implemented this disposable interface yet for these types. But when I do, I think absolutely, yeah. That sounds like a very clean way to solve the double-free problem. Thanks for the pointer. + +RBN: This is somewhat related to the discussion yesterday about whether or not to even consider a recommendation for hosts to do this type of cleanup with a FinalizationRegistry – doing the cleanup if you drop a disposable on the floor. If we had that recommendation – that was the example in the poll question I have on the resource management proposal – you would unregister the disposable resource, or whatever you track with the FinalizationRegistry, on dispose, so the finalizer isn’t even called. + +DE: So yeah. I responded negatively to that recommendation.
But that was in the context of external resources like file handles. When it’s a thing like memory, this is exactly why we added FinalizationRegistry in the first place, despite the hazards for external resources, because it is valid to use when it’s just to clean up memory. + +DE: You mentioned UAX14, which is Shocking⚡ because we, in Intl.Segmenter, stuck to UAX29. UAX29 covers breaks like graphemes, words and sentences, and UAX14 is line breaking. We specifically decided to omit UAX14 from Intl.Segmenter in this committee because the idea was, there aren’t valid use cases for line breaking without more knowledge about the rest of, you know, text rendering ([past discussion](https://github.com/tc39/proposal-intl-segmenter/issues/49)). At the same time we made that decision, there was hope around CSS Houdini custom line breaking APIs. At this point, I think there’s more hope around the [canvas formatted text API](https://github.com/WICG/canvas-formatted-text/blob/main/README.md), proposed by Microsoft, which would have text metrics with line breaking support for multi-line text rendering. I am interested in that API progressing. So your library does touch on things we decided on in TC39 before. What is your experience here? + +SFC: Yeah. We implement both UAX29 and UAX14. I should have been more clear about that during the presentation: we implement both of them. And currently, if you want UAX14, you have to use our library, because it’s not in the web platform. Hopefully it will be at some point. Many of the users are using it because they need UAX14. They need it because they’re doing text layout into canvas. So they ship UAX14 and also ship a bunch of tools in Wasm files to do this rendering. They ship the whole text layout suite in a Wasm file – you can go look it up, it’s called canvaskit.wasm. It bundles all the features into one Wasm file. It would be great to take that and turn it into canvas formatted text or CSS Houdini.
I have tried to reach out to the Champions of those proposals, and they haven’t been super responsive to me. When I have heard responses, it seems like, yes, we are still basically intending to do this; it seems like it’s one of the many things where it’s a prioritization question, more than anything else. + +DE: Do you think canvas formatted text would subsume the cases for UAX14? + +SFC: So, if I go back to the other slide about why we do this: one reason is an Intl polyfill. That’s going to be a need for a long time. There are also clients who – again, I wear two hats. I am both the convenor of TG2, for the ECMA-402 standard, and the ICU4X tech lead. I want the clients to get their problems solved. So if it’s in the platform, hopefully many clients will use that directly. But there are also reasons why clients want to continue to use the Intl polyfill. One of the use cases I have heard from clients is: we really want very specific behaviour for how to do breaking around URLs and email addresses, and we are not happy with what the web platform is doing there. It’s inconsistent across the different engines and implementations. They basically want behaviour that they can predict, so they know what is going to happen. And by shipping the ICU4X implementation, they can get the consistency they are after. I definitely think it’s a benefit for the web to have formatted text available. It subsumes some of the ICU4X use cases, but certainly not all of them. + +DE: Okay. Thanks for explaining. I am surprised by that email example. I thought CSS was pretty specific about how line breaking is supposed to work. + +SFC: I can share more details offline. + +DE: You expressed a hope to get back to interface types or the component model, which I hope for too. This has had a complicated history.
Previously, Google investigated interface types, in particular, for checking the efficiency of passing strings between WebAssembly and JavaScript or WebAssembly and the platform. And it didn’t make sense because you still had to do a copy. And now we have the Wasm stringref proposal which is a better solution. But, in addition to having an incomplete story for strings, interface types previously solved this problem of ergonomic interaction between JavaScript and WebAssembly. Current component model development has been trying to solve a broader problem, of interaction between lots of languages, which is great, but I haven’t seen progress from that world on concrete JavaScript/Wasm interaction improvements. What do you think about that whole space? And I am wondering, from this committee, is there interest in establishing more of a connection and interaction between TC39 and Champions of the component model in the Wasm CG? + +SFC: Yeah. I can talk about that. One specific type of problem that would be nice to solve, and that eliminates a lot of the boilerplate here, is having an actually well-defined way to talk about structs. Right now I have this sort of opaque return value, the buffer of length 5 and alignment 4. It gets filled in with stuff and we read the stuff out of it. And the layout of the struct is specific to the compiler; Rust and the other compilers might not agree on what the struct layout is. It’s opaque as far as Wasm is concerned. Hopefully the component model or interface types will help resolve this, so I can express what a struct layout is and what its fields are. Wasm GC sort of does this, but it requires GC integration. I guess that’s the type of thing that I hope one of the proposals works toward. And again, the purpose of this presentation is more to illustrate the problem space, rather than to propose specific solutions to it.
I highlighted this as a potential way to help clean this up, but it may or may not be the actual solution to the problem. + +USA: Thank you SFC and others for the discussion. We are slightly over time. + +SFC: We also started a few minutes late. + +USA: Yes. So if you can conclude in like 2 or 3 minutes. + +SFC: Okay. Yeah. Did you have any more responses, DE? + +DE: We can talk offline. + +SFC: We can do it offline. Cool. We got through the queue, ending on time. I am glad I reserved 60 minutes for this. As I just said, the purpose of this presentation was to lay out, wearing the hat of a user building a Wasm library, the challenges and road blocks I hit along the way. I mentioned five or six or seven proposals in the slide deck, and one of the purposes was to try to draw this big picture of how these tie together to solve a real use case. We have talked about all of them in isolation; it’s really good to see how they tie together to solve a real problem that has big implications for deploying libraries written in any language to the web platform. So thanks for all the discussion that everyone has given. And yeah. + +USA: Thank you, Shane. + +### Summary and conclusion + +The purpose of this presentation was to lay out the perspective of a user building a Wasm library, and to highlight the challenges and road blocks encountered. Several (five to seven) proposals are noted in the slide deck. +One of the purposes was to draw a big picture of how these tie together to solve a real use case. The committee had discussed each of them in isolation, and it is good to see how they tie together to solve a real problem that has big implications for deploying libraries written in any language to the web platform. +A detailed discussion was had by everyone.
+ +## Optional chaining in assignment LHS for stage 1 or 2 + +Presenter: Nicolò Ribaudo (NRO) + +- [proposal](https://github.com/nicolo-ribaudo/proposal-optional-chaining-assignment) +- [slides](https://docs.google.com/presentation/d/1KL9MRyxprgXDEsxT8Ddrdro074L3fQm88zXHsWL-Dwk) + +NRO: Okay. This is a stage 0, I guess, proposal, about optional chaining on the left side of [inaudible]. We can use optional chaining when reading properties and also when deleting properties, but when the proposal was initially designed, the need for optional chaining assignment was not yet clear. Now we have a lot more real-world experience with using optional chaining, and I realized that the more I use it, the more I find cases where I wish I could use it in assignments. I talked with people in the community, and I realized it’s a shared feeling. + +NRO: Some examples are taken from projects I sometimes look at; this is an example from Babel, which assigns some properties of an options object only if it is not null or undefined. There are also examples from libraries – [inaudible] and in many places you find pieces where there is some assignment to objects only if they are defined. I have examples. There is even a regular expression you can run to check if you would benefit from this. The expressions are not perfect, but they helped me find all the places where I could use this in my next project. + +NRO: Okay. How would this work? In the basic case, we have an expression like `expr1?.prop = value`. Basically, the assignment happens if `expr1` is not null or undefined; otherwise, the expression returns undefined. And the value on the right-hand side is not evaluated if `expr1` is null. This is similar to how it would behave if you used an `if` statement instead of optional chaining, or if you had a setter function that you called on `expr1` using the question mark. We also have operator assignment, like `+=` or `-=`, and this works exactly the same way, both for mathematical and for logical operators like `||=`.
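For illustration, the short-circuiting semantics NRO describes for the basic case can be approximated in today's JavaScript with a hypothetical helper, where the thunk models the fact that the right-hand side is not evaluated when the base is nullish:

```javascript
// Hypothetical desugaring of `base?.[key] = rhs` (not real syntax today):
// if the base is nullish, skip the assignment AND the evaluation of the RHS.
function optionalAssign(base, key, rhsThunk) {
  if (base == null) return undefined; // short-circuit: rhsThunk never runs
  return (base[key] = rhsThunk());    // otherwise, a normal assignment
}

let rhsEvaluations = 0;
const target = { x: 1 };

optionalAssign(target, "x", () => { rhsEvaluations++; return 5; }); // assigns 5
optionalAssign(null, "x", () => { rhsEvaluations++; return 7; });   // skipped entirely
// target.x is now 5, and rhsEvaluations is 1
```

This is an editor's sketch of the described behaviour, not part of the proposal text itself.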
+ +NRO: However, there are some other cases that we would need to consider, such as what happens when the left-hand side expression is parenthesized. Right now, parentheses stop the short-circuiting behaviour, so I think there’s some expectation that this would also happen with assignments. And it’s already valid to have optional chaining nested within a normal member expression on the left-hand side of assignments. In this example in the first bullet point, it would throw if `a?.b` is null. So my proposal is that we just keep this behaviour: if we have a parenthesized optional chain and it’s nullish, we throw an error. The question is when to throw, because right now the two cases don’t have the same behaviour. If you have a simple assignment, we first evaluate the expression we’re assigning and then throw. If you have assignment with an operator, we throw before evaluating the value on the right-hand side. What I think this proposal should do is: it obviously does not throw anymore when we use optional chaining, because the parenthesis now stops the optional chain, but we still keep the existing behaviour with regard to evaluating the value on the right-hand side. It would not be terrible to make this a syntax error and so avoid the question, but again, there are cases in which a nested optional chain is already valid, so banning only this one case while the other cases remain valid might cause some confusion. + +NRO: So I mentioned assignment. But there are still other places in the language where it might be possible to support optional chaining, but it doesn’t work, and I am not including those in the proposal. What are they? Well, `new` – we don’t support using `new` with optional chaining, but this is not related to assignment. Then we have object destructuring, where you might want to assign to a property instead of to a binding. But, A, I have never seen real-world cases in which this would be needed.
And, B, there are some hard-to-answer questions regarding how short-circuiting works, because you cannot short-circuit the object and the property access together. It’s weird that a property access which is visually on the left of `a?.b` would short-circuit. On the other hand, it’s also weird to keep it as an optional chain. And also, we have assignment in `for-of` loops. Again, I have never seen a use case for those with optional chaining, so I am not including them as part of the proposal. + +NRO: And there is one last thing that might be included in the proposal, and I would like to hear what you think about this, which is the `++`/`--` operators. When using them in postfix position, the semantics are obvious once you know how optional chaining assignment works. However, when they are in prefix position, I find it weird that the `++` short-circuits based on something that happens visually after the `++` symbol. So the options are either to support both, to only support the postfix version (the prefix version can just be written with `+= 1`), or to support neither. That might be my preference order, but I am happy to hear from others. + +NRO: What do you think about this proposal going to Stage 1? And if we get to Stage 1, I also plan to ask if we have consensus for Stage 2 later. + +USA: All right. So we have a queue. First up we have WH. + +WH: I have a couple of concerns. One is somewhat small, and the other one is not so small. Let’s start with the small one first: parenthesization. + +WH: `a = b.x` and `a = (b.x)` currently mean exactly the same thing. +`b.x = a` and `(b.x) = a` currently also mean exactly the same thing. Parentheses are used for syntactic grouping but do not change the semantics at all. We should keep that property. It would be weird if the same didn’t hold for the next four cases: + +WH: `a = b?.x` and `a = (b?.x)` currently mean the same thing.
The proposal makes `b?.x = a` mean something different from `(b?.x) = a` via a purely semantic rule. + +WH: Finally, the semantics of `a = (b?.x).y` and `(b?.x).y = a` are existing behavior and don’t contain surprises. + +WH: So the thing that bothers me is that the proposed semantics have a special case to see if an expression, which can be one of many different forms, is a parenthesized expression or not. And if something is parenthesized, then the value undefined is treated differently than if it weren’t parenthesized. This is a very dangerous thing to do because in the future there may be other things which can produce undefined. + +NRO: So we should ignore the parentheses and make the parenthesized version behave exactly like the version without the parentheses? + +WH: Yes. I understand why you did it. Initially I thought that, yeah, that made sense. But after I thought about it for a while, I found that special casing parentheses in the semantics doesn’t really add anything here. + +RBN: Yeah. It’s true that we don’t special-case parentheses for simple assignments. But we do special-case parentheses for destructuring, so there is a precedent in the language for special-casing parentheses on the left-hand side of an assignment: + +```javascript +a = { b: x }; +a = ({ b: x }); +( { b: x } = a ); +( ({ b: x }) = a ); // does not work +``` + +WH: Isn’t it just to make the destructuring syntax work? + +RBN: It’s not necessarily just for syntax. I think even KG noted this as well in chat; we specifically called out that you shouldn’t be able to use that on the left-hand side of an assignment for destructuring. We don’t allow it for object or array destructuring currently. + +WH: Not having parentheses there is necessary to make the grammar behave. I see that as more of a syntax thing. Are you aware of any cases where we changed the semantics of an expression’s value based on whether that expression is parenthesized or not?
RBN: We care about how parentheses around optional chaining change the semantics of optional chaining. So we already have precedent for that on the optional chaining side as well.

WH: The parentheses themselves have no semantic behavior that affects the value. But they break the optional chain. They change the order of precedence, just like parentheses have no intrinsic behavior when you parenthesize an expression such as `a + b * c` into `(a + b) * c` but they choose what parse tree you get, thereby changing what the expression does.

RBN: I would disagree that it has no semantic meaning when it comes to optional chains. Having or not having the parentheses has no impact on a regular expression, but parenthesizing an optional chain does. It is precedence-related, but it has a semantic meaning for that expression.

WH: If we do it that way, then we should encode it in the grammar, not in the semantics.

RBN: It's encoded in the grammar, but it has a meaning for the end result of the expression. And I am stating that I believe if we already have a special case for this within the grammar, that special case should apply as well.

WH: I think we are talking past each other. Parentheses are just a grouping operator. So they choose which grammar production you go down. For assignments you go down the same production in either case. If we want to make them meaningful, we should have separate grammar productions for assignments to something that is parenthesized and assignments to something that is not.

NRO: I think I could – your position is clear now. And it's hard to come to a conclusion now.

WH: Yeah. I understand why one would want to make `(a?.b) = c` illegal, but it's not easy. This was the minor issue. Let's get to the bigger issue I'd like to raise:

WH: `a = b.x = c?.y = d = e = 42;` contains multiple assignments. And the question is, which of these five destinations get the value 42, depending on whether `c` is or is not nullish.
NRO: So in this case, if c is an object, the behaviour will be identical to not having the question mark. But if c is nullish, the whole assignment starting from the question mark is skipped – it is short-circuited – and this will assign undefined to a and b.x and leave d and e unchanged.

WH: That's my understanding of the proposed spec as well. This is very concerning because now having a `?.` in the middle of a series of assignments changes the semantics of the whole thing, both retroactively and proactively. So if something in the middle of the series is nullish, none of the destinations get the value 42. I am not sure what to do about this. But I am really concerned that the assignment expression as a whole does not evaluate to the RHS value in this situation.

NRO: Yeah. I agree this is weird.

WH: I wonder what, if anything, we can do to avoid this kind of situation?

NRO: So one of the alternatives is to always evaluate the right-hand side. However, it's already the case that when the receiver is nullish, an optional chaining expression produces undefined. Today that knowledge only applies to right-hand values, and people would have to carry it over from the right-hand side to the left-hand side.

WH: Yeah. I know why you want to short-circuit it. But it doesn't fit into the definition of assignment very well because assignment expressions also return the RHS. So I feel uneasy about the consequences of this proposal going through in its current form.

USA: There is a question from EAO.

EAO: I wanted to check whether it makes sense to discuss this now, or whether we should be considering issues about whether to accept this for Stage 1 in the first place.

WH: This is proposed as a candidate for Stage 2 today (the agenda item name is “Optional chaining in assignment LHS for stage 1 or 2”).

DE: It is going for Stage 1.

NRO: Yes, it's still good to know that this is a problem for Stage 2, which we were potentially trying to get today.
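To make the behavior NRO describes concrete, here is a rough sketch in current JavaScript of how the proposed semantics for `a = b.x = c?.y = d = e = 42` would play out. This is a hypothetical desugaring based on the discussion, not spec text:

```javascript
// Hypothetical desugaring of: a = b.x = c?.y = d = e = 42
// under the proposed short-circuiting semantics discussed here.
function run(c) {
  const b = {};
  let a, d, e;
  let result;
  if (c != null) {
    result = c.y = d = e = 42; // inner assignments all happen
  } else {
    result = undefined; // everything from `c?.y` onward is skipped
  }
  a = b.x = result; // ...but the outer assignments still happen
  return { a, bx: b.x, d, e };
}

console.log(run({}));   // { a: 42, bx: 42, d: 42, e: 42 }
console.log(run(null)); // { a: undefined, bx: undefined, d: undefined, e: undefined }
```

This is exactly WH's concern: in the nullish case the expression as a whole no longer evaluates to 42, and `d` and `e` are left untouched.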
USA: Then we have KG.

KG: This problem, I agree with – the problem of what to do when you have an assignment expression where the LHS is one of these chains, the thing WH is pointing out. There is an easy fix, which is strange and breaks compositionality, which is to say that this form of expression is only legal in statement position. That covers a good 99.5% of uses of assignment. And – yeah. It's annoying to break compositionality in that way. But it solves this problem. If we think this is useful, most of the value for it comes from using it in statement position. So I would at least be fine with that.

USA: Let's – you want to respond to that. Next up, we have DE.

DE: So I am a big +1 on this feature for Stage 1. To go through the history of this, the decision to include only the three constructs of property access with dot, property access with square brackets, and call was largely informed by an analysis of CoffeeScript syntax and how it used optional chaining ([issue](https://github.com/tc39/proposal-optional-chaining/issues/17)). Because CoffeeScript supports optional chaining in many contexts, more than being proposed here. The omission of this particular case of assignment was in the context of assignment being the next most popular thing among things omitted. It does feel normal to permit this case. When we added optional chaining we didn't have any strong counterargument for this. It was minimalism, starting small, and we anticipated that gaps would be identified over time. So I think based on the evidence we have collected from the JavaScript ecosystem at this point, it seems great to continue here. I am pretty optimistic that we can work out this series of grammatical questions that WH and RBN are raising.

KG: Getting back to the – the parenthesized left-hand side expression being legal is completely an accident of history. It is relatively straightforward to forbid.
You add an error that says an assignment expression is an error when the left-hand side is a parenthesized expression and, you know, its assignment target type is an optional chain or whatever. We can make it illegal.

BSH: So it is currently the case that, exceptions aside, for a normal `=` assignment the right-hand side is always evaluated. And I know that the tool I work on definitely assumes this, all over the place. I am sure I am not the only person who works on a tool that parses JavaScript and makes similar assumptions. And I also think it's a problem for human readability. I am not terribly comfortable with changing it, which this would do. I guess I would feel more comfortable if the right-hand side always got evaluated, but might not get assigned.

EAO: Yeah. What he said. The same concern.

NRO: Okay. So two things here. One is that if you consider, for example, computed property access: before the introduction of optional chaining the expression was always evaluated, and optional chaining changed this using an explicit visual indicator. Also, in this case, there is an explicit syntactic indicator: you have the equals in the same expression as the assignment. So any tool can easily tell this is happening, similar to how, when you have a long optional chain with a question mark at the beginning and then normal property accesses, you still have to go to the beginning of the chain to see that there was a question mark and so things might not be evaluated. It's syntactically located in the same place.

BSH: Yeah. I don't find that terribly convincing. Mainly, now the left-hand side of an operation is changing what the operation does. That's just really strange. So anyway . . . I would feel more comfortable with a different assignment operator. But I will leave it there.

WH: Yes. I agree with the concerns for the same reasons. And I would point out that without short-circuiting the parenthesis issue falls away also.
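For context, the existing right-hand-side precedent NRO refers to looks like this in current JavaScript: once a chain starts with `?.`, later plain accesses in the same chain may silently not be evaluated, and parentheses end the chain:

```javascript
const a = null;

// The `?.` at the start short-circuits the whole chain, including
// the plain `.c` access later in the chain: no TypeError is thrown.
console.log(a?.b.c); // undefined

// Parentheses end the optional chain, so `(a?.b).c` evaluates
// `(a?.b)` to undefined and then throws on the `.c` access.
let threw = false;
try {
  (a?.b).c;
} catch (e) {
  threw = e instanceof TypeError;
}
console.log(threw); // true
```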
NRO: Short-circuiting follows the existing pattern; optional chains already have similar behavior. Not having short-circuiting would make this a very dangerous refactor from new code to old code.

WH: Perhaps it should have a different form. We might change the syntax to show the assignment could short-circuit. Maybe introduce a different assignment operator or do something to indicate that the assignment might short-circuit.

WH: I understand the short-circuiting is desirable. The current syntax is problematic for reasons that several of us stated.

NRO: I mean, this already changes the syntax within the assignment expression: the left-hand side of the expression has a question mark marking it as optional, just not in the operator itself.

EAO: So when looking at the examples of existing code in various repositories that you linked, the sense when reading the current code is that I understand right away what this code is doing. It's an if statement followed by an assignment. And it's really clear to read. This proposed syntax is not. It opens all sorts of questions, like: what is happening here? And at the most, it's saving a few characters and possibly 2 lines of code. So it seems like it's optimizing towards code golf rather than legibility. My position is not to advance this.

MLS: Isn't it true that in other languages with optional chaining assignment, they don't evaluate the right-hand side, since optional chaining assignment is essentially “if X is an object, then do the assignment; if X is not, don't”? That's basically a rewrite of that?

NRO: Optional chaining had a similar discussion – within the same algorithm, where it was mostly used to avoid some explicit checks. And also – I forgot the other point.

MLS: That's my understanding, how it's used in other languages. [inaudible] the if check before you do an assignment.
Which means that the right-hand side is not evaluated.

NRO: My other point was that this proposal reduces mental complexity. Currently you can add the question mark to member expressions in some positions but not in others. The proposal doesn't cover all the cases, so the argument might seem weak, but it covers the most common of the missing cases. Right now you have to explicitly remember that optional chaining is banned in this position.

USA: Okay. There is a slight bit of a queue. And you have approximately 2 minutes left on the timebox.

TKP: Yeah. I am just unsure about the ergonomics, because if you eliminate the if, you eliminate a whole branch of your program. And normally you just want to do something with your assigned variable and not just assign it and return something. And if you want to do something with it, you have to check again and again and again whether this property of this object is assigned or not – whether the object is even there.

NRO: Well, this proposal is explicitly for the cases where you are assigning and then not directly using the value. And those are already common. It doesn't cover the use case where you then use the value. But assigning to a property of an optional object is not rare at all. How are we doing with time?

USA: We are almost on time. A minute or so. LCA?

LCA: Yeah. I want to respond to the point that [inaudible] this is all about avoiding the if statement. Yes, that is exactly the point. That is also exactly the point for optional chaining on the right-hand side. You have a long chain with multiple question marks in it. This means you have to repeat the member expression multiple times, within the if statement and then within the assignment. The whole point of optional chaining is to avoid this.
On the left-hand side: you have an object that could be null, containing another object that could be null, containing another object that could be null, and you want to assign if all of these exist. Then you have an if statement with a member expression that gets larger and larger, where the smallest part of the member expression is repeated three times. And then you have the final member expression also present in the assignment, which is exactly the case that you're trying to solve with optional chaining on the right-hand side. I don't see what your point is, because if we don't want this, then we shouldn't have done optional chaining on the right-hand side either.

EAO: So I am happy with optional chaining on the right-hand side. The sort of multiple-level access that you were describing there, where a thing might be there or might be null, happens relatively often on the right-hand side. I am not convinced this happens with any such regularity on the left-hand side, where if there is a null at some point, it is very rare that you might want to go a couple levels deep and possibly do an assignment and not have an alternative that you are doing if this is not the case, such as creating an object, or otherwise going down a completely different branch in your code. I don't see the multilevel optional chain on the left-hand side being as common as on the right-hand side. There it's increasing clarity, whereas on the left-hand side, it's changing an invariant about assignment for not enough value in what it's bringing as a benefit.

LCA: So maybe this is something to – if we get to Stage 1, during Stage 1, find more examples of this occurring, of real-world code that does this multilevel optional chaining on the left-hand side or could benefit from it.

EAO: So what is the request that you are asking for, Stage 1, or still asking for –

NRO: Like, it is Stage 1, because the question is: do we want to do this proposal at all or not.
Stage 2, at this point, I don't plan to ask for anymore, and I didn't schedule enough time for that discussion. I am only going to ask for Stage 1 today. Is there a Stage 1 concern or blocker, or can we do Stage 1?

EAO: What is the Stage 1 question and motivation that you are asking for? Is it this that you are presenting right now on the screen, or is it something different?

NRO: The motivation is examples like on the screen. The two examples here. But just looking at a single [inaudible] easy [inaudible] I can find more examples. And I did not find many cases of nested question marks. Most of the [inaudible] would use a single one; I found one of the cases with two question marks but not more than that.

USA: Okay. Before moving on with the queue, PKA, do you mind if we take a bit more time for this? Great. Next up we have DE.

DE: Yeah. We're coming down to the empirical question on two sides. How frequently does this occur? Three examples were presented. That's not the full picture. And on the other hand, how confusing is this to developers? We have started down the path of consulting more developers through surveys and such, to understand this question of how confusing these things are. And I wonder if we can investigate both these things during Stage 1, meaning that it's on the table for investigation. Those are the kinds of things you are raising. Also the more detailed syntax questions, but I think these two concerns are the higher-priority ones to investigate. EAO, does this address the things you're concerned about?

EAO: I think so, yes. To clarify what I was asking for: what is the problem space that is being asked to advance to Stage 1? The presentation is clearly providing a specific solution as well for Stage 2, so what is specifically the problem space that is being requested for Stage 1?

DE: I think it can be good to present concrete solutions at Stage 1. It helps guide understanding of what we are talking about.
I don't think it's always useful to ask “what problem are you trying to solve?”, but it's good to broaden the area we're discussing.

EAO: I understand. Is the problem space being requested for Stage 1 allowing for optional chaining on the left-hand side of an assignment, or is it something else?

NRO: The Stage 1 problem statement is to simplify cases in code where you are assigning properties to objects that might be null, and doing that and nothing else in the assignment. And we have a clear precedent in the language for something like that. But . . . the Stage 1 statement is just optionally assigning to potentially-null objects.

EAO: And for that problem statement I have no blocking issues for Stage 1. It's just that it was not clarified up until now what exactly is asked for Stage 1.

USA: All right. Thank you. Next up we have BSH.

BSH: Hello. So I know it was specifically called out earlier that you are saying you are not trying to make it possible to use optional chains in all of the various places where you can use a left-hand-side expression. I have doubts about how you will define the grammar, because I am concerned that that is going to be complicated. And related to that – this is frustrating for me, because I realize this isn't satisfying to tell you, but when I was implementing right-hand-side optional chaining a year or so ago, I remember coming across times where I thought, I am glad I don't have to do this on the left-hand side, because it would make X, Y, Z much harder to get right. I have not been able to dig those cases back out of my brain now. I remember it happened multiple times. Which is part of my concern here. I'm just not convinced this is a good idea to do.
Given the way you just framed the problem statement, of just trying to make it easier to avoid the repetitive statements, I guess based on that I don't block for Stage 1. But if you want to actually allow this optional chain left-hand side, I am going to take a lot more convincing, I'm sorry.

NRO: So for how to disallow this: you can, for example, change the grammar itself. With regard to implementation, besides optional chaining for reads, we already have `delete` on optional chains, which can change the object, and that didn't have similar complexities. But yeah, I think we are out of time. I would like to ask if anyone objects to the investigation of this space, which is, again, providing good ergonomics for assigning to properties of objects that might be undefined?

PKA: Explicit support.

NRO: Do we have any objections to Stage 1? To be clear, I am not asking for Stage 2 today.

DE: Explicit support.

CM: Explicit support.

USA: It seems like you have stage 1. Congratulations.

### Summary

This new proposal explores syntax to optionally assign to properties of variables which might be null. The proposed syntax is `a?.b = c`, but given that it is an early proposal the syntax could still change.

There have been three different discussion topics:

- The proposal treats `a?.b = c` and `(a?.b) = c` differently, with the first one skipping the evaluation of c if a is nullish and the second one always trying to assign it and throwing if it's not possible. This is for symmetry with existing optional chains, such as `a?.b(c)` and `(a?.b)(c)`. However, currently wrapping the LHS of an assignment operator in parentheses never alters its behavior. Is this something that we need to preserve?
- Should the evaluation of the RHS be short-circuited at all? i.e. should `a?.b = c` evaluate c when a is nullish?
The proposal currently short-circuits for similarity with `a?.setB(c)` and `if (a != null) a.b = c`, but this might not be the behavior that users expect. This also affects the result value of `a?.b = c`, since if c is not evaluated `a?.b = c` cannot evaluate to c's value.
- Should the syntax be more explicit regarding the optionality of the assignment, potentially with a different operator?

### Conclusion

The proposal has consensus for Stage 1.

## Stage 2 Proposals Meta Review

Presenter: Peter Klecha (PKA)

- [slides](https://docs.google.com/presentation/d/1YyDXM_u7U7c7O23CtR3SVQ0IY-swNHbtI_8HnAM9hXQ/)

PKA: Yesterday, we did a review of Stage 3 proposals. I think it was well received. So I am going to do it again for Stage 2. Just to give a little bit of preamble here, it was clear for Stage 3 proposals that it is important that champions and implementors be on the same page about where proposals are. The review yesterday was useful to that end, I hope. As for Stage 2 proposals there's maybe less urgency there. Two things I would say on why it would be good to do this review for the Stage 2 proposals:

PKA: People do look at the lists of proposals on our repos. They may not be informed about TC39 and may draw erroneous conclusions on the basis of outdated proposals appearing on those lists. So that's a reason to potentially cull proposals that are inactive or where subsequent activity in the spec has rendered those proposals redundant.

PKA: The other thing is there may be proposals where members of the committee are really interested in them. But they are not members of the champion group and not aware that there is something blocking the proposal from proceeding.

PKA: So here are several proposals we have heard from relatively recently. A larger window than we used yesterday: 12 months or so. Nothing to say about these proposals -- the committee is up to date on them.
PKA: Now I am going to go through some proposals that have not been presented recently, but where the champions are, I think, present. And by the way, if any of these proposals are sorted in the wrong column or have the wrong information, I apologize. I threw the slides together quickly.

PKA: One other thing: if a proposal is actively continuing, or work continues and nothing is blocking, we can just say that and move on immediately. There's no need to go into detail, in the interest of getting through everything.

### `JSON.parseImmutable`

PKA: The first is `JSON.parseImmutable`. NRO and ACE are champions for this. Would anyone like to say anything about this proposal?

NRO: Yes. This proposal is dependent on Records & Tuples; don't expect any progress on this before Records & Tuples.

PKA: Makes sense. Thank you.

### Destructure Private Fields

PKA: Destructure private fields. I am not sure if JRL is here. On Zoom?

DE: I don't know if JRL is on Zoom. But I was going to – I feel responsibility here. While I was on my break between jobs some months ago, Justin was going to present this in committee. I saw this and was a little bit uneasy about destructuring private fields being prioritized ahead of other private field features. In particular, once we work out the details for private fields in object literals, we might end up with things looking different from the destructuring private fields proposal.

DE: So I asked him to hold off on advancing to Stage 3 to work together on that. Since then, we haven't found time to look into the more general proposals, even as he's also excited about these other extensions. I would say this is in a phase where we are seeking co-champions for looking into this problem space. There are different ways that a champion could go: pushing through this thing or investigating the broader space. Get in touch on Matrix with me or JRL if you are interested.

PKA: Great. Thank you, DE.
### RegExp Buffer Boundaries

PKA: The next proposal is RegExp buffer boundaries. RBN, do you want to share a brief update?

RBN: Same thing that I mentioned yesterday regarding the RegExp modifiers; this is on my to-do list, but deprioritized in favor of trying to wrap up the resource management proposal.

PKA: Great. Thank you for that.

### Pipeline Operator

PKA: Next we have the pipeline operator, which I believe has many champions. RBN?

RBN: I don't know if anyone from the proposal is here; this has been sitting fallow for a bit. I think the debate is still going on as to what should be the topic token for the Hack-style pipeline. I don't have my finger on where that is right now. I know that discussion is still ongoing and there has been no resolution to that.

PKA: Great. Thank you.

DE: How should the committee proceed with respect to the pipeline operator? Because this debate question was opened years ago. We have changed the kind of candidates, where the topic-style pipeline has replaced the previous smart pipeline operator, but it's been stalled for a long time.

RBN: I ask that we don't – maybe we try to bring this up at the next meeting. I would like to make sure the other champions are present to be part of that discussion.

DE: Sounds good.

### WeakRef `cleanupSome`

PKA: Next we have WeakRef `cleanupSome`. DE is champion -- DE, would you like to say anything about this?

DE: I would like to propose withdrawing this. Maybe we should do this at the next meeting, since I didn't put it on the agenda in advance. This API was motivated by the idea that, for some APIs that accept a callback, if we expect important callers from WebAssembly, we should be giving synchronous versions to be used from WebAssembly, which are restricted to workers. This is still being done sometimes on the web platform.
But there is also [Wasm promise integration](https://github.com/WebAssembly/js-promise-integration/blob/main/proposals/js-promise-integration/Overview.md), which could be a path towards not having to make these synchronous APIs. We omitted this synchronous API because of the hope that such integration, or something with coroutines, would remove the need for it.

DE: Overall, I haven't heard much developer demand for this feature. Has anyone else heard demand for this?

PFC: Not really demand. But someone did write test262 tests for it.

DE: Okay. So how do people feel to . . .

DE: We may want to wait for YSV to come back.

RPR: I think we may not have all of the dialed-in audience as well.

DE: Yeah. We will return to this in the future.

### Function implementation hiding

PKA: Function implementation hiding from MF. MF, are you able to say something briefly about this?

MF: Yeah. This one, last we presented, was blocked by Mozilla for reasons of incompatibility with a security model they had, and was partly the justification for the creation of TG3: to actually further define what security properties this committee cares about. So I would say that this proposal can't really advance until TG3 can define that.

PKA: Okay. Great. Thank you for that.

### Throw expressions

PKA: Throw expressions from RBN, from 2018. Are you able to share something about this?

RBN: I want to bring this back. I actually think I have a solution that would alleviate some of the concerns that were raised about it way back when. But the main sticking point was: why have throw expressions if we will have `do` expressions? So it's essentially waiting to see what happens with `do` expressions. Once that direction is determined then I can move forward with throw expressions or withdraw. But until then, I can't really make a decision.
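For readers unfamiliar with the proposal: today `throw` is only a statement, so the usual workaround in expression position is an immediately-invoked function. A minimal sketch (the `throwExpr` helper is hypothetical, purely for illustration):

```javascript
// Hypothetical helper approximating `throw` in expression position.
const throwExpr = (err) => { throw err; };

function getName(options = {}) {
  // With throw expressions this could instead be written as:
  //   return options.name ?? throw new Error("name is required");
  return options.name ?? throwExpr(new Error("name is required"));
}

console.log(getName({ name: "Ada" })); // Ada
```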
KG: Just as the – I guess, technical champion of `do` expressions at the moment, I am not planning on working on `do` expressions in the immediate future. I am also more okay with advancing throw expressions than I was when it was originally discussed. Mostly because my main concern was that there was some chance that throw expressions might have been part of a larger project to make all statements legal in expression position in some way other than `do` expressions. And I was concerned that if this was going to be a part of a larger vision, then we would want to worry about the larger vision. I think since then – my impression is that no one has been interested in legalizing all statements in expression position without some wrapper like `do` expressions. If it is not intended to be part of a larger thing, I am not as worried about it. So if we can come to a happy resolution on the grammar thing, I don't worry about `do` expressions.

KG: I believe I was the only person who expressed that concern at the time.

RBN: Yeah. I will say – the grammar change would be to only allow a parenthesized expression after `throw`, which would give the right grammar for what can happen on the right-hand side. Because the concern was, I think, around how comma worked in those cases.

DE: If I understood correctly, part of KG's concern was that it would create two different precedences for throw. Baseline, it has to be permitted in statements with extremely low precedence, and then again as an expression with precedence at least higher than `,`. So the concern is that it adds complexity to the grammar to have two different precedences: one in the parenthesized case and one in the statement case.

KG: If it's in parentheses, then there are no precedence issues.
You can have a comma expression on the right-hand side.

RBN: Towards the concern about whether it was a broader goal: the original goal was just throw expressions, and several committee members requested we do an investigation into whether or not we wanted to have, or could have, support for other statements as expressions. And I did my investigation. And my take on the results of that investigation was that it wasn't necessary, and it wasn't really something I was interested in. The only thing that came out of it was making `debugger` an expression, because it could be, but that's about it.

KG: Yeah. Well, we should move on. But I think that to the extent that I was the person who was holding this up in the past, using parentheses sounds like a fine answer to the grammar thing, and I am not as worried about reconciling this with `do` expressions as I was previously, so . . .

RBN: I will happily bring this back once I have finished up with `using`.

DE: Kevin, would you be interested in co-champions for the `do` expression proposal?

KG: Why isn't it on this list?

DE: Because it's Stage 1.

KG: I can speak to `do` expressions briefly. I am not currently actively pursuing it. Mostly because there's a ridiculous number of new syntax proposals, and I am less interested in having infinitely many new syntax things; I think there is more value to be had from me pursuing more standard library things. Right now I would prefer to focus my time on that rather than syntax. `do` expressions are nice in some ways, but complicated in others. And if I had literally infinite time and we didn't have a bunch of other syntax that we were also doing, I would be pursuing it. Since none of those hold, I am not currently pursuing it.

DE: So I was really impressed by your previous work on `do` expressions and optimistic about it proceeding. Especially in the context of its value combined with JSX, for example.
Would you be open to somebody else working on it, or do you think that you would be skeptical of anybody else pushing it forward and, you know, block that for this sort of syntax-overload issue?

KG: I wouldn't block it for the syntax-overload issue. At the same time, I would caution anyone picking it up that I do think a better use of all of our time would be to focus less on syntax for a while. I am not going to say you can't spend your time however you want. But that is my opinion.

DE: Okay. So to summarize, it sounds like you are open to a different champion taking this over; you're just not, you know, feeling like being a proponent of this.

KG: Yes, and I wish the committee as a whole would spend less time on syntax. I don't think the cost-benefit works out.

DE: All right.

PKA: Okay. Then thanks Ron and Kevin.

### Other proposals

PKA: I would like to highlight these 6 proposals. I'm personally not aware of these having champions who are active in the committee. DE?

DE: Yeah. I proposed `Array.isTemplateObject` . . . I still think it's a good idea. It's a small proposal. If people are interested, please let me know. And then that will determine whether I pick this up again.

DE: For collection normalization, a lot of us are interested in this. Especially in the context of Records and Tuples, because part of that is about keys in maps and normalization. It's something that ACE is investigating and I think we will be hearing about this soon in committee. Or maybe not soon, but at some point.

MLS: (from queue) “We could remove "Sequence properties in Unicode property escapes" as it was subsumed by the RegExp V Flag proposal”

PKA: I'd just like to highlight that, for the remaining proposals we haven't mentioned, if anybody does want to see the proposals continue, it might behoove you to volunteer to step in and take them over . . . so cool.
Thank you, everybody. + +USA: I would like to request that the champions of all the proposals talked about today take some time and add a summary for each of the discussions in the notes. And yeah. Thank you, PKA, for the short-notice prep of this session. Thank you, PFC, for pointing that out. Great. + +### Conclusion + +Ten (10) Stage 2 proposals have been presented recently and did not need to be discussed: + +- Async Contexts +- Async Iterator Helpers +- Deferred Module Evaluation +- Iterator.range +- Module Expressions +- Module Declarations +- Records & Tuples +- Source Phase Imports +- String Dedent +- Symbol Predicates + +Three (3) proposals were identified as being fully active: + +- RegExp Buffer Boundaries (on RBN's backlog) +- Pipeline Operators (subject to ongoing deliberation within champion group) +- Collection normalization (being worked on by ACE) + +Two (2) proposals were identified as being candidates for withdrawal: + +- WeakRef `cleanupSome` +- Sequence properties in Unicode property escapes + +One (1) proposal was unblocked thanks to participant discussion: + +- throw expressions (RBN will continue work on this when he is able to) + +Two (2) proposals were identified as blocked on other committee work: + +- JSON.parseImmutable (a dependent of Records & Tuples, which is also Stage 2) +- Function implementation hiding (blocked on work by TG3) + +Two (2) proposals were identified as potentially needing new champion involvement: + +- Destructure private fields +- Array.isTemplateObject + +Three (3) proposals were not discussed in detail and do not appear to have champions who are active in committee; PKA invited anyone who is interested in these proposals to step forward and volunteer as champions: + +- Map.prototype.emplace +- Dynamic Import Host Adjustment +- function.sent metaproperty + +## Reducing wasted effort due to proposal churn + +Presenter: Michael Ficarra (MF) + +- 
[slides](https://docs.google.com/presentation/d/1V3Fg6HVC-VA41YCu0Yhqynvqhsu5kVj7tiWuVfp8S90/) + +MF: So I am going to start with some background that probably almost everyone in the room is familiar with, but I wanted to go over it again just in case, to make sure we are on the same page. There may be some slight missing details; please don’t be pedantic about it. The point is to get the idea of what the stage process communicates and what happens when. I am going through the whole thing. + +MF: So the process of a proposal starts with somebody identifying a problem space. Stage 0 is an informal stage that we allow people to assign without committee consensus. This signifies somebody had such an idea. During stage 0, that champion researches the use cases related to the idea and creates a document so they can present it for Stage 1. When we accept a proposal for Stage 1, that means we have defined the problem. + +MF: During Stage 1, we look for possible solutions, compare them and get feedback from the community. We eventually will choose one of those solutions to move forward with. And we will start writing spec text for it. When the committee advances that proposal to Stage 2, we are going forward with the solution: not every detail, but the general solution looks like the route we want to take. That signals to the champion to further invest in using this solution as the way to solve the problem. So they work out all those remaining details and fully finalize the spec text. They’ve also been assigned reviewers when Stage 2 is granted. Those reviewers will give feedback on any technical aspects, and the editors make sure it’s written in a way that can eventually integrate with the spec. + +MF: Currently, the most important stage advancement is Stage 3. What this signals from the committee is that the details of this proposal are as final as they can get without further feedback from implementations and tests. 
The committee is recommending at this point that the proposal be implemented. So what happens during this stage is that the implementation begins. Test262 tests are written. It’s not prescribed who writes them; just somebody does end up writing them. Once 2 or more implementations ship and the final spec text has been signed off by the editors, the proposal goes for Stage 4, which is mostly a formality, meaning that proposal can now go in the draft standard. So the editors are then tasked with merging it. + +MF: So that’s our proposal process. The parts of that process I will be talking about are mostly what happens after Stage 2. + +MF: This graph I have here shows some of the activities that delegates participate in. They are ordered in increasing level of effort. We have the proposal design process, which is fairly easy and lightweight to change as we are working through a proposal. Writing spec text requires a bit more effort, since we have to be precise. Test262 tests often require even more effort, since we need to consider all the combinations of values that need to interact with the proposal, and all of the paths through all the algorithms in the spec that we're testing. And then when you consider all of the work that goes into all of the different implementations in aggregate, that’s even significantly more effort than testing. + +MF: The thing I am trying to claim today is that if we delay the higher-effort activities until after we’ve completed those lower-effort activities, then we reduce the total effort expended, again in aggregate, among committee members. In other words, we should do these things in order. + +MF: So why don’t we do things in order? I claim it is a process issue. The issue arises when a proposal is Stage 3, and somebody is writing the tests for that proposal, and the tests uncover issues that require changes. This feedback leads to the proposal needing to be changed. 
But there may be in-progress implementations that also need to change with that update. But we can’t just alter our process by requiring tests before Stage 3 because that leads to a different undesirable scenario, where pre-Stage 3, the committee has not committed to all of the details for the proposal, and if the committee is flip-flopping on the design choices, that requires some possibly-very-large amount of effort to be redone when the tests are updated. + +MF: So my concrete proposal is this: a new stage, after Stage 2 and before Stage 3, where the committee has committed to this design of the proposal, agreeing not to make changes that are not based on feedback from either tests or implementation. Those would be the only reasons to make changes anymore after this point. This allows the implementer of the tests, whoever that is, to do that work without the risk of somebody just changing their mind about the design, and it also – because we don’t yet recommend it for implementation – protects implementers from redoing work, which is more work than updating the Test262 tests. + +MF: So I have a bunch of questions that I predicted people might have about this. First one is: will this slow down the proposal process? Maybe. It doesn’t have to. If a proposal is small, it can still be fast by having tests written before advancing from Stage 2. If those tests are deemed sufficient, it could directly advance to Stage 3. The proposal author is taking on the risk of having to redo the work if further design changes are made. Also, we’re not introducing another point where the committee can, you know, relitigate the design. It’s not like an IETF last call. The advancement from this new stage to Stage 3 will be entirely based on whether the tests are adequate. + +MF: Who writes the test? This is open for discussion. But the champions are already responsible for all of the other parts of the proposal process. 
You know, collecting data and use cases, interacting with the community. Writing spec text. They should write tests too. We can talk about that, and also people may be willing to help with writing the tests. Ideally, they would be blessed by the champion group to do that work. + +MF: Do the tests need to be perfect? No. We will have fuzzy ideas of what is adequate. And that will be different for different proposals. So more complex, more risky proposals should probably have a higher bar. We also may have some objective measures. I am not proposing we design any of those today. + +MF: What do we do about stage numbers? We have natural numbers right now for our stage process. And as I was saying in one of the earlier sessions, the internal process has leaked to the community and now has compatibility concerns. So I don’t know what to do. But for now, for the purpose of this discussion, if you need to call it something, call it Stage 2 and three-quarters. + +MF: Another question: if we do this, should we apply this retroactively? There’s two possible options here, maybe more. We could reconsider all the current Stage 3 proposals that don’t have sufficient tests or no tests. We have a list of those. At the least, we could consider any proposals that have advanced to Stage 3 at this meeting with no tests to be at this new stage instead. Those are options. There’s probably more options. That’s my presentation. + +DE: So I am a huge supporter of this proposal. MF and I have been chatting about this. I want to emphasize the polyfill and transpiler implementation, and also add engine262 implementation which is also more approachable. I think it’s really valuable to do these kinds of implementations earlier in the proposal process. I think Leo Balter earlier made a proposal that we move such implementation earlier in the stage process. 
I wouldn’t want to make it a hard requirement, but I am a little discouraged when people say it doesn’t make sense to do an implementation in one of these, you know, compiled environments yet for testing because it’s not at Stage 3 yet. Some implementations take different amounts of effort. + +DE: Overall the idea here is that we want to get to a point [at Stage 2.75] where the committee has decided on the design, and now we’re trying to satisfy the next objective criterion [tests for Stage 3]. I think this [making decisions by consensus which are not just major stage advancement] is a direction that we have been moving as a committee in different ways. We have been talking sometimes about how the committee should be able to decide something by consensus, not just a stage advancement, because otherwise we end up kind of shoehorning everything we want to decide into stage advancement, which might be appropriate in some situations and less so in others. So, yes, +1. + +NRO: Yes, so for example, tools like Babel often implement proposals before Stage 3. We used to do it at Stage 1 for some proposals; that was a mistake, and we are aware of the risk. We had all kinds of flags to switch between different proposal versions. It ended up being a lot of work. But that’s a risk we want to take, to let users test proposals as soon as possible. And this would not change for us, even if there is any suggestion about when things should be implemented. + +PFC: I wanted to support this point. I'll point out that if there is an implementation, even an incomplete one, either in an actual JS engine or polyfill, it makes my task as a maintainer of Test262 easier to review a large test pull request for a new proposal. 
I would encourage – especially if we recommend engines not start implementing until after the tests are merged – that we lean all the more on recommending that proposal champions implement a polyfill, a transpiler, or engine262. + +USA: Right. At the cost of repeating something that has come up multiple times, especially in the context of TG2: whenever we go to Stage 3, it’s awkward if there are no tests. With the expectation that people are supposed to implement, most implementations wouldn’t be comfortable implementing unless there are tests to test the implementation against. So it introduces this awkwardness and lockstep that’s not ideal. Using the stage process to resolve this makes a lot of sense. So I strongly support this. + +SYG: I am going to respond to USA here. It’s not true that implementations are not willing to implement if there are no tests. We are used to the default state being no Test262 tests; we write our own tests, and that was a motivation for staging in Test262 – hopefully we can contribute to that. But the tests that implementers write are different than the ones that someone aiming for spec coverage will write; there are missing parts. So yeah. Maybe you were thinking of Chrome’s policy that we don’t ship anything without Test262 tests, for both in the form and JS . . . but just a quick correction: I think most implementers would implement without the Test262 tests. Going to my topic: I support this proposal. The thesis that MF put forth would, in practice, save a lot of time. 
To anticipate some of the upcoming questions: it’s true that in the course of implementation, relitigation sometimes happens when things are discovered via testing or implementation, but that is a different kind of relitigation than what MF alluded to here. What is supposed to be frozen at Stage 2 and three-quarters are independently motivated design changes to the proposal, ones that are not purely reactive to implementation difficulty or impossibility, or bugs that are discovered. It’s already the working mode today that the freeze point is Stage 3, and the changes we make after that are supposed to be the reactive changes due to things that we couldn’t have foreseen without doing the work of implementation. Michael is saying, and I agree, that some of that class of things can be pulled up and discovered without implementation, if tests were written. You are in a different mode of thinking when you are not designing something, but taking something specifically written down and writing tests for it. When you engage at a line-by-line level, you see different things. And that I think will be strictly helpful. It could in theory add more work, in some pathological cases, for example if a lot of tests were written and all wrong. But that risk exists today, and I trust the champions and the delegates here to make an effort to not do that. So all in all very supportive. Thanks for presenting. + +MLS: I want to reinforce what SYG said. Implementers do write tests and also correct existing tests, and there’s a chicken-and-egg issue: if you are writing tests in the vacuum of no implementation, including a polyfill, then it’s conjecture. I believe both test writers and implementers look at the spec in detail, line by line, as SYG says, when they write either the tests or the implementation, to make sure they get things right. I will talk about that further down below. + +USA: Yeah. 
I wanted to respond to one of the points regarding slowdown in the process due to this. One thing we can do is to make sure that the process accommodates smaller proposals and lets them pass through quickly. In the case of larger proposals, think about it practically: because of the awkward back and forth that needs to happen, with people getting blocked on different things, is this actually making things faster, by making the process more streamlined, rather than slowing things down? + +DE: Michael, I don’t know if you want to reply first to that, but I like MLS’s comment and agree. The cyclical analysis makes sense. It can start with either tests or implementation, but one way to bootstrap that would be to develop a polyfill or transpiler or engine262 in concert with tests, maybe developed in an informal mode, incrementally, maybe by the same group. I think if proposal champions had the bandwidth to do that, it would give implementers a useful resource. There is a nice base. 
Frank has spotted a lot of things that were news to us in the champion group and wouldn’t necessarily have been uncovered by the tests we were writing. So yeah. In general, I support this. But I think probably we should allow for a bit more blurring the larger the proposal is. + +DLM: I agree with the points that SYG and MLS raised. Implementers will write their own tests, and they are not the same as those written for Test262, but I still think having good Test262 tests is very helpful for an implementation, and the SpiderMonkey team is in favor of this proposal. Or at least of having more scrutiny of what test262 tests have been written prior to Stage 3. + +WH: So in this proposal, we’re adding a new stage. At which stage will we seek committee consensus to get the reviews from reviewers? + +MF: So on the slides here, we have that right now: advancement from Stage 2 is when that happens, and that remains the same. + +WH: Okay. So the reviews will be done – you said at Stage 2? We have Stage 3 — + +MF: When a proposal is Stage 2, these reviews are done in preparation for advancement to this new stage. Today, when a proposal is Stage 2, these reviews are done in preparation for advancement to Stage 3. + +MF: If you’re in favor of the new stage being the stage we call Stage 3, we can change the name from what I have here to something else. + +WH: Yes. Something we are doing all the time in the committee is advancing things to Stage 3. We seek reviewers for Stage 3. We should keep it that way and have a Stage 3¼ for when tests are done. + +MF: Yeah. I recognized this as a possibility that I considered during the creation of this presentation. The reason why I chose this formulation instead of the formulation you’re suggesting was that I feel like people conceptualize Stage 3 as recommended for implementation widely, not just within the committee, and that was the one I felt would, if changed, cause a lot of disruption. + +WH: Stage 3 is two things: A signal to implementation. 
And an approval from the committee that the spec is final. The thing I want to avoid is, when I ask for spec details at Stage 2, I’m commonly told that we will settle these by Stage 3, so I don’t want to end up at the approval meeting for Stage 3 and find out that it’s too late to fix technical problems because the committee is committed to a feature that hasn’t passed approval because things have been frozen at Stage 2¾. + +SYG: Yeah. I recognize that if we adopt the new model, there could be a period of disruption where accidents get through because people are not used to the new stage being the final, final Stage 2 stage. But in general, I think it’s easier for us, the committee, the 50-odd people here, to change our mental model of what corresponds to final_final_v2. I agree with Michael that the community at large, whether that’s Babel plugin users or others, looks to Stage 3 as the signal to get ready for general availability. There are a lot more consumers of even experimental implementations of proposals, via Babel and the engines, than us in this room, and if we adopt the new stage, it’s more in our interest to keep the external-facing numbers the same. Unless there’s compelling evidence that in fact Stage 3 is not taken that way; but I feel like Stage 3 is taken as “get ready for general availability”, that it’s less risky to start depending on the Babel plugin. I would rather not have to try to teach the community at large that that is going to be different, that the flags are going to carry a new number. I am not sure what we gain from that. Eventually, I think those of us in the room will get used to the new number. + +WH: In your models, the committee would not be advancing anything to Stage 3 during plenaries. Therefore, automatically — + +SYG: I haven’t thought through that detail, whether automatic or not. I don’t think I have too much of a preference. 
My hope is that for a large number of proposals, you will ask for Stage 3 and this freeze stage at the same time; hopefully it will become the norm that people write the tests for the next plenary and then ask for Stage 3. But that’s also a new thing. We don’t automatically grant new stages, so I am uncomfortable saying that should be automatic. But I see the new stage as the signoff stage for design. + +WH: Okay. This muddles things up. What I want is a clear point where the committee signs off on the final design, not something that works just for some proposals. + +SYG: That is the new stage, is my understanding. Other than the number being confusing, which I admit may happen, I am not sure I understand. Is there another concern, other than the new number being confusing? + +DE: I think part of this may come from the idea that design decisions could be made after Stage 3; that muddles the situation. In my mind, we have been operating for a long time such that all known design decisions should be discussed and concluded before Stage 3. And after Stage 3, what we do is go through issues that we discover later due to implementation work. + +DE: So I think that there’s a point where the committee makes this decision. And then Stage 4, although we advance to Stage 4 in the meeting, is basically based on objective criteria. So here, we would base Stage 3 on objective criteria, and Stage 2 and three-quarters is where we make a judgment call. + +DE: I think this should be clear-cut. If anybody tells you “no, you should give your feedback after Stage 2 and three-quarters”, call them on it and say “no, it has to be handled now”. This is the decision point. + +DE: As far as when exactly editor reviews happen, I would be okay either way, preceding Stage 2.75 or Stage 3. There’s going to be a Stage 4 editor review anyway, so it doesn’t make a big difference. 
I don’t think the editor review tends to affect the tests so much. I’d prefer requiring editor review for 2.75, but maybe we could require it for both transitions? Anyway, the idea is that Stage 3 stops being an important decision point. And instead, it’s the new stage that is the decision point. + +WH: This muddles things up even more. + +DE: How so? + +WH: Well, because now I am not sure whether Daniel is saying reviews would happen at a new stage or at Stage 3, and I am hearing both variants. + +DE: Oh. That’s the least important part of it – + +WH: This is very important. + +DE: So I agree with what Michael said, and I think this makes a coherent model altogether, because Stage 2.75 is the part where we do the reviews and draw a conclusion. + +RPR: Okay. Dan, what you are saying, to answer the question directly, is that the reviews happen as entrance criteria to 2.75. + +DE: For greater integrity, an extra review before Stage 3: if there are any intervening changes, then yes, a confirming review makes sense. + +SYG: Let’s be concrete about what this proposal is: for non-implementers and non-test-writers, everything that you used to do for Stage 3 now happens before the new stage. If you do not write tests and are not an implementer, nothing changes except the number to care about; it’s changed from 3 to 2 and three-quarters. + +DE: Yeah. So effectively, we are setting a higher bar for Stage 3 so that people can have greater confidence in Stage 3. + +MLS: Stage 4 is when things are final. I want to make sure that we as a committee understand that. We have had a lot of things that have gone back to Stage 2, and a lot of normative changes for Stage 3 proposals. I agree, Stage 4 is when things are final. + +NRO: Yes. Moving requirements across different stages while changing names, moving what the requirements of a name refer to, can be confusing. 
And like MF said, our internal process ‘leaked’ the meaning of Stage 3. The meaning that people give Stage 3 is that the committee considers the proposal to be in good shape, and unless some concern is brought up during Stage 3, during implementation or testing, it’s considered to be fine. And what we are doing now is saying we have a new stage which comes before Stage 3; for everyone outside of this committee, because most of them don't write implementations or tests, they will have to mentally rename Stage 3 to the new stage. We are trying to improve our process and make things more clear, giving an order to Stage 3, but we are leaking internal refactoring into how we communicate with the rest of the community. And yeah. + +DE: I think the community assigns an understanding of a certain amount of stability to Stage 3. The purpose of this process proposal is to encourage people to refrain from assigning that understanding of stability until tests are written. I think this could be a stabilizing step for the JavaScript ecosystem. So the goal is to encourage people to not renumber things. That’s the motivation for this numbering. + +NRO: Okay. We discussed splitting Stage 3 and moving things around with respect to shipping, even though it doesn’t change anything about our process. It was about how stable Stage 3 should be, or whether changes could still happen, so we discussed giving implementers a signal saying: okay, this is now stable, for real. And again, this is splitting Stage 3 again: it’s just naming a different point after we already consider a proposal close to done. And writing tests doesn’t make the proposal significantly more safe from changes. 
+ +MF: I would argue that there’s empirical evidence that, after writing tests, proposals are more stable. I can point to many examples of that, if you would like. + +SYG: Yeah. I want to +1 MF here. It is true that things are more stable after tests are written. There are small normative changes that just happen, and they could happen ahead of time. + +RPR: Okay. I will just say we have 4 minutes left on this topic, and we have got a very big queue. + +MLS: So I want to point out that this serialization of steps will slow things down. Except for the very smallest proposals, we’ll probably require another plenary meeting between the new stage and the current Stage 3. And as an implementer, I think we would be less likely to implement something if it’s not at the current Stage 3. There’s already a mostly implicit ordering that we have tests before we implement. Now, saying that, quite often the test262 tests are quite inferior. I recently implemented something where there were, I’ll just say, maybe ten syntax tests. When all was said and done, I had close to 200 syntax tests that I wrote before we shipped the feature. So we have to be very careful that we may be slowing things down not only because of process, but also because of the tests that we’re now implicitly requiring: with a tests-written stage, implementers may slow down their implementation. + +EAO: So if you look at this from the point of view of Stage 3, the thing that has changed, if this is accepted, is that the test262 tests are written earlier, and everything else effectively stays the same after approval for Stage 3 has been reached. So I get that. That sounds like a good idea. What I’m not really getting is why we need a new stage explicitly for these tests, which, as presented here, are primarily an action driven by the champions. 
And I see that you do cover this a little bit in the presentation, but the sense I get is that we’re doing this in order to make it easier for issues that arise from writing the tests to have an earlier impact on the spec text and on the implementation itself. So I get that part. But I don’t think that there’s necessarily enough of a reason to add a completely new stage here rather than just making test262 coverage a requirement for Stage 3 advancement. + +MF: So it is expensive to write tests. I recently wrote iterator helper tests; I wrote about 350 to 400 tests. It took me a couple of months, like two months of the time that I can spend on TC39, to write those tests. But iterator helpers was at Stage 3 at that point, so I had the confidence that the work I was doing was not going to have to be redone. For proposals of that size, I cannot make that kind of a commitment until the committee has committed to me that the design is what we want it to be. So that’s the need for this stage. Some people were calling it the frozen stage. That might help you better understand why we need it. + +RPR: So +1 from LCA. + +LCA: I want to comment on that real quick. Can you go back to the slide -- yes, that one, exactly. So I think this makes sense under the assumption that when you reach this new stage, you already have signoff from the committee that things are final. If this is the case -- and I guess this goes back to the question of where we do reviews -- if reviews happen before this stage, this makes sense. If reviews happen later at Stage 3, then we’re not quite sure things are final yet while doing this test development. So, like, I don’t know. And then, for things that we uncover while writing these tests, would they have to go through normative approval in the committee again prior to going to Stage 3, or is this something that can just be applied to the spec? 
+ +MF: My thought was that, similar to how it works today, when we find issues in Stage 3 proposals through tests, we would bring them back as agenda items for committee approval, each individually. + +RPR: We are at time. The queue is quite large. + +MF: Do we have time for an extension? + +RPR: We’d be going into the break. The break must finish at 25 past, so we’ve got 20 minutes between now and then. I think at most we’ve got time for five more minutes, but that should include your summary. + +MF: Okay. Let’s do two more minutes and I’ll try to summarize and do next steps. + +SYG: I sense some confusion on what the proposal actually is. My understanding of the proposal -- correct me if I’m wrong, Michael -- I’ll try to be as concrete and explicit as I can: take the current model. In the current stage numbering, the reviews happen before Stage 3, and Test262 happens some time between Stage 3 and 4. What is being asked is that the Stage 3 stuff, the reviews, all still must happen, and then test262 happens, and then the implementation stuff happens, and then we rejigger the numbers. The reviews are not going to be delayed until after the tests; that wouldn’t make sense. I understand this proposal to be just about separating the two signals that are conflated into Stage 3 right now, which are: one, implement; and two, design is finalized. There is additional work that could ease things if we give the signals “design is finalized” and “ready for implementation” separately. Most of the Stage 3 stuff is about the design being finalized; for us, for people in committee, the thing we care about is that the design is finalized. So all that weight is shifted to the new stage. It’s not going to happen after tests are written, because, yeah, that would be nonsensical. 
CDA: All right, we are at time. I will capture the queue. Do you want to dictate a summary for the notes?

### Summary

Generally positive feedback from the committee that this will help reduce costs effectively, and it should continue to be pursued. No progress has yet been made on naming and numbering. More feedback is requested, offline or online, between now and the next time we talk about this.

## DataView get/set Uint8Clamped methods for stage 1 or 2 or 3

Presenter: Jordan Harband (JHD)

- [proposal](https://ljharb.github.io/proposal-dataview-get-set-uint8c/)
- no slides presented

JHD: It would be great if someone was able to present. Just the link to the proposal repo would be great.

JHD: Thanks. Cool, and then also if you could pull up the proposal repo. Thank you.

JHD: All right. So I was writing some code. I was trying to accept all kinds of TypedArrays and to dynamically dispatch on which kind of TypedArray it was, and I wanted the ability to set little-endian and BigInt values, and I reached for the DataView get and set methods and noticed that one of them was missing. I made a helpful little chart showing the inconsistency. If you can scroll down now, Chris. Yeah, so that’s it. So basically it seems like these were just missing and it would be nice to add them. In particular the set method, because that includes the clamping behavior that, in the absence of it, I have to manually reimplement. The get method is exactly the same as the one for Uint8Array, so it could even be an alias to it or something, but just for consistency, it seems nice to have them all. `Uint8ClampedArray` is certainly weird and mostly for canvas-based applications, but it’s also weird to have the omission. You can go to the spec part now. It’s very simple. It’s just these four lines of spec text, which could be condensed to two if you wanted to be really concise.
Just because of the -- you know, this is the way they’re all implemented, you just have to pass the spec-internal type Uint8Clamped. I’d like to close the inconsistency and add some methods, and there you go. I see a queue topic from Michael.

MF: I think that the getter just isn’t motivated, and I understand the desire for consistency, but I don’t think that’s strong enough motivation. So I don’t think we should do the getter. If we find some motivation later, we could do it in the future, so there’s nothing preventing us from doing that. I’m okay with the setter going alone. I think that should be fine.

JHD: Yeah, I mean, I’m content -- well, I would say either -- even just adding the set method is still a consistency increase. I would like it, so if that’s the case, so be it. But if the -- yeah, I guess I’m wondering what the cost is of adding the get method. Especially if it’s an alias, so implementations wouldn’t even have to make a new function. Michael, did you have any thoughts about that?

MF: I don’t know if I’m qualified to answer that. I imagine implementers might consider it to be a cost. But --

JHD: What’s the cost for you, since you don’t think it’s sufficiently motivated? I’m wondering, what’s the downside to having it, in your opinion?

MF: There would be an additional method that, like, developers can see exists and not know why they might choose it over the alternative that should do the same thing, leading to confusion. Or just an additional entry in autocomplete for things that people don’t need -- not just like some people need it, no person, like, not one person. It doesn’t seem worth it.

JHD: I mean, okay, I just explained why I need it. I agree it may be very, very rare. Okay.

MF: Was your explanation for, like, programmatic access? That’s what you mean, computed property access?

JHD: Mm-hmm, yes. I don’t want to have a special case in my code for one TypedArray type.

MF: Yeah, I’m still unconvinced by that.
Not that I’m opposed to adding it. I just don’t think that’s sufficient yet. If other people --

JHD: No, yeah, that’s fine. I’m just trying to understand your position. Thank you.

PFC: Is there anyone around from the time when DataView was originally added who knows why the inconsistency was left in the first place? That might be useful to know.

RPR: I don’t think anyone’s answering on the history question. So on to Dan Minor.

DLM: So we discussed this internally, and it doesn’t seem super useful to us, but we do agree that it fixes an inconsistency, so we would be okay with Stage 1 or 2 for this one.

RPR: Dan?

DE: About the history, I wasn’t there, but it wasn’t considered a kind of meaningful thing to have the clamped operation on. I think `Uint8ClampedArray` is kind of a hack and we shouldn’t really be building more things on it. So I agree with MF’s point. But I don’t understand the motivation for this yet.

RPR: And Michael Saboff?

MLS: Jordan, have you checked to see what current engines do when you create such a view and then try to get a value?

JHD: I’m not sure what you mean. Like, they all work fine. It’s just that those two DataView methods don’t exist. I can dynamically dispatch -- like, I can have a special case in my code, that’s what I have to do currently, that says if this is a `Uint8ClampedArray` for a get, do a Uint8 get instead, and I have a similar condition in the set path, but in the set path, I say if this is a `Uint8ClampedArray`, I have to manually clamp the value and then set Uint8, because the DataView methods are the only way you can set or get a value with an endianness that is different from the underlying system.

MLS: The reason I am asking is because I’m looking at our code. I don’t know if it’s fully plumbed, but we have clamping functions in what we call our Uint8 clamped adapter that is used to make the view. We’re doing it internally.
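The userland workaround JHD describes might look like the sketch below. The helper names `clampToUint8` and `setUint8Clamped` are hypothetical, not existing APIs; the clamping follows the spec's ToUint8Clamp semantics (clamp to [0, 255], with ties rounding to even).

```javascript
// Hypothetical workaround: DataView has no setUint8Clamped, so clamp
// manually and then call the existing setUint8.
function clampToUint8(v) {
  // ToUint8Clamp semantics: NaN and negatives become 0, values over 255
  // become 255, and halfway values round to the nearest even integer.
  if (Number.isNaN(v) || v <= 0) return 0;
  if (v >= 255) return 255;
  const f = Math.floor(v);
  const diff = v - f;
  if (diff < 0.5) return f;
  if (diff > 0.5) return f + 1;
  return f % 2 === 0 ? f : f + 1; // ties round to even
}

function setUint8Clamped(view, byteOffset, value) {
  view.setUint8(byteOffset, clampToUint8(value));
}

const view = new DataView(new ArrayBuffer(1));
setUint8Clamped(view, 0, 300.7);
view.getUint8(0); // 255

// Matches what a Uint8ClampedArray stores for the same input:
new Uint8ClampedArray([2.5])[0]; // 2 (tie rounds to even)
```

This is the clamping logic every engine must already contain, since indexed assignment into a `Uint8ClampedArray` performs it; the proposal would simply expose it via DataView.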
JHD: Yeah, I mean, because you can set a value directly into a `Uint8ClampedArray` with a property access, like, bracket zero equals, the clamping logic must already be in every engine to be compliant. It’s just that it’s not directly exposed as a DataView method with the ability to specify the endianness.

MLS: I haven’t looked at the plumbing all the way through, but it shouldn’t be too difficult to add.

JHD: That’s my expectation, so I’m glad that’s yours as well. Dan, you had said that you don’t understand my motivation. Is there more that I can explain to help you understand?

DE: My understanding is that the motivation is about fixing the inconsistency, about filling out the grid, and that any utilities you expose for TypedArrays and DataViews, that, you know, kind of map and be fully expressive. I guess I would want to hear -- please correct me if I’m misunderstanding, I guess I’m wondering what the next step is. Why would somebody want to call your library in a way where these methods are used?

JHD: Yeah, I honestly don’t have an answer to that question.

DE: Okay.

JHD: I certainly don’t use Uint8ClampedArray directly myself. I’m simply wanting to make sure my utility supports all TypedArrays in its code path.

DE: So that’s valid, and I’m glad you’re being honest about this. I think this is the kind of thing that we should have answered before going to Stage 2.

JHD: Okay. For both the get and the set?

DE: Honestly, one of them would be enough, for me.

JHD: Okay.

DE: Just the clamped DataView stuff at all.

JHD: Okay. All right. The queue is empty. So it sounds like I have support for Stage 1.

KG: Well, you have heard that no one objects to Stage 1.

JHD: Right.

KG: We have a relatively new requirement, that we said we were going to try to follow, that to advance, any proposal needs at least one explicit second. So I would like at least one explicit second, that someone else is in favor of this advancing.
I’m not opposed. But, like, if you are literally the only person who actively thinks this should advance, I think that is probably not enough.

RPR: Well, let’s see what Chip says.

CM: Yeah, I support this. Just seeing those two red Xs in among the green dots makes me squirm.

MLS: And I support it because there are a lot of graphics usages for something like this.

DE: Could you elaborate? What use do you see?

MLS: (inaudible) for image processing, so, yeah, there’s a lot of -- you know, there’s 4 bit and stuff like that. I don’t want to get into a SIMD discussion, but clamped arithmetic on clamped arrays makes sense.

WH: I also support this.

RPR: All right. Jordan, you have lots of support.

MF: MLS, can I get a clarification? So you’re supporting the setter, right? You have explicit support for the setter, but not necessarily the getter? I don’t know if you heard me.

MF: It’s the same thing as the unclamped one.

MLS: Yeah, I don’t think we should, you know, leave that red check mark or X in there. A setter and getter on clamped arrays -- you know, I think both of them could be supported.

NRO: Yeah, I think it’s okay to support this, but just to clarify: typed arrays already exist, and the DataView methods, specifically when it comes to Uint8 -- it’s a single type -- these methods are only used when you have mixed types of data in a single buffer, and that doesn’t happen much when working with graphics stuff.

MLS: Agree.

RPR: All right, we’ve got one minute left, so, Jordan, it’s probably worth explicitly asking for stage advancement.

JHD: Okay. So I’d like to explicitly ask for Stage 1, firstly. I’ve heard some explicit support, and no one said they want to block that.

DE: Yes, to be explicit, I am not blocking Stage 1.

JHD: Right. So I’ve heard that.
And then it sounds like, just to consider this, like, the summary: it sounds like I have Stage 1, and in order to get Stage 2, I was asked to provide explicit motivation -- not why I need this in my utility code, but why somebody would be calling my utility code with a `Uint8ClampedArray` -- and the other thing I need to provide is better motivation for the getter.

JHD: Does that sound like I’ve heard all the input?

DE: Yeah, that sounds good. I think in the summary, you can also include the ideas about use cases and the main points of the discussion there. It would be good.

JHD: Okay.

RPR: Okay, yeah, so, please review that summary in the notes, Jordan.

JHD: Okay.

RPR: Congratulations, you have Stage 1.

JHD: Thank you.

RPR: I’m all for the applause. Please do remind us if we forget. All right. Next up is Kevin Gibbons, who wants us to stop coercing things.

### Summary and conclusion

Detailed discussions were had and Stage 1 was achieved.

## Stop Coercing Things

Presenter: Kevin Gibbons (KG)

- [slides](https://docs.google.com/presentation/d/1m5R5J98W6adegghgkAlbSuFgAYJDT52yyFVdAqLjm00/edit)

KG: So I should say before we get started, this proposal -- well, it’s not a proposal per se. It wouldn’t add anything to the language. I am just suggesting that we change the design principles that we have going forward. I’m going to enumerate a number of distinct things that I think we should change, all on the same theme. For the purposes of organizing the discussion, I’m hoping to first get buy-in on the general project, and then discuss the specific proposals in turn. I know that some of them will be more controversial than others, and I don’t want to spend all of our time on the controversial ones before we get time to talk through the earlier ones. So this might require some jumping around in the queue. Okay, that said, please contemplate this piece of code.
I claim it is confusing and bad. If you don’t know, this gives you the first element of the array. This would of course also give you the first element of the array if you passed the string “end” or an object literal or any number of other ridiculous things. This is because the general philosophy in the language is to try to coerce arguments to the appropriate type, and the type of the argument for `at` is integral number. So the coercion works by taking the argument, coercing it to a number -- of course that gives you NaN -- and then coercing that to an integral number. And if you coerce `NaN` to an integral number, you get zero. This is just a fancy way of doing `.at(0)`. I think this is bad. We don’t have to keep doing this. It is the precedent. It is how we have always done things. Precedent is extremely compelling, in general. But for sufficiently bad ideas, we can break with precedent, and my position is that this is a sufficiently bad idea. In particular, passing something of the wrong type is almost always going to be a bug. When it’s not, it’s going to be confusing for readers. And bugs should be loud, not quiet. You should not get the wrong answer. You should get an error. Always, always you want this.

KG: So I will have a number of concrete suggestions for what I mean by stop coercing things. We don’t have to take any of them in particular. I know that certainly some of them will be more controversial than others, and I want to make sure that I’m not proposing these be hard and fast rules, just that they be the starting point for the design of anything new. And in particular, if you are making a new proposal and you want to deviate from something that we agree on in this presentation, that should be something that you come to the committee and you say here is why I think it makes sense to be different in that case. In the past, the default has been to do coercion.
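The coercion chain KG walks through can be observed directly:

```javascript
const arr = ['a', 'b', 'c'];
// 'end' → ToNumber → NaN → ToIntegerOrInfinity → 0, so this is .at(0)
arr.at('end'); // 'a'
arr.at({});    // 'a' — any argument that coerces to NaN becomes index 0
arr.at(0);     // 'a'
```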
So someone would have needed to come to the committee and say this is why I don’t think it makes sense to coerce, which has been the case, but I want the default to be not coercing, so when you want to do coercion, we should have to make that case explicitly to the committee. I have a bunch of concrete proposals. I think some of these are controversial, others less so. I’m going to run through these fairly quickly and then open it up for discussion, first of the general principle and then of each of these specific topics, hopefully in approximate order. So I’m just going to dive into these particular cases that I would like us to change.

KG: The first is stop treating NaN as zero. This is, I think, ridiculous, and in particular, we have already started doing this. So in iterator helpers, in the take and drop methods, in Temporal’s Duration methods, and in the Stage 2 iterator.range proposal, we have made the decision to treat NaN, and anything that coerces to NaN, as a RangeError rather than coercing to zero, as it does in other integer-taking places in the specification. This slide has some examples of code in the language today that does do this coercion. Of course I am not proposing to change any of these. I just think that all of these are ridiculous and I would like new code to not have the behavior of the code on screen, despite the inconsistency.
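The legacy NaN-as-zero behavior described here is easy to observe in existing APIs (the newer iterator-helper `take`/`drop` methods instead throw a RangeError):

```javascript
// Legacy behavior: NaN quietly becomes 0
'ab'.repeat(NaN);       // ''        — the repeat count NaN coerces to 0
[1, 2, 3].slice(NaN);   // [1, 2, 3] — the start index NaN coerces to 0
'hello'.substring(NaN); // 'hello'   — same again
```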
So this isn’t 100% consistent with the web platform; we would be going stronger than the web platform, or at least stronger than existing APIs on the web platform. My hope would be to change Web IDL so that in the future they would be consistent with this full principle, so that undefined is treated as missing and both of those are errors. Here are some examples of code today -- again, I am not proposing to change the behaviour of this code, but I think these are all silly. If you call something with too few arguments, or you pass a property of an object and that property happens to be missing, maybe you made a typo, you shouldn’t, like, get an answer. If you call parseInt and you pass it undefined, it shouldn’t try to parse the string "undefined" as a number. This is silly.

KG: And a more general version of that is don’t coerce between primitive types in general. If the user wants to pass a number and they have a string, they can coerce the string to a number themselves. We shouldn’t do it for them. Of course, with the exception that when you have an optional parameter that has a sensible default value, then `undefined` is a reasonable thing to pass there, to mean I want the default value for the parameter, but that’s different from coercing. Some examples today: if you call `parseInt()` and pass a literal `null`, it will attempt to parse that as a number in whatever base you specify.

KG: The `Math.max()` example is a little more subtle. It’s something that you might think is reasonable, but if you think about it a little more, getting a value out of `Math.max()` that is not the same as one of the values you put in is weird. Like, max is generally considered to be “give me the one of these which is largest”, but it doesn’t do that. It coerces them to numbers and then gives you the largest number after coercion, which is just never going to be the thing you want, or at least never the thing I want.
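A few of the undefined and cross-primitive coercions described above, runnable as-is:

```javascript
// undefined silently coerces instead of throwing:
'undefined'.includes(); // true — the missing argument becomes the string 'undefined'

// primitives coerce between each other:
Math.max('10', 2);      // 10 — the string '10' becomes the number 10
parseInt(null, 24);     // 23 — null becomes 'null', and 'n' is digit 23 in base 24
```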
KG: Similarly, there are APIs that take integral numbers, and right now we round, or to be more precise, we truncate. That’s not true universally, but it’s true almost universally. There are, to my knowledge, two places in the language right now that don’t do this, which are the array constructor and the magic length property on array instances. In both of those places, if you pass a non-integral number, you will get a range error. But everywhere else, for example, all of these things, it will truncate. So for `Float64Array(1.5)`, the `Float64Array()` constructor differs from the array constructor in that the `Float64Array()` constructor will truncate. It will not give you, you know, a 12-byte array or whatever. And I think a range error would be more appropriate in these cases. The Temporal Duration constructor, in Stage 3, also has this behavior of throwing on non-integral numbers, although in their case, it’s kind of necessary because 1.5 seconds is a totally reasonable duration and it should not give you a one-second duration.

KG: Okay, and then the last two are perhaps the most controversial of all. We could just not coerce objects. We could just not invoke the toString or valueOf methods or the Symbol.toPrimitive methods. Just stop doing it. Like, if the user has, I don’t know, a URL object they want to pass to a string-taking method, they can coerce it to a string. It’s not hard. It’s probably going to be clearer for readers. Just do the explicit coercion if you want the coercion. And then here are some examples I think are particularly silly. If you, you know, try to join an array by an object, you will join with the famed [object Object]. If you try to pad a string with a function, it will start padding the string with characters from the stringification of that function, which, like -- it’s just very silly.

KG: And if we can’t do objects, at least we can do arrays.
I think we can agree that while there are some objects that have reasonable toString behaviors, arrays are not like objects in general. They are, like, a very particular kind of object whose toString and valueOf are not generally supposed to be overridden and are not generally sensible things to use when passing to a string-taking function.

KG: The `Math.max([])`/`Math.max([12])` examples are my favorites. The first will give you zero, which you might think is reasonable, although the actual thing you would want is negative infinity. Similarly, you can pass an array containing a single element to `Math.max()` and it will give you that element. Of course, it breaks as soon as you pass an array containing two elements. But it’s kind of subtle what’s going on there. Similarly, if you try to construct an ArrayBuffer and you pass a single-element array, it will create a length-12 ArrayBuffer, because it will coerce that array to a number, which goes via toString: the stringification of that array is the string "12", the numeric version of that is the number 12, and the ArrayBuffer constructor expects a number, so now it’s making a length-12 ArrayBuffer. I think while there’s at least some case for coercing objects, there’s no case for coercing arrays.

KG: There’s my not even a little bit modest proposal. So I imagine, yes, we do have a bunch of stuff on the queue. So like I said, I’d like to start with more general topics about coercion in general before we get to any of these concrete things. So let’s get to it. I guess Jesse is first.

JMN: I was just wondering whether we have any data about these coercions in the wild. I mean, we sitting here in this room are more sensitive to these things, and I think they strike us as odd, but I wonder if these really show up out there.

KG: I have no data. I also don’t know what the value of that data would be. Like, what would we be trying to learn from that?
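For reference, the truncation, object, and array coercions from the slides above are all observable in today's language:

```javascript
// Truncation: typed array constructors silently truncate a non-integral
// length (the Array constructor would throw a RangeError for the same value).
new Float64Array(1.5).length;      // 1

// Objects coerce via toString:
[1, 2].join({});                   // '1[object Object]2'

// Arrays are objects too, so they stringify and then numberify:
Math.max([]);                      // 0   — [] → '' → 0
Math.max([12]);                    // 12  — [12] → '12' → 12
Math.max([1, 2]);                  // NaN — [1, 2] → '1,2' → NaN
new ArrayBuffer([12]).byteLength;  // 12  — [12] → '12' → 12
```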
JMN: Right, I also don’t know what the value of that would be directly. Just trying to make some kind of data-driven decision here.

KG: The thing is that I can’t imagine an answer to that question that would actually inform a decision that I would make here. If it’s happening a lot, that suggests that a lot of people have errors, so we should forbid it. If it’s not happening a lot, that suggests that, like, it’s not a behavior that is worth relying on, so we should forbid it. There’s no answer to that question which would affect what I want to do here.

RPR: Shu?

SYG: I don’t have metrics. I think there are anecdotes about some coercions being security issues. The object-to-primitive thing runs user code, which is extremely surprising, and I don’t think anyone actively, like, legitimately uses that, but I have no data for that. But I’m pretty sure there are exploits -- you know, it’s usually a fruitful avenue of exploration to see if the engine forgot to revalidate stuff after some point because, you know, they forgot to check -- they forgot that this can run arbitrary code because of a `valueOf` or a `toString`, and that’s a problem.

RPR: Chris?

CDA: Yeah, you have a lot of examples here of some pretty obviously silly coercion results, but the one that I think goes a little too far is the one on coercion of primitives. It seems heavy-handed to throw for primitives that would satisfy loose equality. So, for example, in your `.at` example with `['a', 'b', 'c'].at('start')` -- it sounds like if I’m passing a string which is an integer, you would want that to throw as well, and I think that that’s a little bit too much for me.

KG: Can you say more about why? I just can’t imagine why you’d want that.

CDA: Usually I see this in a serialization layer, so, for example, we’re getting back some JSON where somebody is sending a string instead of a number. I would be annoyed if I had to coerce that myself.
KG: But, like, if I were reviewing this code, and I was like, oh, the thing that this API returns is a string, and you’re passing it to an API which takes a number -- I’m going to be confused by the code that you wrote. So, as a person reading the code, I want you to do that coercion explicitly.

CDA: Yeah, I mean, in the example here - ['a','b','c'].at('2') - I’ve literally hard coded a string, but that would presumably be an object property. Maybe we don’t have control of some aspects of it. I don’t completely disagree with you, but I think it’s okay in this example to accept, you know, something that can cleanly be parsed as an integer.

KG: How do you feel about true and false coercing to 1 and 0?

CDA: Oh, I think that’s always a fun one. We get in a lot of trouble with the truthiness coercion. So, I don’t want to paint with too wide of a brush here, because, again, I agree with stop coercing in many of these cases. But something a little more straightforward I think should be allowed.

KG: All right. Well, I see we have more things on the general topic, so let’s come back to the concrete primitives after getting through some more general stuff.

RPR: Nicolo?

NRO: Yes, Kevin already asked this question, why would you want this to work, so let’s just skip it.

RPR: And Daniel?

DRR: Yeah. I mean, there was a point about how sometimes you want the primitive coercion to work, and I don’t think string-to-number is typically the one that I want, but typically what you’ll have is something like, oh, this takes a certain unit, but really it’s round-tripped back as a string, right? And so internally it becomes a string or something like that, but you might want to be able to just pass in a simple primitive and then just say, yeah, it turns into a string. That said, for most APIs, I would prefer not to do that sort of coercion.

RPR: Shane?
KG: Actually, can we come back to this particular one later, because it sounds like it’s talking specifically about 5 or 5A and not the general topic of coercion.

RPR: You mean topic 5?

KG: No, I mean Shane’s item is, yeah, number 5 rather than coercion in general.

RPR: Is that okay, Shane? Okay. Michael?

MLS: So if I were to design JavaScript from scratch, I would agree with every one of these rules that you have. Unfortunately, we have history and developers do a bunch of different things. I’m wondering, if at some point some APIs don’t have autocoercion, whether that would confuse developers, because they have a full expectation of, yes, you can use a string that coerces to a number to do stuff. And now, some of the examples you give, I totally agree with, but for others I could see that people would use the coercion even if we in the room don’t think they would.

KG: Yeah, so I think this is the strongest reason not to do this. My position is that, yes, some people would definitely be confused. Some fraction of developers have internalized that coercion is just going to happen for everything and will be surprised if they can no longer rely on it. I think that’s actually going to be outweighed by all of the developers who don’t know coercion is happening or don’t understand its rules, and who, upon encountering code written by a developer who is relying on coercion, are going to be confused. So I think that there is already confusion inherent in the current system. It’s just that the thing that the language does is confusing, and people relying on it is confusing, and so if we can move to a world in which people don’t rely on it, then there will be no confusion. The developers who previously were relying on it will stop doing so.
They will be confused, perhaps, and have to learn that they need to stop doing so, but then they do stop doing it, and then no one ever has to know that there’s this inconsistency, because if you just never do it, you don’t run into the inconsistency.

MLS: I think you would agree that we can’t do this to existing primitives, or existing APIs?

KG: Yes, absolutely.

MLS: We have to grandfather a whole class of coercions that have been supported for eons, and then we have new APIs that don’t have them, and I think that that will continue to be a confusing aspect of the language.

KG: I agree that that would continue to be confusing to the extent that people were learning that some things did coercion and then relying on it. My hope is that people just wouldn’t, that people would learn, oh, this takes a thing of a particular type, and the only way they would run into it is if they had an error such that there is something that ought to have given them an error, according to one of these rules, and they didn’t get an error, and then they will be surprised by it. But that’s already the case for developers who haven’t learned that everything does coercion; they are often surprised when they pass, you know, an object that has a missing property. Like, they typo a property on an object and then they don’t get an error. They are already surprised. So I think the surprise is inherent and the inconsistency that we would be introducing doesn’t make it particularly worse. Maybe it makes it a little worse, but it makes it better in other ways that I consider to outweigh it.

MLS: Okay, and I think that’s a judgment thing that the committee would have to decide.

KG: Absolutely.

RPR: Shu?

SYG: I agree that that is a concern. Speaking as a supporter of this in general, our sphere of influence is just 262, so not only would we be grandfathering in our own historical APIs -- what are your thoughts on, you know, web API stuff and Web IDL and Node APIs?
Are we going to try to effect a larger change, or are you content that if we can get this as a design principle for new 262 things, it will be enough impact that it’s worth the inconsistency -- not just at a fixed point in time for us, but also with possible future things from other standards bodies?

KG: Yeah, so I would be content if we just did it in the language, but certainly my hope would be that we do this elsewhere. I know, for example, that DD said he would like Web IDL to treat undefined and missing arguments identically. Right now there’s this inconsistency between the web and JavaScript, in that JavaScript treats missing arguments as being undefined and coerces undefined, and Web IDL throws on missing arguments and coerces undefined. So my hope would be to move Web IDL to also agree with these principles in general. But, you know, maybe we wouldn’t precisely agree, since we already don’t precisely agree; I wouldn’t regard that as fatal.

SYG: Yeah, agreed. Happy to hear that it’s in scope for you. Someone’s got to start it somewhere, so TC39 is as good as any. I’d like to see the scope be ambitious if we do this. So thanks for taking this.

KG: For sure.

RPR: Dan?

DE: MLS made a very good point that we have this tradeoff between existing versus future developer mental models -- developer mental models have already been developing -- and different APIs having different conventions. I think this will be the major thing for us to weigh, and it’s just going to be a process for us to make this judgment as we learn more about this space, so I’m glad that KG is bringing this as a specific topic rather than sort of making ad hoc decisions on particular proposals, as he noted we’ve been doing so far.

RPR: Jordan?

JHD: I think I’ve seen Node make a lot of changes away from coercion, basically following the spirit of Kevin’s presentation here. Yeah.
I have actually run into it in a hard way, where I was depending on a tool that was relying on the coercion and Node 20 broke it, and so I had to, you know, get that tool to upgrade to no longer pass a coercible object into the Node API. And although that’s annoying for me personally in this case, that’s spiritually the right move to make, because, as Kevin indicates, I agree that coercion is almost always masking a bug.

RPR: Shane?

SFC: Just noting that, like, this is the type of thing that I certainly would hope that TypeScript can help find. Like, you get compile errors when you’re building your code. Any time there are errors that happen at run time, it’s not really great. It’s not a great developer experience. It requires having a code path or a test case that actually evaluates that path in order to actually see the run-time errors. So, you know, although we could, like, try to make some changes in this area, it’s not clear that, like, it’s going to really be the best way to solve it and actually teach and prevent these issues from happening.

KG: I guess I have precisely the opposite intuition about the effects of TypeScript here. To the extent that people are using TypeScript to catch errors at compile time, that’s great, and what that indicates is that the language ought to, like, not try to do something else in those cases that are being explicitly excluded from the domain of valid programs by using TypeScript -- the whole point is that this is supposed to be invalid if you are using TypeScript, and so I feel like for a TypeScript user, the thing that you would want would be for this to be invalid at run time as well as at compile time. Of course if you’re not using TypeScript, it’s not going to trip you up. I feel like in both these cases you want the run-time error.

NRO: Like, this is the opposite.
Thanks to TypeScript, the community is now, like, ready for this change, because TypeScript -- static typing in general -- helped to change that expectation: you should not coerce things implicitly, and coercion should be explicit. Whether it’s an actual coercion or just `as number`, pretend it’s a number, there’s still an explicit form of coercion happening. And, like, nobody, I believe, ever complained that TypeScript doesn’t allow you to pass a string to a function that expects a number, even if that function internally could coerce. + +DRR: So we have people complaining all the time about how, oh, TypeScript should allow me to pass in anything here because the spec just says that it gets coerced into a string anyway. And this is an example of where TypeScript tries to follow the spirit of the API, or at least our interpretation of the spirit of the API, rather than what actually happens at run time. And so in cases where, you know, we believe, yeah, this probably shouldn’t have taken anything other than a string or a number or whatever, we’d prefer to type it that way just so that things don’t get, you know, litigated on our terms, on our types. It makes it a little bit clearer not just what our intent is, but what the API intent is as well -- what the intent of the committee and the platform is. So that’s my two cents on that. + +RPR: Hax says TypeScript at least can’t solve case 4, which is the one to stop rounding. end of message. + +KG: Yeah, that’s an excellent point. + +RPR: Shane? + +SFC: Just to reply again to what Kevin said: if TypeScript helps at least in some cases -- clearly not everywhere, based on the comments that were just made -- to prevent these degenerate cases, then whatever behavior we have in these degenerate cases, the important thing is that it’s a well defined behavior, and we can sort of, you know, argue over whether it is better to do coercion or better to throw errors. Maybe it is better to throw errors. I’m not saying it’s not.
But, like, as long as those edge case behaviors are well defined, the best use of our time is to not spend a lot of time arguing over them. + +KG: So my position is that not everyone uses TypeScript, and TypeScript is also explicitly not trying to be sound. So these definitely are things that do happen in real life, in real JavaScript programs, even TS programs. And it is worth trying to provide a good experience for users, not just trying to provide some behavior. + +DE: I agree with everything Kevin just said, and further, isn’t the goal of this committee to discuss and focus on all the random edge cases and just argue about them for a long time? + +RPR: Philip? + +PFC: Okay. Yeah, I’d recommend that whatever conclusion we get out of this -- either to stop coercing things or to continue following the precedent of older APIs -- we make an explicit recommendation about that for new proposal authors; otherwise we’re going to continue to get ad hoc behavior, and I think nobody will be happy with that. For example, when Temporal was in Stage 2, I think all of the champions implicitly assumed that we ought to follow the precedent of older APIs, so we used `toInteger()` everywhere, and then it came up, you know, during Stage 3, with feedback from people trying out the proposal in the wild, that, like, hey, here is a case where you passed a non-integral number in and it silently does the wrong thing, and this is really weird. I’m sure if we look back over the Temporal presentations that we brought to committee, you can see that we’ve spent a lot of champion time and a lot of committee time talking about these weird cases and ultimately deciding that we wanted to prohibit them. So I think in order to save that time for other people working on other proposals, we need to have a very clear recommendation about whatever comes out of this discussion. + +RPR: Michael? + +MF: So my topic mainly, I think, concerns number 3 on your list.
I think that in code that I see that’s not my code, it very commonly does, like, intentionally rely on some of those implicit coercions between primitives. Like, I would be surprised not to see a case like the `at` example you showed, where you were passing a numeric string. That happens in code all the time, where that value may be coming out of the DOM or something -- it’s the value of an input element. And, you know, it’s very, very uncommon for me to see that converted to a number explicitly. My question is: do the developers who are writing this code know, like, best practices for how to do this? There are a lot of ways to convert a string to a number currently. There’s `parseFloat`, `parseInt`, there is the `Number` constructor and unary plus, and others I’m forgetting. Will we have an education problem, with different people learning how to do the conversion and their slight differences, and is there risk in which patterns become popular? Maybe the popular patterns have their own issues, their own undesirable degenerate cases. I’m not sure. + +KG: So in a world before TypeScript, I might have been worried about that. I think the modal JS developer now uses TypeScript, and as DRR was just saying, TypeScript doesn’t let you pass wrongly typed things even if they’re going to get coerced. So I think, to whatever extent this was a problem, people have had to learn how to do the coercion, and I don’t think that has caused any problems. We are already living in the world in which people have had to learn how to coerce things so they can get their type checker to pass, and it’s just been totally fine. So I’m just not worried about that. + +MF: Do you have data to support this claim about the modal -- + +KG: Yeah. I can pull up some surveys. But I don’t think it’s a good use of time. I mean, you can Google them just as well as I can. + +MF: Yeah, I think if that is the case, that would help me be more convinced.
I wasn’t of that belief. + +KG: I think it’s in excess of 50%, at least of people who answer surveys, which of course is not everyone. But certainly new developers are generally introduced to TypeScript pretty early these days. + +MF: Okay, otherwise I want to express my support for the rest of this proposal for sure. + +RPR: Shu? + +SYG: So I support this proposal, like I said before, but mainly -- or largely -- I support it for number 5, because I want to go more into depth on the security issues. It is a problem for security, in the browser security sense: argument coercion calls arbitrary user code, via `toString` or `valueOf` most of the time, violating assumptions in implementations of built-ins and tripping up assumptions in optimized JIT code. We can probably go pretty far even with just saying we should stop coercing things for anything related to TypedArrays and ArrayBuffers, because most of it is around detached stuff, and now that I’ve added resizable buffers, I’m sure there will be security bugs around resizable buffers as well, because you can -- not trick, but construct arguments that do very surprising resizes and detaches. And that’s motivation itself, to me, to support this proposal, especially for number 5. Some of the other ones are more judgment calls and there’s less black and white there. I think it’s very black and white for me that object coercion is a security problem and we should do something about it. It would be nice if we stopped coercing. We could do other things, like change the coercion order, but that is strictly inferior to this proposal.
And I wanted to close the comment with: you know, we should remember that this behavior exists probably because, when JS was designed, it was a better principle for user experience for your web browser and your scripting language to keep on trucking in case of errors. Websites were smaller, interactions were much more limited and novel. Now we are in the era of large, sophisticated web apps, and we know that JS has not been up to the task of scaling to development of that kind of software. TypeScript has filled that niche because there was, you know, real, legitimate demand for scaling to that kind of software engineering, and we should own up to that instead of keeping the keep-on-trucking motto, which I think no longer serves us. + +KG: I completely agree. + +RPR: Nicolo? + +NRO: We should try to clean this up in existing built-ins as much as possible -- for methods we introduced recently, we should check on a case-by-case basis whether we can change them compatibly. For example, for the array methods added in the last two years, it’s unlikely people are already relying on weird coercion in those cases. + +KG: Yeah, I agree, that’s probably true. Although for the change-by-copy methods I’m less interested, just because they are direct copies of existing APIs, and I think when you are just adding a new thing in the place of something that already existed, there’s a stronger argument for keeping with precedent. I don’t really want engines to risk shipping breaking changes. So I think for the Stage 3 things, definitely. For things that already shipped, you’re probably right that we could get away with it, but I’m not sure it’s worth the cost of doing the investigation. But I’ll look at the things that are brand new. Maybe the detached -- sorry, the resizable array buffers are a good candidate for that.
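The hazard SYG describes, argument coercion running arbitrary user code in the middle of a built-in call, can be sketched as follows. This is an illustrative example, not code from the discussion; the `index` object is hypothetical:

```javascript
// Sketch: the index passed to %TypedArray%.prototype.fill is an object,
// and ToIntegerOrInfinity calls its valueOf method during coercion.
const log = [];
const ta = new Uint8Array(4);
const index = {
  valueOf() {
    // This runs in the middle of the built-in call; a malicious version
    // could resize or detach the underlying buffer here.
    log.push('valueOf ran during coercion');
    return 1;
  },
};
ta.fill(7, index); // start index coerces to 1, so indices 1..3 are filled
console.log(log[0]);            // 'valueOf ran during coercion'
console.log([...ta].join(',')); // '0,7,7,7'
```

Under the proposal, passing such an object where an integer is expected would throw instead of silently invoking `valueOf`.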
+ +PFC: The queue item I just put on is a handy [reference link to what web IDL does](https://webidl.spec.whatwg.org/#abstract-opdef-converttoint) for their coercion algorithms, which we actually used as inspiration when making these changes in Temporal. + +PFC: Number 5, coercing objects to primitives, actually requires a lot of extra tests in test262, as I found out, because it involves one or more calls into user code. So in order to test this observable behavior, for every proposal that does it, you have to test that it happens. You have to test that the function is called; you have to test whether the function is called before or after some other user-observable behavior. And all this for behavior that, as KG put it, is almost certainly a bug and nobody should ever be relying on. So that seems like kind of a waste of money and time. + +KG: Yeah. Definitely I have written lots of similar tests myself. + +SFC: Yeah, I have a few items here. What PFC just said is actually somewhat compelling to me, because if the way to spend less time on this is to not test it, maybe that’s a good thing. I will say that the slides that were presented here focus on degenerate cases. I think it’s also good to acknowledge that not all cases of coercion are bad -- that there can in some cases be good examples of coercion -- and we should look at both sides, not just focus on “coercion is always bad”. If we want to move on to my next topic -- + +KG: No. I’d like to comment on that. I agree with you that there are cases where the program that you are trying to write wants to rely on coercion. I am much less convinced that implicit coercion is actually desirable. Even in cases where you want coercion, I think for the benefit of your future readers, you should make the coercion explicit. And it’s not like it’s hard to make the coercion explicit.
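For reference, the explicit conversion options MF listed earlier do differ slightly at the edges. A quick sketch, using only standard built-ins:

```javascript
// Each explicit string-to-number conversion has its own edge behavior.
console.log(Number('12px'));       // NaN  (whole string must be numeric)
console.log(parseFloat('12px'));   // 12   (parses a numeric prefix)
console.log(parseInt('12px', 10)); // 12   (integer prefix only)
console.log(Number(''));           // 0    (empty string becomes 0)
console.log(parseFloat(''));       // NaN
console.log(+'1e3');               // 1000 (unary plus, same as Number)
```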
While I agree that there are cases where coercion is the thing that you want, I’m not at all convinced that implicit coercion is ever actually the thing you want. Okay, ever is too strong, but like 99.99%. + +RPR: NRO? + +NRO: Yeah, if there are some APIs in which it makes sense to coerce, sure, we can consider them on a case-by-case basis. I think this presentation is here to define a default way to move forward: new proposals by default don’t coerce, and then, sure, we can always make exceptions if it makes sense in very specific cases. + +RPR: So there’s 15 minutes remaining. Back to Shane. + +KG: Yeah, once we get to 10 minutes remaining, I’ll ask to go through these explicitly really quickly. But, yeah, we’ve got time. + +SFC: So, yeah, there are definitely other programming languages, not just JavaScript, that do coercion. C++ is a really good example that comes to mind. I’m not saying it’s always the right thing to do, but there are a lot of C++ developers who love writing (inaudible) because it makes the resulting code less verbose, and what ends up happening is there’s a lot of implicit conversions happening. + +KG: I agree that's a thing people write. + +SFC: But it’s definitely a style that a lot of people use and like to work with here. + +KG: I take precisely the opposite conclusion from C++. + +SYG: That is so opposite of what -- like, I cannot even process that statement. Every single C++ code base I have worked with has explicit rules, like, built into the commit queue: you cannot commit code that has a single-argument constructor that is not marked `explicit`. The fact that C++ made this the default is, like -- I have seen it treated universally as the wrong default. + +SFC: I’m not trying to make an argument that it’s right to do it.
I’m making an argument that there are developers out there who use JavaScript and other programming languages like C++, and this is how they learned to program and what they expect to write. We’re not -- yeah. I can also go on to my next topic. + +KG: Granted, people do this. I agree this is a thing that happens. My position is they shouldn’t, and we should stop letting them. + +SFC: I agree they shouldn’t. I don’t know if I agree so far that we should stop letting them. And I think that’s maybe the heart of the issue here. But can I go on to my next topic? + +KG: Yeah, go ahead. + +SFC: Yeah, so just an example of where coercion has been somewhat helpful for us: the `Intl.NumberFormat.prototype.format` function has, you know, coerced everything to a number for ages, and this means if you pass in a string that has a number inside of it, we coerce that string to a number, which meant that we were sort of converting it down into, you know, floating point format. And we were able to make a very small change with a very small API surface so that now we accept the string and, you know, consume all the digits of the string in a very clean way that is basically backwards and forwards compatible, which is quite nice. And we were able to do that only because we were coercing; if we hadn’t been coercing, it’s much harder for developers to write code that works backwards and forwards on the same function. Another case in Intl, from my NumberFormat proposal, where we relied on similar behavior is in the `useGrouping` setting, and you can go back and look at the slides on that. I believe we discussed that quite a bit, you and I, on exactly what the behavior should be there. So I’m just giving at least some evidence that coercion is not always bad.
You got away with changing the behavior of NumberFormat prototype toString -- sorry, NumberFormat prototype format -- to handle strings explicitly, but that meant that it was technically a breaking change. Like, you started treating strings that were previously legal as having different behavior. I would have preferred if the method had previously only accepted numbers, and then we would later have had room to pass strings and have them change behavior from an error to being something new. Now, in that particular case it worked out and it was okay to change the behavior. But just in general, I think that the coercing makes those types of changes harder, not easier. + +NRO: This is not just KG’s opinion; it comes up often when we say “let’s make it throw now so in the future we can more easily extend its behavior”, because we in general consider it a safe change to stop throwing if there is a use case, whereas changing some already existing -- maybe weird, but visible -- behavior has always been considered much harder, though not impossible. + +RPR: All right. Yeah, we’re coming up on the 10 minutes. Let’s go to (inaudible). + +BSH: So I feel like there’s some concern that SFC has that I’m not quite getting. All that we’re suggesting here is that future APIs will not coerce unless there’s some strong reason to do so, instead of coercing by default. What is the bad thing that you’re afraid would happen if we made that change, SFC? + +SFC: I think that’s a very loaded question. At no point did I say that I think bad things would happen if we made such a change. + +BSH: Sorry, it wasn’t intended to be loaded. I feel like there’s some concern you have that I don’t understand.
+ +SFC: I’m saying that -- well, if we get to my very last queue item, which was my first queue item, which I was asked to delay until later -- I think probably the most useful one is coercion to string, but that’s not the queue item we’re on right now. I definitely think there are some cases where coercion is the right behavior. I’m not saying that’s always the case, and it’s probably perfectly reasonable to take a default behavior of not coercing in some of those cases. Also, it’s sort of hard to take this position -- I think it’s important to express the opposite of the position of most of the people on the committee here, because I think it is important for us to get to the bottom of this. And I think that, yeah, we should continue going through the queue. + +### asking for consensus + +KG: Okay. I actually want to not continue going through the queue. Shane, I know that means that we won’t get to your item -- in fact, I will briefly get to it. But since we’re short on time, I want to make sure we have a chance to go through the less controversial ones of these. So I just want to go through each of these in turn and ask for explicit consensus on them. In some of the cases I won’t ask, because we’ve heard objections and there’s not going to be time to go through all of the discussion of that. But for some of them we haven’t, so I want to hopefully get agreement on the less controversial ones. So in particular, I would like to ask for committee consensus on the statement that is on the screen here -- not as a universal rule, just as the default -- that any new APIs that take integral numbers treat NaN, and anything that coerces to NaN, as a range error. We’ve had general support, and unless anyone objects, I will treat that as consensus. + +RPR: We’ve got explicit support from PFC, CDA, LCA, so, yeah, you have explicit support. And I’m not hearing any objections. + +KG: Okay, thanks very much.
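For illustration (not from the slides), the existing behavior that this consensus changes for new APIs looks like this:

```javascript
// Today, integer-taking APIs silently coerce NaN (and values that
// coerce to NaN) to 0 via ToIntegerOrInfinity.
const arr = ['a', 'b', 'c'];
console.log(arr.at(NaN));             // 'a' (NaN becomes index 0)
console.log(arr.at('not a number'));  // 'a' (string coerces to NaN, then 0)
console.log('abc'.charAt(undefined)); // 'a'
// Under the new default, a new API in this situation would throw a
// RangeError instead of treating NaN as 0.
```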
Next, don’t coerce undefined to other things. + +RPR: Hang on. Sorry, Dan has a question. + +DE: No, no, sorry, it’s unclear to me what the scope is. Are we making these hard decisions about all new APIs? + +KG: No, very explicitly, these are just the defaults. If you are coming back with a proposal in the future and you want to deviate from these rules, you should tell the committee why. If you’re designing something and there’s no particular reason to do it differently, you should do it this way. + +DE: Thanks. + +KG: Okay, next one. Same thing, but with coercing missing arguments and undefined to anything else. So if this is a JS function which takes a string, don’t treat undefined as the string "undefined", etc. I’d like to ask for explicit consensus on this. I see we have a question from Shu. Go ahead. + +SYG: I have a question about this. So the direction is: if you don’t pass anything and it defaults to undefined, we don’t coerce that, or if you explicitly pass undefined, we don’t coerce that either? You’re not proposing anything in this case about the -- oh, okay, maybe I just misunderstood. This is to cover both explicit undefined and arity mismatches. + +KG: Right now the language treats those the same and I want to continue treating them the same. The web platform doesn’t treat them the same, but JS does. So, yes, to cover both cases. + +RPR: And NRO has a question. + +NRO: With Shu’s question answered: would this need to throw if there are more arguments than expected? + +KG: Yeah. I am not proposing anything around that at this time. Only about missing required arguments. + +SYG: And you’re also not proposing that, if a proposal has a method with optional arguments, it use, like, the “not present” language over the “undefined” language -- just that by default, if you don’t do anything, it throws? + +KG: Correct. So for things where there are optional arguments, undefined and missing will be treated identically and will get the default.
It’s only in the case of required arguments. Just don’t coerce undefined is the rule. + +RPR: Shane? + +SFC: What if you pass null where there’s an explicit argument required? + +KG: Not talking about null on this slide at all, only undefined. + +SFC: I see. + +RPR: All right, so there remain two voices of support. Any objection to this? No objections. + +KG: Okay. And then the more general one I’m not going to ask for consensus for, because we heard from Bradford -- and other people, I think it was Bradford, anyway -- that there are more use cases for this. I still think it’s warranted, but we’re not going to have time to come to consensus on other types of coercion, so I’m not going to ask for consensus on this. I will probably come back in a future meeting and continue this topic. + +KG: Next, stop rounding non-integral numbers in integer-taking APIs -- and this includes things that coerce to non-integral numbers. If you pass something which coerces to the number 1.5, and that would get truncated to 1, my proposal is that it would be a range error. + +RPR: We have support from Dan, Nicolo, Philip, and Michael has made -- I’m sorry -- + +SFC: (from queue) needs more discussion. + +KG: Okay. I’m happy to take that as not consensus, although for future discussion, Shane, could you say what you don’t like about this. + +SFC: There are Intl APIs that take these, and I need more time to review what the impact would be. + +KG: Sounds good. I’ll also plan on bringing this one back in the future. + +KG: Stop coercing objects: we didn’t have a chance to get to Shane’s item about this, but, yes, granted it is occasionally useful. I still think that this change is warranted, but since we’re not going to have time to talk about it in sufficient detail, I will not ask for consensus for this one at this time.
However, I would at least like to stop coercing arrays to primitives -- actually, no, okay, I’m not going to ask for consensus at this time, because it would be subsumed by the other one, coercing objects to primitives. So at a future meeting we'll talk about coercing objects to primitives, and if we come to consensus that we would like to continue to coerce objects to primitives in general, I will still ask about maybe making arrays an exception to that rule. Since we haven’t had time to discuss coercing objects, I’m not going to ask for consensus on arrays at this time. + +### next steps and summary + +KG: I do want to comment on the next steps. I will review existing Stage 3 proposals and come back with normative changes where appropriate to have them follow these rules. I’ll also glance at anything that’s Stage 4 that I think might be worth making these changes for, and suggest making changes to those as well if there’s appetite. I will also follow up with web IDL for the things we have just got consensus for, let them know of the ongoing discussions on other topics, and propose changes to web IDL, although it’s going to be a massive pain just in terms of modifying all of the existing specifications. + +RPR: One minute for the summary. + +KG: Okay, great. To summarize, we have discussed the topic of reducing coercion in general, and while acknowledging that this has potential for confusion for developers who have learned the expectation that things will be coerced, we as a whole think that doing less coercion is probably worth it, despite the break with precedent.
Concretely, we got consensus for no longer coercing NaN, and things that coerce to NaN, in integer-taking APIs, and got consensus for not coercing undefined to any other thing. This includes missing arguments; it applies only to undefined, not to null or any other primitive, and doesn’t apply to additional arguments -- just, when there’s an undefined or missing argument for a required argument, that will no longer be coerced. We did not get consensus, although no explicit lack of consensus, on coercing primitive types to other primitive types. We’ll continue that discussion later. Those are the only things we got consensus for. + +RPR: Sorry, Shane has a slight disagreement with -- + +SFC: So you said that we agree that we should not coerce, despite developers sometimes expecting things to be coerced. I don’t think we agreed on that policy as a committee. We did agree that there are definitely cases where it’s really bad. + +KG: Okay, we agreed that the level of coercion that we are doing right now should be reduced. Can we say that? + +SFC: Yes. + +KG: Okay, good. So we didn’t get consensus on the other items discussed. In particular, we did not get consensus for refraining from coercing primitive types to other primitive types, or stopping truncation of numbers, or stopping coercion of objects to primitives in general. But I will plan to come back to discuss those more later. So let’s all look forward to that, I guess. And again, these are not to be taken as hard and fast rules, just as the defaults to follow in the absence of compelling reasons to do otherwise. + +RPR: Thank you, Kevin. And so, Kevin and Shane, please do review the summary of this. Thank you. + +### Summary + +Committee agrees that the level of coercion that we are doing right now should be reduced, despite concerns about developer confusion caused by new inconsistency, without universal agreement about which concrete cases.
There were cases made for the utility of string<->number coercion and of coercing objects to strings, and SFC wanted more time to consider implications of no longer accepting non-integral numbers in integer-taking APIs. Consensus was reached that in new proposals we should default (in the absence of a particular reason to do otherwise) to not coercing NaN to 0 and to not coercing undefined (or missing arguments) to any other type (in the case of required arguments). + +## Reducing wasted effort due to proposal churn (continuation) + +Presenter: Michael Ficarra (MF) + +- [slides](https://docs.google.com/presentation/d/1V3Fg6HVC-VA41YCu0Yhqynvqhsu5kVj7tiWuVfp8S90/) + +RPR: Michael, do you want to -- okay, you’re already sharing? Do you want to do any intro or should we just go straight to Philip on the queue? + +MF: Let’s go to the queue. + +RPR: Philip? + +PFC: My experience with having written test262 tests -- not just for Temporal, but also in the capacity of having worked on the Google-funded efforts to write tests for Stage 3 proposals -- is that it is really important to have a clear signal of stability in order to write the tests effectively, because if you are writing the tests at the same time that champions are going back and redesigning things, it’s quite a lot of extra work. And I guess, actually, this was a response to Eemeli asking why we need an extra stage for this. So my experience is that, yes, if we are going to require this, then we should have an extra stage. + +SFC: I’m just observing that educator feedback has also normally been one of the things that we accept as grounds for Stage 3 modifications, but it’s not listed here; only implementer feedback is listed. My understanding is we’ve long understood educator feedback to also be a Stage 3 type of feedback. + +DE: We should be collecting educator feedback before Stage 3–collecting this feedback earlier is why I set up the educator outreach group in the first place.
Educator-type feedback -- feedback about the design, abstractly -- we should do all the work that we can to collect at this design stage; then we can feed that into what gets developed in tests and implementations. Frequently, educators don’t hear about things until they are already Stage 3, and I think our efforts should be focused on outreach. I understand this type of outreach before Stage 3 can make some committee members nervous because it maybe makes things too hyped, but on the other hand, this would save everyone the work of changing something later. This is something I feel very strongly about. + +SYG: What is this? Okay, I wrote this when the feedback was mostly positive. I’m not sure that’s my current read. I don’t remember. It might be more mixed. But I do have some ideas for concrete implementation details. Namely, I’ve been a proponent of lowering the barrier to test writing for test262 to ease more contributors into it and to entice implementers to write test262 tests instead of engine-specific tests. And I think we also heard in chat from Ron Buckton that there is currently a barrier to entry, in that it’s not clear, or it’s hard, on Windows to run the harness and test262. So for concrete implementation here, I think a big part of this would be to do possibly even more work to lower the barrier of entry for writing test262 tests. But at least, I think, we should not require anything more formal than staging for this new stage. Since Stage 4 already requires the staging tests to be graduated out of staging into the main trunk, then for this new stage to introduce as little friction as possible, I think we should limit it to requiring no more than staging. I thought I had some other thoughts, but I don’t really remember. So I’ll leave it at that. + +EAO: So regarding the mostly positive state of affairs, I think I agree with that.
Particularly, what SYG said earlier ended up convincing me that this sounds like a very good idea. I’d be happy for us to move on. I do have thoughts on bikeshedding about the name for this one, though. + +DE: So I completely agree with SYG’s comment that, if we are requiring tests to get to Stage 3, the requirements on those early tests should be looser than for the final Stage 4 tests. In particular, there are lots of tests that are developed in the context of implementing a feature in a JS engine, sometimes written in that engine’s test framework. I’m wondering what the appetite is from people working on tests in JavaScript implementations–there’s a lot of people in that group–in sharing those tests through the test262 staging directory. We could use web-platform-tests-style two-way synchronization between each engine and the test262 staging directory, so all the JavaScript engines can share tests with each other. This is something that’s done in web platform tests, and we have the opportunity, with a lot of work, to potentially do it with JavaScript as well. + +NRO: Yeah, I think we would be using this staging directory much less with this proposal. Because, like, the staging directory is mostly used when engines have to write all of their new tests for proposals, and in these cases they would still write their own tests, but a very big part of the tests for the feature would be available before the implementation starts. + +DE: I’m proposing radically greater use of the staging directory, and I’m interested in more feedback, but that’s not the current topic today. [Note from DE: sharing tests earlier might indeed result in fewer different versions being written which are testing the same thing, so I agree with NRO as well.] + +SYG: I agree with DE. I’m proposing expanded use of staging for this new stage.
+ +SFC: So I think one thing that happens a lot with these proposals that reach later stages is that implementers give feedback, and then, as a result of the implementer feedback, the proposals churn a lot, and then the implementers are scared: I don’t want to test this proposal again, it’s changing too much, right? And this is how we get a lot of proposals that are in Stage 3 that are never implemented: they get to Stage 3, get the first round of feedback, and then churn forever and never get implemented, right? I was wondering if maybe an approach we should consider taking here is, when a proposal reaches Stage 3 and gets implementer feedback, it goes back to Stage 2 or 2.75, and you resolve the feedback and atomically say “here is my thing”, and implementers take a second look at it, and maybe that happens three or four times. I think for small proposals it happens once or twice, for Temporal three or four times, and that could give us very clear checkpoints; otherwise we get into the problem where there’s a proposal that shouldn’t be a Stage 3 proposal because now there are bigger design questions. + +SYG: Wait, a lot of proposals get stuck in Stage 3? I don’t think that’s an accurate description. + +SFC: I’m looking at Intl Segmenter, Intl DurationFormat, and Temporal, which have all had the problem of feedback from implementers that there are issues in Stage 3. That is a continuing concern which scares away implementers. + +SYG: I would exclude Temporal. The other two I can’t speak to. But Temporal is just, to me, a proposal that does not work in the staging model. It is too big and too complex to ever fit into our staging model. + +DE: I’m not sure I agree with SFC’s assessment of what makes Stage 3 proposals churn–those particular cases had some late design-level feedback from implementers.
To keep things from getting stuck in Stage 3, I think more checkpoints could be good, as PKA has on the agenda this meeting–asking, “hey, how is this proposal doing?”. I feel like this is pretty orthogonal to the staging changes or testing changes. What are you proposing that we do? + +SFC: I’m proposing concretely that if some critical mass of feedback comes in from implementers or elsewhere when a proposal reaches Stage 3, we actually formally move the thing back to Stage 2 or Stage 2 3/4 or whatever we call it. + +DE: Yeah, I think we’ve been establishing this practice more frequently of moving proposals back to earlier stages. And if we add this new stage, presumably that will be another target stage to consider. Our practice is conservative–we require consensus whenever changing a proposal’s stage, whether up or down. Let’s keep reviewing proposals in committee, and reconsidering stages when appropriate. + +MLS: So actually I’ll go ahead and speak. I agree with SYG. I don’t think that this back and forth is what’s stopping proposals from being implemented. I personally believe that the stage a proposal is at is the signal to implementers. We have moved some proposals back to Stage 2 because there were some significant changes in syntax or naming or things like that. And we take that to heart with what we do. Oftentimes we’ll just turn off the code if it’s already in and fix it. But I don’t agree, Shane, that this back and forth is what’s slowing us down. And I’ll speak to Temporal. Temporal is a huge proposal, and there’s a lot of work to finish it and make it part of an implementation. We’re working on it, we’re working on it slowly. + +PFC: I have to agree with SYG about Temporal not fitting the staging model. + +SFC: So just thinking about DurationFormat as a sort of recent example of this, and to be clear, it’s not the only example of this.
It might be useful, as part of this discussion, since we’re talking about stage processes, to formalize what criteria we should use to say that a proposal should be downgraded from Stage 3 back to Stage 2, because we’ve been talking a lot about the whole “ready to ship” signal. I think what happens sometimes is that there’s a problem in Stage 3, for example DurationFormat, and I love that JSC is shipping DurationFormat, but there are still a couple of normative changes we’re merging. In an ideal world, all developers would wait until they’re merged, and then it can go back to Stage 3 and be shipped again. I feel like Stage 3 is a very good signal that a proposal is shippable. I guess another thing I’m trying to say is: if “ready to ship” is something that we want to codify, and Stage 3 is that ready-to-ship signal, then that means if there’s a proposal with normative changes of any substantial size, whatever criteria that might be, we should be very explicit: okay, that’s red-flagged; by default it goes back to Stage 2 and has to reapply for Stage 3. + +MLS: This is a worthy discussion. I’m not sure this is what MF had in mind when he prepared this. We’ve been using Stage 3 to Stage 2 demotion recently. I think we should maybe have some formal discussions about why we do that and what gets you back to Stage 3. + +DE: Yeah, I think we’ve been saying Stage 3 is the time to implement in native engines, and everybody makes their own policies about when they want to ship things. Anyway, I think we do have an emerging criterion here, which is that if the committee has consensus that we disagree with core parts of the design, then it’s not ready to implement in native engines and Stage 3 isn’t appropriate. I say ‘native engines’ because I really want to encourage polyfill and transpiler and engine262 implementations before Stage 3.
+ +RPR: WH asks: how do you write tests for things like syntax without an implementation? We cannot hear you, Waldemar, if you are speaking. And we’ve got two minutes left. I think really we should probably be summarizing. + +MF: I think my previous summary applies. I’d rather use the time to hear about -- + +DE: Yeah. I think you write a test without an implementation the way you write an implementation without a test, which is in a way that is partly correct and also has some bugs. So the spiral comment from Michael applies–you go back and forth. This has been done repeatedly; for example, ES6 had tests before engines were fully complete. We’ve been trying to do this with proposals like Temporal: developing tests and polyfills before Stage 3. Even if we don’t adopt a new stage, it’d still be good to do this kind of work before Stage 3 as a best practice–we’d get benefit. But I do think adopting a new stage would be the more solid way of saying: we’ve experimented with this good practice and we want to codify it. + +RPR: Right. Shu, do you want to have the last word about spiraling? + +SYG: What’s the next comment? + +RPR: We are at time. + +SYG: I would like to let Michael Ficarra decide on the queue which one he wants to do. + +MF: Go ahead, Shu. + +SYG: My spiel about spiraling is -- since we talk about Temporal, I think something that’s become clear to me over the years is that TC39’s working model, in stark contrast with web incubation and how features are developed in other languages and platforms, is that we design up front. That is explicitly how we work and how we have worked. The design-up-front part is where there’s friction, because that is in fact not how software in general is developed. Software in general is developed in a spiral. There is iteration: we find bugs, you ask users for feedback, you come back and you iterate.
But TC39 works by always designing up front and throwing it over the wall. We’re not trying to abandon that model, because it has pluses for democratization and being on the same page, for more stability, et cetera, but there are downsides, and this is one of the downsides. And I think this proposal is a good intermediate, incremental improvement that still keeps the design-up-front model. But at the same time, as has been pointed out, I think the design-up-front model just doesn’t scale to the size of things like Temporal, which requires the usual software design model of iterating and spiraling back and forth. And a spinoff conversation we could have is: if we want to tackle those huge things, what should we do? Is something like what was proposed here enough? Should we take the industry best practices that we know for building software and go to incubation? That’s a conversation I’d like to have in the future. + +RPR: All right. We’re past time, so Michael, do you want to quickly summarize? + +MF: No, same summary as before, I think. + +DE: Michael Saboff had one pretty critical point. Can we get to that? Can we go over time? + +RPR: Go ahead, MLS. + +MLS: I have grave concerns that this will actually not result in the improvements intended. I understand the reason we’re doing this, but I think if you add a new stage, you add new delay. My spiral discussion is more about how oftentimes the implementation drives the tests and the tests drive the implementation, back and forth. I think that’s kind of the comment that WH wanted to make: sometimes we start with tests and without an implementation, and vice versa. So I think our current staging actually works; we just need to be more diligent about what we do during Stage 3, writing tests before implementations, or writing them together. + +RPR: All right. Okay. So is there anything more you wanted to say, Michael?
+ +### Summary + +Re the “spiraling”, the thing we need to keep in mind is that it’s fine if we have *an* implementation during Stage 2.75 (DE suggested polyfills/transpilers, but it can also be an engine); the idea is just to avoid recommending the feature for general implementation, which is where the aggregate work is extremely high. + +### Conclusion + +No change to the stage process is made. + +## Closing + +RPR: Thank you very much. All right, then, so we are done with our agenda. I think we should have a round of applause for our hosts here in Bergen, Mikhail and your assistants. And, yeah, thank you to the observers and students that have participated. I hope this has been fun and interesting. + +RPR: Julie, thank you so much for your work. And everybody who helped us with notes, and the note takers. Obviously ACE has done a lot, as have JKP and CHU, and I’m sure there are more people in the room as well. So, yeah, thank you so much for that. I think this has been an excellent meeting. + +SHN: I want to thank you all for taking the time to do the summaries. Much appreciated. + +RPR: Yes. So obviously we’ll do the usual posting of the notes. Please do review the summaries and the transcripts. And then also hope to see any of you that can make it to the next meeting. That is at the end of September in Tokyo, being held at the Bloomberg offices; all the details are on the reflector. And I think we should probably also have a conversation about whether to mandate masks at that event or not, so welcome to feedback on that. All right. + +RPR: We are done.