Rust ML WG Meeting 00008

Meeting Info

Date: 2020-08-11

Start time: 16:00 ET

Zoom link: https://ucsb.zoom.us/j/96310195426

  • Meeting ID: 963 1019 5426

Agenda

  • Chris M is working on a new draft of the working group's GitHub rust-ml/wg repository README, to better reflect what's actually happening

    • Would like some input on those ideas
  • Request for people to add/update the Task Board

    • Is this still something that people find useful, and/or do people actually know about it?
  • Archiving the GitHub "discussion" repository, and suggesting that further GitHub discussion of WG matters be conducted in issues on the wg repository.

    • Generally, the group's GitHub organization is somewhat messy, with a few dead repositories that might make sense to archive.

Participants

  • Chris M
  • Tiberio Ferreira
  • degausser
  • Paul K

Minutes

New rust-ml/wg/README.md

  • The new draft is up at rust-ml/wg#3
    • We went through the new changes and made some comments on the PR

Linfa 0.2

  • Close to a release on this
  • After that, announce the Working Group in some forums or This Week in Rust to get more interest/help from other people
  • We should make sure the chat, repos, and READMEs are all up to date before announcements go out

Where to talk about Linfa

  • Reddit: r/rust and r/MachineLearning
  • This Week in Rust
  • https://users.rust-lang.org/
  • Maybe post on internals
  • Discord post on Linfa and the Working Group

Discussion Archival

  • Yes, let's archive it
  • Open an ongoing issue for group-level (meta) matters
  • Also archive the classical-discussion and nlp-discussion repos

Task Board

  • Ricky: likes it, but we should focus on updating it more
  • Chris M: agrees; we should update it more

Tsuga

  • Completed benchmarking recently
    • Ran the layers' activation functions as parallel operations
      • Parallelizing actually slows things down: roughly 15% overhead on MNIST
  • Tested on the MNIST dataset
    • Uses the NIST crate
    • Converts the raw arrays to ndarray
    • ~3.5 s for 92% accuracy
    • Accuracy on a 3-layer network gets to about 97%
  • Strong points for Tsuga
    • Code is more readable
    • Uses simple data structures
      • 32-bit 2D inputs
    • Forward- and back-pass code is ~20 LoC
    • Scales through layers
      • No need to hardcode the number of layers
    • Likes the way the API works for creating and training networks (see the sketch below)
      • Often overlooked in ML
      • Especially in Rust
  • Working on a new, much more complicated dataset
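
To make the network-creation point concrete, here is a minimal sketch of the pattern described above: a network whose layer count is driven by the data rather than hardcoded, with a short forward pass over ndarray inputs. Every name and signature here is an illustrative assumption, not Tsuga's actual API.

```rust
// Minimal sketch of a layer-count-agnostic network. Illustrative only;
// NOT Tsuga's actual API -- types and methods here are assumptions.
use ndarray::Array2;

struct Layer {
    weights: Array2<f32>, // biases and per-layer activation choice elided
}

struct Network {
    layers: Vec<Layer>,
}

impl Network {
    // Build one layer per adjacent pair of sizes: &[784, 128, 10] yields
    // two weight matrices, with no layer count hardcoded anywhere.
    fn new(sizes: &[usize]) -> Self {
        let layers = sizes
            .windows(2)
            .map(|w| Layer {
                weights: Array2::zeros((w[0], w[1])),
            })
            .collect();
        Network { layers }
    }

    // The forward pass folds the input through every layer with a ReLU,
    // so it scales to any depth without code changes.
    fn forward(&self, input: Array2<f32>) -> Array2<f32> {
        self.layers.iter().fold(input, |x, layer| {
            x.dot(&layer.weights).mapv(|v| v.max(0.0))
        })
    }
}

fn main() {
    // A single 28x28 image flattened to a 1x784 row of 32-bit floats,
    // matching the "32-bit 2D inputs" point above.
    let net = Network::new(&[784, 128, 10]);
    let image = Array2::<f32>::zeros((1, 784));
    let output = net.forward(image);
    println!("output shape: {:?}", output.shape()); // [1, 10]
}
```

Keeping the per-layer step to a single fold is what lets the depth stay configurable, in line with the "scales through layers" point above.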

Linfa 0.2 Roadmap and Updates

  • Adding algorithms
    • Neither of the two currently in progress will block 0.2
  • It is more critical to have an updated README
  • How much more do they want for 0.2?
    • Luca has set the current roadmap, but no issues have been broken out from it yet
    • We should have good first issues ready beforehand for new contributors
  • A contributor guide
  • A guide for what they are doing with the API
    • There is no firm idea yet of what the API should look like
    • However, patterns are emerging (see the sketch after this list)
  • Issue templates and CI
  • How much polishing should take place before 0.2 ships?
  • Before 0.2: README, contributor guide, at least some roadmap updates, and good first issues
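
Since the API has no firm shape yet but patterns are emerging, the sketch below illustrates one pattern commonly used by ML crates in this space: a hyperparameter builder that is consumed by a fit step, returning a fitted model that exposes predict. Every type and method name here (KMeansParams, fit, predict, and so on) is an assumption for illustration, not Linfa's actual API.

```rust
// Hypothetical params-builder -> fit -> predict pattern. Illustrative
// only; NOT Linfa's actual API -- all names here are assumptions.
use ndarray::{Array1, Array2};

struct KMeansParams {
    n_clusters: usize,
    max_iterations: usize,
}

struct KMeansModel {
    centroids: Array2<f64>,
}

impl KMeansParams {
    fn new(n_clusters: usize) -> Self {
        Self { n_clusters, max_iterations: 100 }
    }

    // Builder-style setter, taken by value so calls can be chained.
    fn max_iterations(mut self, n: usize) -> Self {
        self.max_iterations = n;
        self
    }

    // Fitting consumes the params and returns a trained model.
    fn fit(self, observations: &Array2<f64>) -> KMeansModel {
        let n_features = observations.ncols();
        let centroids = Array2::zeros((self.n_clusters, n_features));
        for _iter in 0..self.max_iterations {
            // A real implementation would alternate the point-assignment
            // and centroid-update steps here; the refinement is elided.
        }
        KMeansModel { centroids }
    }
}

impl KMeansModel {
    // Assign each observation to the index of its nearest centroid.
    fn predict(&self, observations: &Array2<f64>) -> Array1<usize> {
        Array1::from_iter(observations.rows().into_iter().map(|row| {
            self.centroids
                .rows()
                .into_iter()
                .enumerate()
                .min_by(|(_, a), (_, b)| {
                    let da: f64 = (&row - a).mapv(|v| v * v).sum();
                    let db: f64 = (&row - b).mapv(|v| v * v).sum();
                    da.partial_cmp(&db).unwrap()
                })
                .map(|(i, _)| i)
                .unwrap()
        }))
    }
}

fn main() {
    let data = Array2::<f64>::zeros((10, 2));
    let model = KMeansParams::new(3).max_iterations(50).fit(&data);
    let labels = model.predict(&data);
    println!("labels: {:?}", labels);
}
```

One nice property of this shape is that hyperparameters become immutable once fit is called, and every algorithm gets a uniform entry point, which would make extracting a shared fitting trait straightforward later.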

Actions

  • Archive repositories (discussion, classical-discussion, nlp-discussion)
  • Possibly have a blog on arewelearningyet.com
    • Post about Tsuga's successes
  • Take another look at MIT licensing?
    • What is everyone else using that isn't MIT or Apache-2.0?
    • Machine learning can easily be put to bad uses, so how do we put in a good-faith effort to prevent that?
  • Ricky and Chris will take a look at the Linfa documentation
  • Code-coverage help with Linfa's CI?
    • Direct-message the owner of the tool on Zulip