
rsact_ui


rsact_ui is the place where widgets, layouts, styles, events, and other UI things live.

Core concepts

Widget and El

Let's start with Widget, the trait each widget implements. Its methods define the widget's lifecycle behavior (a rough skeleton of the trait follows the list below):

  • meta: builds a tree of Memos giving meta-information about widgets. Interactive elements have an id in their meta-info and often the behavior flag FOCUSABLE (for example, so you can interact with them through an encoder). Take a button as a simple example of a meta implementation.
  • on_mount: called only once per widget, at the moment it (or its parent) is mounted into the UI. It accepts a MountCtx reactive context to subscribe to global UI states, for example theme styles.
  • layout: just a getter for the Signal<Layout> each widget stores inside.
  • build_layout_tree: similar to meta, builds a tree of the layouts of the widget and its children. The tree is a MemoTree, i.e. a reactive tree, and is rebuilt for a widget whenever its children's layouts change.
  • draw: the actual drawing (rendering) of the widget. The draw method does not (and should not) create any new reactive state, only use it. draw is reactively tracked: if no reactive state used in a particular widget's draw method changes, the widget won't be redrawn.
  • on_event: describes the behavior of the widget on receiving an event, for example a button press.
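
Putting that together, here is a compilable skeleton of the lifecycle described above. It is a sketch only: the stub types stand in for the real ones, and the actual rsact_ui signatures (which are generic over W: WidgetCtx) differ.

// Stub types so this sketch compiles on its own; the real rsact_ui
// types and signatures are assumptions here.
struct MetaTree;
struct MountCtx;
struct Layout;
struct Signal<T>(T);
struct MemoTree<T>(T);
struct Renderer;
struct Event;
struct EventResponse;

// The shape of the lifecycle described above, not the literal trait.
trait Widget {
    fn meta(&self) -> MetaTree;                      // reactive meta-info tree
    fn on_mount(&mut self, ctx: MountCtx);           // one-time mount hook
    fn layout(&self) -> Signal<Layout>;              // getter for the stored layout
    fn build_layout_tree(&self) -> MemoTree<Layout>; // reactive layout tree
    fn draw(&self, renderer: &mut Renderer);         // reactively-tracked rendering
    fn on_event(&mut self, event: &Event) -> EventResponse; // input handling
}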

In addition to the Widget trait, there are two helper traits: SizedWidget and BlockModelWidget:

  • SizedWidget contains helper methods such as fill and shrink for convenient widget sizing.
  • BlockModelWidget gives handy control over the widget's layout block model (like the CSS box model), defining border, padding, etc.

You don't need to implement the methods from SizedWidget and BlockModelWidget; they all have default implementations. Just implement these traits where you need them.
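
As a quick illustration of the builder style these helpers enable (a sketch: fill is named above, while padding is an assumed BlockModelWidget spelling):

// Hypothetical usage; exact method names are partly assumed.
let button = Button::new("Ok")
    .fill()       // SizedWidget: take the available space
    .padding(4);  // BlockModelWidget: inner spacing in the block model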

El is a wrapper struct around dyn Widget; it boxes the widget so it can be stored as a child. In most cases you don't deal with the Widget trait itself, but with El.

WidgetCtx

WTF, or WidgetTypeFamily, is a collection of types specific to your platform -- Renderer, Event, Styler and PageId -- and it is passed everywhere. It is the only implementor of the WidgetCtx trait that widgets and other structures depend on and accept. You don't need to care about WidgetCtx until you write logic where elements are received or returned, for example a component function (more on this below). You can see the common W: WidgetCtx generic passed everywhere in rsact_ui.
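
For example, a reusable component function stays generic over the type family and returns an El. This is a sketch: the el() boxing call is an assumption, not confirmed API.

// Hypothetical component function, generic over the widget type family.
fn labeled_button<W: WidgetCtx>(label: &'static str) -> El<W> {
    Button::new(label).el() // box the widget into an El (assumed spelling)
}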

Renderer

The renderer can be found at WidgetCtx::Renderer. It's a pretty low-level concept and I will document it more deeply one day, but for now you only need to know that there's only a single Renderer implementation -- LayeringRenderer, built for embedded_graphics DrawTarget implementations. It supports layers, antialiasing, and clipped and cropped sub-parts.
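
For instance, in the simulator case, the DrawTarget the renderer ultimately draws into can be created like this (plain embedded_graphics_simulator API; how the display is handed over to rsact_ui is omitted here):

use embedded_graphics::{pixelcolor::BinaryColor, prelude::*};
use embedded_graphics_simulator::SimulatorDisplay;

fn main() {
    // A DrawTarget that LayeringRenderer can render into.
    let display = SimulatorDisplay::<BinaryColor>::new(Size::new(128, 64));
    // ... build the UI around this display ...
}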

WidgetCtx::Event

This is the base Event trait. It requires some system-level implementations -- FocusEvent + ExitEvent + DevToolsToggle + DevElHover -- that I hope to get rid of one day to make things more generic. Event by itself is not only a super-trait of events, but a concept for widget-specific logic. For example, Button has a corresponding ButtonEvent which you can implement so that the button is pressed on screen touch, encoder button click, mouse click, key press, etc. That makes events generic and leaves it up to you to control how widgets act on input. For example, you can have a separate hardware button that only controls clicking UI buttons, but not toggling checkboxes.

"Up to you" is nice to say, but hard to implement, and I understand it, thus there are some predefined "just-use" event types. For now, it's only simulator_single_encoder created for use with embedded_graphics_simulator simulating single encoder with button input device. But I plan to add more, such as PC (mouse + keyboard).

What does this look like, by the way? You declare some kind of Event enum, implement ButtonEvent for it, then pass events into the tick method in your hot loop.

enum Event {
    ButtonUp,
    ButtonDown,
}

impl ButtonEvent for Event {
    // A "down" event counts as a press...
    fn as_button_press(&self) -> bool {
        matches!(self, Self::ButtonDown)
    }

    // ...and an "up" event counts as a release.
    fn as_button_release(&self) -> bool {
        matches!(self, Self::ButtonUp)
    }
}

fn main() {
    // ...
    loop {
        // Collect fresh input events on every iteration of the hot loop;
        // fetching them once outside the loop would replay stale events
        // (and move the Vec on the first tick).
        let events: Vec<Event> = get_events();
        ui.tick(events);
    }
}

rsact will pass your events through the widget tree, stopping at the button that accepted the event.

Widget::on_event

The Widget::on_event method returns EventResponse<W> as its handling result. EventResponse is an alias for core::ops::ControlFlow<Capture<W>, Propagate>, which allows handy propagation control in on_event implementations. A particular widget can either capture an event, stopping further propagation by returning Capture, or continue the propagation with Propagate. While Propagate means ignoring the event, Capture is a bit more complex:

  • A widget can simply capture the event with Capture::Captured
  • Or bubble it with Capture::Bubbled(BubbledData) to send some data to its parents. The latter is used, for example, to implement focusing. A minimal sketch of the whole flow follows.
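
To make the control flow concrete, here is a self-contained sketch built directly on core::ops::ControlFlow. The Capture and Propagate stand-ins below are simplified shapes, not the real rsact_ui definitions.

use core::ops::ControlFlow;

// Simplified stand-ins for rsact_ui's types (real shapes differ).
enum Capture { Captured }
enum Propagate { Ignored }

// Mirrors the alias described above.
type EventResponse = ControlFlow<Capture, Propagate>;

fn on_event(is_our_press: bool) -> EventResponse {
    if is_our_press {
        // Handled: stop propagation through the tree.
        ControlFlow::Break(Capture::Captured)
    } else {
        // Not ours: let the event continue to other widgets.
        ControlFlow::Continue(Propagate::Ignored)
    }
}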

Styles

I'm working on some changes around styling, so I don't want to document much yet, as the logic may change a lot.

Pages

A Page takes up your whole screen, contains some root element, and can be routed between. Pages are identified by the generic WidgetCtx::PageId type: you can use strings, a custom enum listing all possible pages, or the built-in SinglePage type that disallows multiple pages in case you only need one. The page is what is currently rendered in the UI; it contains data related to its contents, such as the layout tree, the meta tree, etc.

The routing is done through UiQueue, more on that later, but here's an example:

// ...
let queue = UiQueue::new();
let page1 = Button::new("Go to page 2").on_click(move || queue.goto(2));
let page2 = Button::new("Go to page 1").on_click(move || queue.goto(1));

let mut ui = UI::new(...)
    .with_page(1, page1)
    .with_page(2, page2)
    .with_queue(queue);

// loop { ui.tick(...) } as in the event example above

Layouts

Layout is a declarative representation of how an element and its children should be sized, positioned, and aligned on screen. The LayoutKind type consists of several layout kinds covering most use cases a UI framework requires:

  • Zero is a stub for an undefined layout; its presence in a layout tree is considered an error.
  • Edge is a childless leaf in the layout tree. It has no properties except the common ones. It can be found in visuals-only widgets such as Bar or Image, where obviously no children exist.
  • Content is used for content-carrying widgets such as text or icons, where only the content size matters. You'll rarely use it; it exists mostly for internal core widgets.
  • Container adds a block model to the Content layout, along with vertical and horizontal alignment properties. You use it with a single child.
  • Flex is the most complex kind of layout. It supports the Container properties and can arrange multiple children along a specified axis, with support for wrapping and per-axis gaps between children.

Layouts are not used directly unless you define your own Widget type; instead, there are layout-specific widgets:

  • Edge: has no common use cases by itself, but you can create styled blocks with it, such as a colored rectangle.
  • Container: the container layout widget can be used to align a single child and to add padding and a border around it.
  • Flex: the only way (for now) to have multiple children in a widget, and the most used widget type. It is more similar to SwiftUI's VStack and HStack than to CSS flexbox. A hypothetical composition is sketched below.
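
For a feel of how this composes (a sketch only: Flex::row, gap, and el are assumed spellings, not confirmed API):

// Hypothetical composition of layout widgets; names partly assumed.
let toolbar = Flex::row([
    Button::new("Ok").el(),
    Button::new("Cancel").el(),
])
.gap(4)   // per-axis gap between children
.fill();  // SizedWidget helper from earlier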