This repository has been archived by the owner on Oct 20, 2023. It is now read-only.

Model 3.0: The Context Central Model

Yu Liu edited this page Dec 28, 2018 · 12 revisions

The goals of this newly proposed model are to improve upon Model 1.0: Instance_Dataset_User_Configuration and Model 2.0: Context_Table_Client_Adapter by:

  1. Generalizing the definitions of the data structures by defining the concept of Contexts
  2. Better integrating data from other teams and data sources, e.g. The Blue Alliance
  3. Hiding implementation details (e.g. file structures) from the model definition
  4. Defining a versatile, decentralized programming interface

Context

The Context of a set of data is the scope and state in which the data is relevant for a specific analysis. For example, to find out how some teams are performing at a specific event, the context would be that event, whereas to find out how they performed over the entire season, the context would be the year. Contexts are defined by a set of metrics. For example, they could describe a district, an alliance, a set of scouts, etc. A Context will always have a Data Source, a Version, and a delegated Adapter, which are the key to integrating different sets of data. Given two Contexts, A and B, A is a Supercontext of B if its set of metrics is a strict subset of B's metrics. For example, the Context of a year is always a Supercontext of the events in that year. Conversely, A is a Subcontext of B if B is a Supercontext of A. For a Context, the action of converting itself to a Supercontext or a Subcontext is called Upcasting or Downcasting, respectively.
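The Supercontext relation above can be sketched in a few lines of Python, assuming a Context's metrics are modelled as a mapping of metric name to value. The names and values here are illustrative, not part of the actual model.

```python
# A minimal sketch of the Supercontext relation, assuming metrics are
# modelled as a dict of metric name -> contextual value.

def is_supercontext(a: dict, b: dict) -> bool:
    """Return True if Context A (metrics `a`) is a Supercontext of B.

    A is a Supercontext of B when A's metrics are a strict subset of
    B's metrics: same keys with the same values, and B has more.
    """
    return a.items() < b.items()  # strict (proper) subset comparison

year = {"Year": 2018}
event = {"Year": 2018, "Event": "ONT Science Division"}

print(is_supercontext(year, event))   # True: a year spans its events
print(is_supercontext(event, year))   # False: that is the downcast direction
```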

A Context must:

  1. Provide a set of metrics with their contextual values
  2. Provide a comparison function with any other Context
  3. Provide the set of Tables, with their respective metrics, managed by this Context
  4. Look up and return a Table based on a set of arbitrary metrics
  5. Be able to determine whether an upcast/downcast operation is possible
  6. Create a Downcast Context based on table metrics
  7. Accept an Adapter to upcast from other Contexts
  8. Retain and manage a set of Pipelines connected to this Context
  9. Provide an interface for Pipelines to cache their data
  10. Manage configurations for the Clients of this Context
  11. Manage a set of Context Loaders
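The requirements above suggest an abstract interface. The following is a hypothetical Python sketch of the first few obligations; the class and method names are assumptions for illustration and may differ from the project's real API.

```python
# A hypothetical interface for a Context; names are illustrative only.

from abc import ABC, abstractmethod


class Context(ABC):
    @abstractmethod
    def metrics(self) -> dict:
        """(1) The set of metrics with their contextual values."""

    @abstractmethod
    def compare(self, other: "Context") -> int:
        """(2) Comparison with any other Context."""

    @abstractmethod
    def tables(self) -> dict:
        """(3) The Tables managed by this Context, keyed by their metrics."""

    @abstractmethod
    def lookup(self, metrics: dict):
        """(4) Look up and return a Table for a set of arbitrary metrics."""

    @abstractmethod
    def can_cast(self, target: "Context") -> bool:
        """(5) Whether an upcast/downcast to `target` is possible."""

    @abstractmethod
    def downcast(self, table_metrics: dict) -> "Context":
        """(6) Create a Downcast Context based on table metrics."""
```

A concrete Context would subclass this and also carry its Pipelines, Stash, Client configurations, and Loaders (requirements 7 through 11).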

Context Loader

Loaders determine for the user, upon request, whether other Contexts are available for use. When they are, a Loader is able to return a Pipeline and Adapter for those Contexts. For example, a loader can check whether a USB drive with some data on it is inserted into the computer, or whether The Blue Alliance is accessible. It would then create and return the Pipeline to that Context.

A Loader must:

  1. Belong to a Context and hold a reference to it
  2. Use the Context's metrics and tables to determine what data the Context might want
  3. Check the availability of that data by converting the request into the target format
  4. Be able to return a Pipeline based on the requested data
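As an illustration of these four obligations, here is a minimal Python sketch of a Loader that checks whether a local directory (standing in for an inserted USB drive) contains data files for its Context. All names, and the CSV-file convention, are assumptions for the sketch.

```python
# An illustrative USB-style Loader; directory layout is an assumption.

import os


class UsbLoader:
    def __init__(self, context_metrics: dict, mount_point: str):
        self.context_metrics = context_metrics  # (1) reference to the Context
        self.mount_point = mount_point

    def available(self) -> bool:
        """(3) Check the availability of the data the Context might want."""
        return os.path.isdir(self.mount_point) and any(
            name.endswith(".csv") for name in os.listdir(self.mount_point)
        )

    def make_pipeline(self):
        """(4) Return a Pipeline for the requested data, or None."""
        if not self.available():
            return None
        # Here a real Loader would construct a Pipeline object; the sketch
        # simply returns the list of data files it would stream.
        return [os.path.join(self.mount_point, n)
                for n in sorted(os.listdir(self.mount_point))
                if n.endswith(".csv")]
```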

Context Pipeline

Context pipelines provide the ability to transfer data via some form of streaming. For example, a pipeline for The Blue Alliance is responsible for managing the API key and making requests to the server to fetch data. Another might be used to store the current data on a USB key. A pipeline takes as input what to send or fetch, then performs the operation on request based on the interface of the target Context. The result is cached in the Stash of the Context, where it is available for an Adapter to use. A Pipeline is always unidirectional: it can either fetch data from upstream or push data downstream. Pipelines are essential for the system to be decentralized.
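The fetch direction can be sketched as follows, assuming the Stash is a simple mapping from request to cached result. The request path and data are stand-ins, not real The Blue Alliance responses.

```python
# A minimal sketch of a unidirectional fetch Pipeline that caches its
# result in the Context's Stash; the fetch function would, in a real
# system, hold the API key and make HTTP requests.

class FetchPipeline:
    def __init__(self, fetch_fn, stash: dict):
        self.fetch_fn = fetch_fn   # the upstream interface (e.g. an HTTP call)
        self.stash = stash         # the Context's cache, read by Adapters

    def run(self, request: str):
        """Fetch the requested data and cache it in the Stash."""
        data = self.fetch_fn(request)
        self.stash[request] = data
        return data


# A stand-in for a real The Blue Alliance request:
fake_tba = {"event/2018onsc/teams": ["frc865", "frc1114"]}
stash = {}
pipeline = FetchPipeline(fake_tba.get, stash)
pipeline.run("event/2018onsc/teams")
print(stash)  # the fetched teams are now cached for an Adapter to use
```

A push Pipeline would look the same but write to a downstream interface instead of reading from one, preserving the unidirectional rule.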

A Pipeline must:

  1. TODO

Context Table

Tables are used to represent specific entities of data that are part of a Context. A Table is structured as a two-dimensional data frame with column headers but no specific row index, where the order of both rows and columns is irrelevant. A Context contains a set of Tables, each with a distinct row type and set of columns (i.e. data points). For example, in the Context of a specific FRC event, there may be a Table of the matches in the event, a Table of each team's stats in the event, a Table of the people who were scouting, and a Table of the entries of data (e.g. one row per team per scout per match).
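A minimal sketch of such a Table, assuming rows are mappings keyed by column header so that neither row nor column order matters; the class is illustrative, not the project's implementation.

```python
# A Table as a two-dimensional frame: column headers, no row index.

class Table:
    def __init__(self, columns):
        self.columns = set(columns)  # column order is irrelevant
        self.rows = []               # row order is irrelevant too

    def add_row(self, row: dict):
        """Append a row; it must supply exactly the Table's columns."""
        if set(row) != self.columns:
            raise ValueError("row does not match the Table's columns")
        self.rows.append(row)

    def column(self, name: str) -> list:
        """Extract the values of one column across all rows."""
        return [row[name] for row in self.rows]


matches = Table(["Match", "Alliance", "Team"])
matches.add_row({"Match": 1, "Alliance": "Red", "Team": 865})
matches.add_row({"Match": 1, "Alliance": "Blue", "Team": 1114})
print(matches.column("Team"))  # [865, 1114]
```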

A Table must:

  1. TODO

Context Adapter

Adapters contribute to the actual values of the data, using either existing values in the Tables or another source of input. They can modify the Tables in various ways and are the real "analysis" component of this model. A Context whose data source does not use Tables always has a delegated Adapter to convert the format so that data can be merged together. Data in one Context is invalid in other Contexts unless it is converted with an Adapter to fit the other Context.
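The format-conversion role can be sketched as a small renaming step, assuming the foreign source uses The Blue Alliance-style JSON keys; the mapping itself is an assumption for illustration.

```python
# An illustrative Adapter step that converts rows from a foreign format
# into this Context's column names so the data can be merged.

TBA_TO_LOCAL = {"team_key": "Team", "comp_level": "Match Type"}


def adapt_row(foreign_row: dict) -> dict:
    """Rename foreign keys to the local Context's columns, dropping the rest."""
    return {local: foreign_row[foreign]
            for foreign, local in TBA_TO_LOCAL.items()
            if foreign in foreign_row}


print(adapt_row({"team_key": "frc865", "comp_level": "qm", "extra": 1}))
# {'Team': 'frc865', 'Match Type': 'qm'}
```

A full Adapter would also compute derived values from existing Tables; this sketch only shows the conversion half of the job.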

An Adapter must:

  1. TODO

Data Client

Clients are the accessors of the computed data, presenting it to the user in some way. A Client component does not modify the underlying data structure but can build on top of it. Clients interact with the user and are allowed to save their viewing configuration. However, this configuration cannot be transferred using Pipelines, because its format is arbitrary and specific to each Client.

A Client must:

  1. Provide its save configuration upon request
  2. Accept its save configuration upon start
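These two obligations amount to a save/restore round trip. A hypothetical sketch, assuming the configuration is serialized as JSON and that the configuration keys are illustrative:

```python
# A hypothetical Client that saves and restores its viewing configuration.

import json


class SortedViewClient:
    def __init__(self):
        # Illustrative viewing configuration, not a real schema.
        self.config = {"sort_column": "Team", "descending": False}

    def save_configuration(self) -> str:
        """(1) Provide its save configuration upon request."""
        return json.dumps(self.config)

    def load_configuration(self, saved: str) -> None:
        """(2) Accept its save configuration upon start."""
        self.config = json.loads(saved)


client = SortedViewClient()
client.config["descending"] = True
restored = SortedViewClient()
restored.load_configuration(client.save_configuration())
```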

Data Metrics

Metrics are used to define the unit of each type of data. They may include:

  1. Team
  2. Match
  3. Alliance
  4. Match Type
  5. Driver Station
  6. Scout
  7. Event
  8. Year
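The metric kinds above could be modelled as an enumeration. A sketch using Python's stdlib enum, with names taken directly from the list; how the real model represents metrics is not specified here.

```python
# The metric kinds as an enumeration; values match the list above.

from enum import Enum


class Metric(Enum):
    TEAM = "Team"
    MATCH = "Match"
    ALLIANCE = "Alliance"
    MATCH_TYPE = "Match Type"
    DRIVER_STATION = "Driver Station"
    SCOUT = "Scout"
    EVENT = "Event"
    YEAR = "Year"


# A Context's metric set is then a mapping from metric kind to value,
# e.g. one event in one year:
event_context = {Metric.YEAR: 2018, Metric.EVENT: "onsc"}
```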