### What it does

Tests the open source copulas package against real data, by submitting predictions to www.microprediction.org.

### How you use this repository

1. Fork it.
2. Open up this notebook in Colab and run it to generate yourself a write key.
3. Save the key as a GitHub secret called WRITE_KEY (instructions).
4. Click "accept" when GitHub asks if you want to enable GitHub Actions. Go to the Actions tab and you'll see the only action used in this repo (like this one). You should be able to enable it.

That's all. Later, go to www.microprediction.org and plug your write key into the dashboard. You'll see something like this, eventually.

If you are curious about step 2, see instructions for other ways, and a cheesy video explaining that a WRITE_KEY is a Memorable Unique Identifier. One such way is sketched below.
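If you'd rather skip the notebook, here is a minimal sketch of generating a write key locally. It assumes the `microprediction` package's `new_key` helper; the difficulty value is illustrative, and key generation burns CPU, so higher difficulties can take a long time.

```python
# A minimal sketch of generating a write key locally, assuming the
# `microprediction` package; the difficulty value is illustrative.
from microprediction import new_key

write_key = new_key(difficulty=12)  # higher difficulty takes (much) longer
print(write_key)                    # save this as the WRITE_KEY GitHub secret
```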

If you'd rather not fork, just copy fit.py and daily.yml, as that's pretty much all there is to it.

### Do you like fitting multivariate densities or copulas?

Modify your fit.py (similar to the fit.py provided). For example, you can change the choice of copula, or use an entirely different technique. The only important thing is that you spit out 225 "scenarios".
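For orientation, here is a hedged sketch of what a fit.py-style script might look like. It assumes the `microprediction` and `copulas` packages; the stream name and write key are placeholders, and the final submission helper is an assumption, not necessarily the repo's exact call.

```python
# A sketch of a fit.py-style script; the stream name, write key and the
# submission helper are placeholders/assumptions, not the repo's exact code.
import pandas as pd
from copulas.multivariate import GaussianMultivariate
from microprediction import MicroWriter

mw = MicroWriter(write_key='YOUR_WRITE_KEY')      # placeholder write key
name = 'z2~example~70.json'                       # hypothetical z-stream name

# Pull recent lagged z-values in multivariate format
zvalues = mw.get_lagged_zvalues(name=name)

# Fit a Gaussian copula to the historical z-vectors
model = GaussianMultivariate()
model.fit(pd.DataFrame(zvalues))

# Draw exactly 225 scenarios, as required
scenarios = model.sample(225).values.tolist()

# Submission shown schematically; see the repo's fit.py for the real call
mw.submit_zvalues(name=name, zvalues=scenarios)   # assumed helper name
```

Swapping GaussianMultivariate for another copula from the copulas package, or for a different density estimator entirely, leaves the rest of the flow unchanged.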

Here's what's up:

- Anyone can publish live data repeatedly, like this, say, and doing so creates a stream like this one (a sketch of publishing follows this list).
- Some GitHub repos like this one make regular predictions (there are also some algorithms, like this guy, that use long-running processes instead).
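As a rough illustration of the publishing side, here is a hedged sketch using MicroWriter.set; the stream name is a hypothetical placeholder.

```python
# A sketch of publishing one live value, which creates or updates a stream;
# the stream name is a placeholder. Repeated calls build up the history
# that prediction algorithms then compete on.
from microprediction import MicroWriter

mw = MicroWriter(write_key='YOUR_WRITE_KEY')        # placeholder key
mw.set(name='my_example_stream.json', value=3.14)   # one data point
```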

### Why?

Free prediction for all means free bespoke business optimization for all.

- Yes, it really is an API that predicts anything.
- Yes, it also makes it easier to see which R, Julia and Python time series approaches seem to work best, saving you from trying out hundreds of packages from PyPI and GitHub, any one of which might be of uncertain quality.

### More background if you want it...

Here's a first glimpse for the uninitiated, some categories of business application, some remarks on why microprediction is synonymous with AI (due to the possibility of value function prediction), and a straightforward plausibility argument for why an open source, openly networked collection of algorithms, perfectly capable of managing each other, will sooner or later eclipse all other modes of producing predictions. To help get this idea off the ground, there are some ongoing competitions and developer incentives.

### Video tutorials...

Video tutorials are available at https://www.microprediction.com/python-1 to help you get started.

### Presentations

Presentations at Rutgers, MIT and elsewhere can be found in the presentations repo. A book on the topic will be published by MIT Press in 2021. There are links to video presentations in some of the blog articles.

### This repository

Now, back to this repo.

- It's minimalist and simple.
- It shows you how to make predictions using GitHub Actions.
- It fits z-streams only.

That's all.

### Aside: What are z-streams?

Glad you asked. See An Introduction to Z-Streams or the microprediction frequently asked questions. Put simply, some seemingly univariate time series, such as this one, are really multivariate implied copulas. You can retrieve them in multivariate format using the .get_lagged_copulas or .get_lagged_zvalues methods of the MicroReader, as sketched below.
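For instance, here is a minimal sketch of pulling a z-stream in multivariate form; only the two MicroReader methods named above come from this README, and the stream name is a placeholder.

```python
# Peek at a z-stream in multivariate form; the stream name is a placeholder.
from microprediction import MicroReader

mr = MicroReader()
name = 'z2~example~70.json'                  # hypothetical z-stream name

zvalues = mr.get_lagged_zvalues(name=name)   # list of lagged z-vectors
copulas = mr.get_lagged_copulas(name=name)   # same data, as percentiles
print(len(zvalues), len(zvalues[0]) if zvalues else 0)   # rows, dimension
```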

### Aside: Why does it only fit z-streams and not the other ones?

The z-stream distributions don't change as quickly as the values themselves, so a scheduled daily GitHub Action is frequent enough to keep the submitted scenarios relevant.

### Install

If you grepped for 'install'... this repo isn't intended to be used as a package.

Go back to the top of this README.