Dragonfly is an open source Python library for scalable Bayesian optimisation.

Bayesian optimisation is used for optimising black-box functions whose evaluations are usually expensive. Beyond vanilla optimisation techniques, Dragonfly provides an array of tools to scale up Bayesian optimisation to expensive, large-scale problems. These include:

  • high-dimensional optimisation: optimising over a large number of variables;
  • parallel evaluations in synchronous or asynchronous settings: conducting multiple evaluations at once;
  • multi-fidelity optimisation: using cheap approximations to speed up the optimisation process;
  • multi-objective optimisation: optimising multiple functions simultaneously.

Dragonfly is compatible with Python 2 (>= 2.7) and Python 3 (>= 3.5) and has been tested on Linux, macOS, and Windows platforms. For documentation, installation, and a getting started guide, see our readthedocs page. For more details, see our paper.

 

Installation

See our readthedocs page for detailed instructions on installing Dragonfly and its dependencies.

Quick Installation: If you have done this kind of thing before, you should be able to install Dragonfly via pip.

$ sudo apt-get install python-dev python3-dev gfortran # On Ubuntu/Debian
$ pip install numpy
$ pip install dragonfly-opt -v

Testing the Installation: You can import Dragonfly in Python to test whether it was installed properly. If you have installed from source, make sure to move to a different directory first to avoid naming conflicts with the source tree.

$ python
>>> from dragonfly import minimise_function
>>> # The first argument below is the function, the second is the domain, and the third is the budget.
>>> min_val, min_pt, history = minimise_function(lambda x: x**4 - x**2 + 0.1*x, [[-10, 10]], 10)
...
>>> min_val, min_pt
(-0.32122746026750953, array([-0.7129672]))

Due to stochasticity in the algorithms, the values you obtain for min_val and min_pt may differ. If you run it for longer (e.g. min_val, min_pt, history = minimise_function(lambda x: x**4 - x**2 + 0.1*x, [[-10, 10]], 100)), you should get more consistent values for the minimum.

If the installation fails or if there are warning messages, see the detailed installation instructions on our readthedocs page.

 

Quick Start

Dragonfly can be used directly from the command line by calling dragonfly-script.py, imported in Python code via the maximise_function function in the main library, or driven in ask-tell mode. To help you get started, we have provided some examples in the examples directory. See our readthedocs getting started pages (command line, Python, Ask-Tell) for examples and use cases.

Command line: Below is an example usage in the command line.

$ cd examples
$ dragonfly-script.py --config synthetic/branin/config.json --options options_files/options_example.txt
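
The config file specifies the optimisation domain, with one entry per variable. The snippet below is a purely illustrative sketch of that shape (the exact keys and values are our assumption; the authoritative file ships with the repository at examples/synthetic/branin/config.json):

{
  "name": "branin",
  "domain": [
    {"name": "x1", "type": "float", "min": -5, "max": 10},
    {"name": "x2", "type": "float", "min": 0, "max": 15}
  ]
}

The options file supplies optimiser settings; see options_files/options_example.txt in the examples directory.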

In Python code: The main APIs for Dragonfly are defined in dragonfly/apis. For their definitions and arguments, see dragonfly/apis/opt.py and dragonfly/apis/moo.py. You can import the main APIs in Python code via:

from dragonfly import minimise_function, maximise_function
func = lambda x: x**4 - x**2 + 0.1*x
domain = [[-10, 10]]
max_capital = 100
min_val, min_pt, history = minimise_function(func, domain, max_capital)
print(min_val, min_pt)
max_val, max_pt, history = maximise_function(lambda x: -func(x), domain, max_capital)
print(max_val, max_pt)

Here, func is the function to be optimised, domain is the domain over which func is to be optimised, and max_capital is the capital available for optimisation. The domain can be specified via a JSON file or in code. See the examples directory for more detailed examples.
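
The multi-objective APIs in dragonfly/apis/moo.py follow the same pattern. Below is a minimal sketch using multiobjective_minimise_functions; the toy objectives and domain are our own, and the call returns an approximation to the Pareto-optimal values and points rather than a single optimum.

from dragonfly import multiobjective_minimise_functions

# Two competing toy objectives over a two-dimensional domain.
func1 = lambda x: x[0]**2 + x[1]**2
func2 = lambda x: (x[0] - 1)**2 + x[1]**2
domain = [[-5, 5], [-5, 5]]
max_capital = 100
# Returns the Pareto-optimal values, the corresponding points, and the history.
pareto_values, pareto_points, history = multiobjective_minimise_functions(
    (func1, func2), domain, max_capital)
print(pareto_values, pareto_points)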

In Ask-Tell Mode: Ask-tell mode gives you more control over your experiments: you supply the results of past evaluations to our API, and it returns a recommendation for the next point to evaluate. See the sketch below, and the readthedocs Ask-Tell page, for more details.
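
The sketch below follows the loop described in the Ask-Tell documentation, using EuclideanFunctionCaller and EuclideanGPBandit with ask_tell_mode=True. The toy objective is ours, and the way we construct the domain (passing the raw bounds directly) is an assumption based on our reading of those docs, so treat this as a starting point rather than a definitive recipe. Note that the bandit maximises the values it is told.

from dragonfly.exd.experiment_caller import EuclideanFunctionCaller
from dragonfly.opt.gp_bandit import EuclideanGPBandit

# Negation of the earlier quartic, so maximising it minimises the original.
objective = lambda x: -(x[0]**4 - x[0]**2 + 0.1*x[0])

# Assumption: the function caller accepts the raw domain bounds.
func_caller = EuclideanFunctionCaller(None, [[-10, 10]])
opt = EuclideanGPBandit(func_caller, ask_tell_mode=True)
opt.initialise()

for _ in range(20):
    x = opt.ask()        # ask Dragonfly for the next point to evaluate
    y = objective(x)     # evaluate it yourself (e.g. run your experiment)
    opt.tell([(x, y)])   # report the observed (point, value) pair back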

For a comprehensive list of use cases, including multi-objective optimisation, multi-fidelity optimisation, neural architecture search, and other optimisation methods (besides Bayesian optimisation), see our readthedocs pages (command line, Python, Ask-Tell).
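
As one such use case, here is a minimal sketch of the multi-fidelity API (minimise_multifidelity_function, defined alongside the other APIs in dragonfly/apis/opt.py). The toy objective, fidelity space, and cost function are our own: the objective takes a fidelity z and a point x, fidel_to_opt is the fidelity at which the optimum is sought, and fidel_cost_func prices an evaluation at a given fidelity.

from dragonfly import minimise_multifidelity_function

def objective(z, x):
    # z[0] in [0, 1] stands in for approximation quality (e.g. training
    # epochs or dataset fraction); the bias vanishes at full fidelity.
    return x[0]**4 - x[0]**2 + 0.1*x[0] + 0.05*(1 - z[0])

fidel_space = [[0, 1]]                   # domain of the fidelity parameter z
domain = [[-10, 10]]                     # domain of the optimisation variable x
fidel_to_opt = [1]                       # seek the optimum at full fidelity
fidel_cost_func = lambda z: 0.1 + z[0]   # low fidelities are cheaper to evaluate
max_capital = 100

min_val, min_pt, history = minimise_multifidelity_function(
    objective, fidel_space, domain, fidel_to_opt, fidel_cost_func, max_capital)
print(min_val, min_pt)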

 

Contributors

Kirthevasan Kandasamy: github, webpage
Karun Raju Vysyaraju: github, linkedin
Anthony Yu: github, linkedin
Willie Neiswanger: github, webpage
Biswajit Paria: github, webpage
Chris Collins: github, webpage

Acknowledgements

Research and development of the methods in this package were funded by DOE grant DESC0011114, NSF grant IIS1563887, the DARPA D3M program, and AFRL.

Citation

If you use any part of this code in your work, please cite our JMLR paper.

@article{JMLR:v21:18-223,
  author  = {Kirthevasan Kandasamy and Karun Raju Vysyaraju and Willie Neiswanger and Biswajit Paria and Christopher R. Collins and Jeff Schneider and Barnabas Poczos and Eric P. Xing},
  title   = {Tuning Hyperparameters without Grad Students: Scalable and Robust Bayesian Optimisation with Dragonfly},
  journal = {Journal of Machine Learning Research},
  year    = {2020},
  volume  = {21},
  number  = {81},
  pages   = {1-27},
  url     = {http://jmlr.org/papers/v21/18-223.html}
}

License

This software is released under the MIT license. For more details, please refer to LICENSE.txt.

For questions, please email kandasamy@cs.cmu.edu.

"Copyright 2018-2019 Kirthevasan Kandasamy"
