Releases · logicalclocks/maggy
Maggy 1.0.0rc0
This is the first release candidate for Maggy's first major release, 1.0.0.
Features
This release contains many new features, which will be documented on maggy.ai.
These include:
- Distribution transparency for distributed training, hyperparameter optimization and ablation studies.
- Distributed training support for PyTorch, including DeepSpeed ZeRO
- Distributed training support for TensorFlow, using MultiWorkerMirroredStrategy (see the sketch after this list)
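For context on the TensorFlow item above, here is a minimal sketch in plain TensorFlow (not Maggy's API) of what MultiWorkerMirroredStrategy looks like without Maggy; the model and shapes are illustrative. Maggy's distribution transparency is meant to keep exactly this boilerplate out of the training function.

```python
import tensorflow as tf

# Plain TensorFlow: MultiWorkerMirroredStrategy replicates the model
# on every worker and all-reduces gradients after each step.
strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    # Variables created inside the scope are mirrored across workers.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )
```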
Release 0.4.2
This release changes the license to Apache 2.0.
Features
- Applies Black code formatting
- Allows access to the optimization direction in the optimizer
- Adds an IDLE message to allow for idle executors (in preparation for Bayesian optimization)
- [ablation] Adds support for Keras custom models
- Makes Searchspace a sorted iterable (see the sketch after this list)
- Adapts to TensorBoard 1.15
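A hypothetical sketch of the Searchspace change above; the hyperparameter names and bounds are made up, and the exact items yielded by iteration are an assumption based on the release note.

```python
from maggy import Searchspace

# Illustrative hyperparameters: name -> (type, bounds).
sp = Searchspace(kernel=("INTEGER", [2, 8]), pool=("INTEGER", [2, 8]))

# As of 0.4.2 a Searchspace is a sorted iterable, so its
# hyperparameters can be traversed in a deterministic order
# (the shape of each item is assumed here):
for hp in sp:
    print(hp)
```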
Bugfixes
- Unpins the numpy version dependency
- Removes ExperimentDriver from the public API
- [ablation] Fixes a TensorFlow pickling issue
Release 0.4.0
Features
- Adapts Maggy to the new Experiments V2 service in Hopsworks 1.1.0
- Adds the TensorBoard HParams plugin
- Versions the Jupyter notebook automatically
- Removes versioned resources
- Returns separate log files per trial
- Improves exception handling
Bugfixes
- Fixes bugs related to internal exceptions and handles exceptions more gracefully by returning a stack trace
Release 0.3.3
Bugfixes
- Fixes a bug when using custom dataset generators in the ablation API
Release 0.3.2
Bugfixes
- Makes defaults coherent
Release 0.3.1
Bugfixes
- Fixes a bug when running single trials, where defaults were not set properly
- Fixes the way exceptions are thrown
Release 0.3.0
Features
- This release makes Maggy ready for Hopsworks 1.0.0
- Adds Ablation Studies
Bugfixes
- Fixes a bug when initialising a custom ASHA experiment
Release 0.2.2
Features
- This release makes Maggy ready for Hopsworks 0.10.0
- Adds a SingleRun optimizer so users can run model training only once with `experiment.lagom(train)` (see the sketch after this list)
- It is now possible to run multiple Maggy experiments from the same YARN application, with progress information and logging
- Using `print` in the training wrapper function will propagate the prints from the Spark executors to Jupyter and display them underneath the Jupyter cell, e.g.:

```
0: Train on 60000 samples, validate on 10000 samples
1: x_train shape: (60000, 28, 28, 1)
```

The `0:` and `1:` prefixes indicate which machine the prints are coming from. This feature should be used with care.
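A minimal sketch of the SingleRun usage named above, assuming the 0.2.x-era API; the training function body and the returned metric are placeholders.

```python
from maggy import experiment

def train():
    # ... build and fit a model here (placeholder) ...
    accuracy = 0.9  # illustrative metric returned to Maggy
    return accuracy

# Without a searchspace or optimizer, the SingleRun optimizer
# executes the training function exactly once.
experiment.lagom(train)
```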
Bugfixes
- Fixes a serialization error when the user function returns a numpy data type
Release 0.2.1
Features
- Adds the ASHA optimizer
- Refactors the Developer API
Bugfixes
- Fixes metric messages with logs being processed twice
- Adds a 6-second grace period to keep Maggy alive until Sparkmagic has polled the last logs
- Changes the trial executor to send logs when a trial is finalized, so the last logs don't get lost
Release 0.2
This release integrates Maggy with Hopsworks.
- Maggy driver registers with Hopsworks
- Maggy driver allows Hopsworks to poll for executor logs
- Maggy driver collects logs from executors
- Users can log in their training function with `reporter.log()` (see the sketch after this list)
- numpy is added as a dependency
- Maggy logs to HopsFS
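A minimal sketch of executor-side logging with `reporter.log()`, assuming the 0.2-era API in which Maggy passes a reporter into the training function; the argument name and loop body are placeholders.

```python
from maggy import experiment

def train(reporter):
    # `reporter` is provided by Maggy on each Spark executor
    # (argument name assumed); messages are collected by the
    # driver and written to HopsFS.
    for epoch in range(3):
        reporter.log("finished epoch {}".format(epoch))

experiment.lagom(train)
```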
Note: This version works only on Hopsworks.