Releases · thieu1995/mealpy
v2.1.2
Update
- Some algorithms (for example, the Sparrow Search Algorithm) have been updated in the GitHub release but not on PyPI.
- I am trying to synchronize the versions on GitHub and PyPI by deleting the vers
- Add examples for applications of mealpy, such as (a minimal sketch follows this list):
- Tuning the hyper-parameters of a neural network
- Replacing the Gradient Descent optimizer when training a neural network
- Tuning the hyper-parameters of other models such as SVM,...
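The examples folder in the repository holds the full workflows; as a rough illustration of the hyper-parameter-tuning idea, the sketch below wraps an SVM's cross-validation error in a fitness function and hands it to an optimizer. The problem-dictionary keys ("fit_func", "lb", "ub", "minmax") and the constructor/solve() placement are assumptions that changed across 2.x releases, and the dataset, search ranges, and PSO settings are illustrative only, not the repo's exact example code.

```python
# Rough sketch (not the repo's exact example): tune SVM hyper-parameters with mealpy.
# The problem-dict keys and the constructor/solve() signatures are assumptions that
# changed across 2.x releases, so check examples/ of the installed version.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from mealpy.swarm_based import PSO

X, y = load_iris(return_X_y=True)

def fitness_function(solution):
    # Decode the continuous position into SVM hyper-parameters (log10 scale).
    C, gamma = 10.0 ** solution[0], 10.0 ** solution[1]
    score = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()
    return 1.0 - score                      # minimize the cross-validation error

problem = {
    "fit_func": fitness_function,
    "lb": [-3, -4],                         # log10(C), log10(gamma) lower bounds
    "ub": [3, 1],                           # log10(C), log10(gamma) upper bounds
    "minmax": "min",
}

model = PSO.BasePSO(problem, epoch=50, pop_size=30)     # illustrative settings
best_position, best_fitness = model.solve()
print("Best C:", 10.0 ** best_position[0], "Best gamma:", 10.0 ** best_position[1])
```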
v2.1.1
Update
- Replace all .copy() calls with deepcopy() from the copy module. Shallow copies cause problems with nested lists, especially when copying a population whose agents store their positions as nested lists (see the sketch at the end of this release note).
- Add the Knapsack Problem example: examples/applications/discrete-problems/knapsack-problem.py
- Add the Linear Regression example with Pytorch: examples/applications/pytorch/linear_regression.py
- Add tutorial videos "How to use Mealpy library" to README.md
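For context on the deepcopy change above, the snippet below is a plain-Python illustration (not mealpy code) of why a shallow copy is unsafe when an agent stores its position as a nested list: the copy shares the inner list with the original, so modifying one silently modifies the other.

```python
from copy import deepcopy

# A toy "agent": [position_vector, fitness]
agent = [[0.1, 0.2, 0.3], 5.0]

shallow = agent.copy()       # shallow copy: the inner position list is shared
shallow[0][0] = 99.0
print(agent[0][0])           # prints 99.0: the original agent's position was changed too

agent = [[0.1, 0.2, 0.3], 5.0]
deep = deepcopy(agent)       # deep copy: fully independent nested structure
deep[0][0] = 99.0
print(agent[0][0])           # prints 0.1: the original agent is untouched
```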
v2.1.0
Change models
- Move all parallel functions to the Optimizer class
- Remove unused methods in the Optimizer class
- Update all algorithm models to the same code style as the previous version
- Restructure some complex algorithms, including BFO and CRO.
Change others
- examples: Update examples for all new algorithms
- history: Update history of MHAs
- parallel: Add comments on the parallel and sequential modes
- Add code-of-conduct
- Add the complete example: examples/example_full_v210.py
v2.0.0
Version 2.0.0
Change models
- Update the entire library around the Optimizer class:
- Add class Problem and class Termination
- Add 3 training modes (sequential, thread, and process); see the sketch after this list
- Add visualization charts (a plotting sketch appears at the end of this release note):
- Global fitness value after generations
- Local fitness value after generations
- Global Objectives chart (For multi-objective functions)
- Local Objective chart (For multi-objective functions)
- The Diversity of population chart
- The Exploration versus Exploitation chart
- The Running time chart for each iteration (epoch / generation)
- The Trajectory of some agents after generations
- My batch-size idea is removed due to the parallel training mode
- Users can define the Stopping Condition based on (see the sketch after this list):
- Epoch (Generation / Iteration) - default
- Function Evaluation
- Early Stopping
- Time-bound (The running time for a single algorithm for a single task)
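As a rough illustration of the training modes and stopping conditions listed above, the sketch below runs an optimizer in thread mode with a function-evaluation budget. The keyword names ("mode", "termination") and the termination-mode codes are assumptions that varied across 2.x releases; treat this as a sketch, not the exact 2.0.0 API.

```python
# Sketch only: keyword names and termination codes are assumptions for this release.
import numpy as np
from mealpy.swarm_based import PSO

problem = {
    "fit_func": lambda solution: np.sum(solution ** 2),   # sphere function
    "lb": [-10] * 20,
    "ub": [10] * 20,
    "minmax": "min",
}

# Stop after a budget of function evaluations instead of the default epoch count;
# other assumed codes cover early stopping and a time bound in seconds.
termination = {"mode": "FE", "quantity": 20000}

model = PSO.BasePSO(problem, epoch=1000, pop_size=50, termination=termination)
best_position, best_fitness = model.solve(mode="thread")   # "sequential", "thread" or "process"
```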
Change others
- examples: Update examples for all new algorithms
- history: Update history of MHAs
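Continuing the sketch above, the charts listed under "Add visualization charts" can be exported from the optimizer's history object after solve() returns. The save_* method names below mirror the chart names in this release note but should be treated as assumptions; check the documentation of the installed version.

```python
# Continues the solve() sketch above; "model" is the solved optimizer.
# Method names are assumptions mirroring the chart list in this release note.
model.history.save_global_best_fitness_chart(filename="global_best_fitness")
model.history.save_local_best_fitness_chart(filename="local_best_fitness")
model.history.save_global_objectives_chart(filename="global_objectives")   # multi-objective
model.history.save_local_objectives_chart(filename="local_objectives")     # multi-objective
model.history.save_diversity_chart(filename="diversity")
model.history.save_exploration_exploitation_chart(filename="exploration_exploitation")
model.history.save_runtime_chart(filename="runtime")
model.history.save_trajectory_chart(filename="trajectory")
```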
v1.2.2
Change models
- Add Raven Roosting Optimization (RRO) and its variants to the Dummy group
- OriginalRRO: The original version of RRO
- IRRO: The improved version of RRO
- BaseRRO: My developed version (this is the version that works)
- Add some of the newest algorithms to the library
- Arithmetic Optimization Algorithm (AOA) to Math-based group
- OriginalAOA: The original version of AOA
- Aquila Optimizer (AO) to Swarm-based group
- OriginalAO: The original version of AO
- Archimedes Optimization Algorithm (ArchOA) to Physics-based group
- OriginalArchOA: The original version of ArchOA
Change others
- examples: Update examples for all new algorithms
- history: Update history of MHAs
v1.2.1
- Add Coyote Optimization Algorithm (COA) to Swarm-based group
- Update the code of LCBO and MLCO
- Add variant versions of (see the sketch after this list):
- WOA: Hybrid Improved WOA
- DE:
- SADE: Self-Adaptive DE
- JADE: Adaptive DE with Optional External Archive
- SHADE: Success-History Based Parameter Adaptation DE
- LSHADE: Linear Population Size Reduction for SHADE
- PSO: Comprehensive Learning PSO (CL-PSO)
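As a rough illustration of selecting one of the new DE variants, the sketch below uses the 1.x-era interface (positional obj_func/lb/ub/verbose/epoch/pop_size construction and a train() method); both that interface and the variant class name are assumptions inferred from the list above, so check the examples/ folder of this tag for the exact usage.

```python
# Hypothetical sketch: the class name and the 1.x-era constructor/train() interface are assumptions.
import numpy as np
from mealpy.evolutionary_based.DE import SHADE   # assumed class name for the SHADE variant

def obj_func(solution):
    return np.sum(solution ** 2)                 # sphere function to minimize

model = SHADE(obj_func, lb=[-10] * 30, ub=[10] * 30, verbose=True, epoch=100, pop_size=50)
best_position, best_fitness, loss_history = model.train()
```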
Change others
- examples: Update all the examples based on each algorithm's inputs
v1.2.0
Change models
- Fix the dimension-reduction bug in FOA
- Update the Firefly Algorithm for better runtime performance
- Add Hunger Games Search (HGS) to the swarm-based group
- Add Cuckoo Search Algorithm (CSA) to the swarm-based group
- Replace the Root.__init__() call with super().__init__() in all algorithms.
Change others
- history: Update new algorithms
- examples: Update all the examples based on each algorithm's inputs
v1.1.1-alpha
- Fix the dimension-reduction bug in FOA
- Update the Firefly Algorithm for better runtime performance
- Add Hunger Games Search (HGS) to the swarm-based group
v1.1.0
Version 1.1.0
Change models
- Update the way hyper-parameters are passed to the root.py file (big change)
- Update the hyper-parameters of all available algorithms.
- Fix all division-by-zero errors in some algorithms.
Change others
- examples: Update all the examples of all algorithms
v1.0.5
Change models
- System-based group added:
- Water Cycle Algorithm (WCA)
- Human-based group added:
- Imperialist Competitive Algorithm (ICA)
- Culture Algorithm (CA)
- Swarm-based group added:
- Salp Swarm Optimization (SalpSO)
- Dragonfly Optimization (DO)
- Firefly Algorithm (FA)
- Bees Algorithm (Standard and Probabilistic versions)
- Ant Colony Optimization (ACO) for continuous domain
- Math-based group:
- Add Hill Climbing (HC)
- Physics-based group:
- Add Simulated Annealing (SA)
Change others
- models_history.csv: Update history of meta-heuristic algorithms
- examples: Add examples for all of the algorithms added above.