MathOptAI.jl is a JuMP extension for embedding trained AI, machine learning, and statistical learning models into a JuMP optimization model.
MathOptAI.jl is provided under a BSD-3 license as part of the Optimization and Machine Learning Toolbox project, O4806.
See LICENSE.md for details.
Despite the name similarity, this project is not affiliated with OMLT, the Optimization and Machine Learning Toolkit.
Install MathOptAI.jl using the Julia package manager:
import Pkg
Pkg.add("MathOptAI")
Here's an example of using MathOptAI to embed a trained neural network from Flux into a JuMP model. The vector of JuMP variables x is fed as input to the neural network. The output y is a vector of JuMP variables that represents the output layer of the neural network. The formulation object stores the additional variables and constraints that were added to model.
julia> using JuMP, MathOptAI, Flux
julia> predictor = Flux.Chain(
           Flux.Dense(28^2 => 32, Flux.sigmoid),
           Flux.Dense(32 => 10),
           Flux.softmax,
       );
julia> #= Train the Flux model. Code not shown for simplicity =#
julia> model = JuMP.Model();
julia> JuMP.@variable(model, 0 <= x[1:28^2] <= 1);
julia> y, formulation = MathOptAI.add_predictor(model, predictor, x);
julia> y
10-element Vector{VariableRef}:
moai_SoftMax[1]
moai_SoftMax[2]
moai_SoftMax[3]
moai_SoftMax[4]
moai_SoftMax[5]
moai_SoftMax[6]
moai_SoftMax[7]
moai_SoftMax[8]
moai_SoftMax[9]
moai_SoftMax[10]
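Once embedded, the output variables y behave like any other JuMP variables: they can appear in objectives and constraints, and the model can be passed to a solver. A minimal sketch of how the example above might be solved (the choice of Ipopt is an illustrative assumption; because the default formulations of sigmoid and softmax are smooth nonlinear constraints, a nonlinear solver is appropriate):

```julia
julia> using Ipopt

julia> JuMP.set_optimizer(model, Ipopt.Optimizer)

julia> # For example, maximize the predicted probability of the tenth class:
julia> JuMP.@objective(model, Max, y[10]);

julia> JuMP.optimize!(model)
```

After the solve, JuMP.value.(x) recovers an input that the embedded network maps to the optimized output.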
Documentation is available at https://lanl-ansi.github.io/MathOptAI.jl.
For help, questions, comments, and suggestions, please open a GitHub issue.
This project is mainly inspired by two existing projects:
Other works, from which we took less inspiration, include:
The 2024 paper by López-Flores et al. is an excellent summary of the state of the field at the time we started developing MathOptAI.
López-Flores, F.J., Ramírez-Márquez, C., Ponce-Ortega, J.M. (2024). Process Systems Engineering Tools for Optimization of Trained Machine Learning Models: Comparative and Perspective. Industrial & Engineering Chemistry Research, 63(32), 13966-13979. DOI: 10.1021/acs.iecr.4c00632