diff --git a/docs/source/chapt_surrogates/gradients.rst b/docs/source/chapt_surrogates/gradients.rst
new file mode 100644
index 000000000..5bc2d25f9
--- /dev/null
+++ b/docs/source/chapt_surrogates/gradients.rst
@@ -0,0 +1,83 @@
+.. _gengrad:
+
+Gradient Generation to Support Gradient-Enhanced Neural Networks
+================================================================
+
+Neural networks are useful when multivariate process data is available and
+the mathematical functions describing the variable relationships are unknown.
+Training deep neural networks is most efficient when samples of the variable
+derivatives, or gradients, are collected simultaneously with the process data.
+However, gradient data is often unavailable unless the physics of the system
+are known in advance, as in fluid dynamics problems where the outputs are
+known physical properties.
+
+When measured gradients are unavailable, they may be estimated numerically
+from the process data alone. The gradient generation tool described below
+requires a Comma-Separated Value (CSV) file containing process samples (rows),
+with inputs in the leftmost columns and outputs in the rightmost columns.
+Multiple outputs are supported as long as they occupy the rightmost columns;
+the variable columns may have string (text) headings, or the data may start
+in row 1. The method produces a CSV file for each output variable containing
+gradients with respect to each input variable (columns), for each sample
+point (rows). After navigating to the FOQUS directory
+*examples/other_files/ML_AI_Plugin*, the code below sets up and calls the
+gradient generation method on the example dataset
+*MEA_carbon_capture_dataset_mimo.csv*:
+
+.. code:: python
+
+    # required imports
+    >>> import pandas as pd
+    >>> import numpy as np
+    >>> from generate_gradient_data import generate_gradients
+    >>>
+    >>> data = pd.read_csv(r"MEA_carbon_capture_dataset_mimo.csv")  # get dataset
+    >>> data_array = np.array(data, ndmin=2)  # convert to NumPy array
+    >>> n_x = 6  # we have 6 input variables, in the leftmost 6 columns
+
+    >>> gradients = generate_gradients(
+    ...     xy_data=data_array,
+    ...     n_x=n_x,
+    ...     show_plots=False,  # flag to plot regression results during gradient training
+    ...     optimize_training=True,  # try many regression settings and pick the best result
+    ...     use_simple_diff=True  # use simple partials instead of the chain rule formula; defaults to False if not passed
+    ... )
+    >>> print("Gradient generation complete.")
+
+    >>> for output in range(len(gradients)):  # save each gradient array to a CSV file
+    ...     pd.DataFrame(gradients[output]).to_csv("gradients_output" + str(output) + ".csv")
+    ...     print("Gradients for output ", str(output), " written to gradients_output" + str(output) + ".csv")
+
+Internally, the gradient generation method automatically executes a series of
+actions on the dataset:
+
+1. Import process data of size *(m, n_x + n_y)*, where *m* is the number of
+   sample rows, *n_x* is the number of input columns and *n_y* is the number
+   of output columns. Given *n_x*, the data is split into an input array *X*
+   and an output array *Y*.
+
+2. For each input *xi* and each output *yj*, estimate the gradient using a
+   multivariate chain rule approximation. For example, the gradient of *y1*
+   with respect to *x1* is calculated at each point as:
+
+   :math:`\frac{Dy_1}{Dx_1} = \frac{dy_1}{dx_1} \frac{dx_1}{dx_1} + \frac{dy_1}{dx_2} \frac{dx_2}{dx_1} + \frac{dy_1}{dx_3} \frac{dx_3}{dx_1} + ...`
+
+   where *D/D* denotes a total derivative and *d/d* denotes a partial
+   derivative at each sample point. *y1*, *x1*, *x2*, *x3*, and so on are
+   vectors with one value per sample point *m*, and iterating through the
+   dataset produces the gradients of each output with respect to each input
+   at each sample point. The partial derivatives are calculated by simple
+   finite difference, for example:
+
+   :math:`\frac{dy_1}{dx_1} (m_{1.5}) = \frac{y_1 (m_2) - y_1 (m_1)}{x_1 (m_2) - x_1 (m_1)}`
+
+   where *m_1.5* is the midpoint between sample points *m_1* and *m_2*. As a
+   result, this scheme calculates gradients at the midpoints between samples,
+   not at the sample points themselves.
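+   For intuition, this midpoint scheme reduces to the following minimal
+   sketch for a single input and a single output; the arrays and values
+   below are hypothetical and independent of the tool:
+
+   .. code:: python
+
+      # toy example of midpoint finite differences (values are made up)
+      >>> import numpy as np
+      >>> x = np.array([1.0, 2.0, 4.0])  # three samples of one input
+      >>> y = np.array([3.0, 5.0, 13.0])  # output values at those samples
+      >>> (x[1:] + x[:-1]) / 2  # midpoints where gradients are estimated
+      array([1.5, 3. ])
+      >>> np.diff(y) / np.diff(x)  # gradient estimates at the midpoints
+      array([2., 4.])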
+3. Train an MLP model on the calculated midpoint and midpoint-gradient
+   values. After normalizing the data via linear scaling (see
+   :ref:`mlaiplugin.datanorm`), the algorithm trains a small neural network
+   model to generate gradient data for the actual sample points. Passing the
+   argument *optimize_training=True* will train candidate models using the
+   optimizers *Adam* or *RMSProp*, with activation functions *ReLU* or
+   *Sigmoid* on the hidden layers, a *Linear* or *ReLU* activation function
+   on the output layer, and *2* or *8* hidden layers with *6* or *12* neurons
+   per hidden layer. The algorithm compares the mean-squared-error (MSE) loss
+   of each candidate model and uses the model with the smallest error to
+   predict the sample gradients.
+
+4. Predict the gradients at each sample point from the regressed model. This
+   produces *n_y* arrays, each of size *(m, n_x)*, the same size as the
+   original input array *X*.
+
+5. Collect the predicted gradient arrays into a list of length *n_y*, with
+   one *(m, n_x)* array per output. This list is the single object returned
+   by the gradient generation method.
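+If a single three-dimensional array is preferred over the returned list, the
+per-output arrays may be stacked after the call. The short sketch below is
+optional post-processing, assuming the *gradients* list and NumPy import from
+the example above:
+
+.. code:: python
+
+    >>> gradient_array = np.stack(gradients, axis=-1)  # size (m, n_x, n_y)
+    >>> gradient_array.shape  # (102, 6, 2) for this example dataset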
diff --git a/docs/source/chapt_surrogates/index.rst b/docs/source/chapt_surrogates/index.rst
index 1e9bc9eb6..2768b8ff1 100644
--- a/docs/source/chapt_surrogates/index.rst
+++ b/docs/source/chapt_surrogates/index.rst
@@ -7,6 +7,7 @@ Contents
 .. toctree::
    :maxdepth: 2
 
+   gradients
    mlaiplugin
    reference
    tutorial/index
diff --git a/docs/source/chapt_surrogates/mlaiplugin.rst b/docs/source/chapt_surrogates/mlaiplugin.rst
index 6876d4d38..eefeaa00f 100644
--- a/docs/source/chapt_surrogates/mlaiplugin.rst
+++ b/docs/source/chapt_surrogates/mlaiplugin.rst
@@ -1,3 +1,5 @@
+.. _mlaiplugin:
+
 Machine Learning & Artificial Intelligence Flowsheet Model Plugins
 ==================================================================
 
@@ -99,7 +101,7 @@ Currently, FOQUS supports the following custom attributes:
   bounds for each output variable (default: (0, 1E5))
 - *normalized* – Boolean flag for whether the user is passing a normalized
   neural network model; to use this flag, users must train their models with
-  data normalized according to a specifc scaling form and add all input and
+  data normalized according to a specific scaling form and add all input and
   output bounds custom attributes. The section below details scaling options.
 - *normalization_form* - string flag required when *normalization* is *True*
   indicating a scaling option for FOQUS to automatically scale flowsheet-level
@@ -108,6 +110,8 @@ Currently, FOQUS supports the following custom attributes:
 - *normalization_function* - optional string argument that is required when a
   'Custom' *normalization_form* is used. The section below details scaling
   options.
 
+.. _mlaiplugin.datanorm:
+
 Data Normalization For Neural Network Models
 --------------------------------------------
diff --git a/examples/other_files/ML_AI_Plugin/generate_gradient_data.py b/examples/other_files/ML_AI_Plugin/generate_gradient_data.py
new file mode 100644
index 000000000..59ae6693b
--- /dev/null
+++ b/examples/other_files/ML_AI_Plugin/generate_gradient_data.py
@@ -0,0 +1,363 @@
+#################################################################################
+# FOQUS Copyright (c) 2012 - 2023, by the software owners: Oak Ridge Institute
+# for Science and Education (ORISE), TRIAD National Security, LLC., Lawrence
+# Livermore National Security, LLC., The Regents of the University of
+# California, through Lawrence Berkeley National Laboratory, Battelle Memorial
+# Institute, Pacific Northwest Division through Pacific Northwest National
+# Laboratory, Carnegie Mellon University, West Virginia University, Boston
+# University, the Trustees of Princeton University, The University of Texas at
+# Austin, URS Energy & Construction, Inc., et al. All rights reserved.
+#
+# Please see the file LICENSE.md for full copyright and license information,
+# respectively. This file is also available online at the URL
+# "https://github.com/CCSI-Toolset/FOQUS".
+#################################################################################
+
+# Authors: Brayden Gess, Brandon Paul
+
+import numpy as np
+import pandas as pd
+import matplotlib.pyplot as plt
+
+# Data preprocessing
+from sklearn.preprocessing import MinMaxScaler
+from sklearn.model_selection import train_test_split
+
+# Neural Net modules
+import tensorflow as tf
+from tensorflow.keras import Input
+from tensorflow.keras.models import Model
+from tensorflow.keras.layers import Dense
+from tensorflow.keras.callbacks import EarlyStopping
+
+import os
+import random as rn
+
+# set seed values for reproducibility
+os.environ["PYTHONHASHSEED"] = "0"
+os.environ[
+    "CUDA_VISIBLE_DEVICES"
+] = ""  # changing "" to "0" or "-1" may solve import issues
+np.random.seed(46)
+rn.seed(1342)
+tf.random.set_seed(62)
+
+
+def finite_difference(m1, m2, y1, y2, n_x, use_simple_diff=False):
+    """
+    Calculate the first-order gradient between provided sample points m1 and
+    m2, where each point is assumed to be a vector with one or more input
+    variables x and exactly one output variable y. y1 is the value of y at
+    m1, and y2 is the value of y at m2.
+
+    The total gradient is calculated via chain rule assuming a multivariate
+    function y(x1, x2, x3, ...). In the notation below, D/D denotes a total
+    derivative and d/d denotes a partial derivative. Total derivatives are
+    functions of all (x1, x2, x3, ...) whereas partial derivatives are
+    functions of one input (e.g. x1) holding (x2, x3, ...) constant:
+
+    Dy/Dx1 = (dy/dx1)(dx1/dx1) + (dy/dx2)(dx2/dx1) + (dy/dx3)(dx3/dx1) + ...
+
+    Note that (dx1/dx1) = 1.
+    The partial derivatives dv2/dv1 are estimated between sample points m1
+    and m2 as:
+
+    dv2/dv1 at (m1+m2)/2 = [v2 at m2 - v2 at m1]/[v1 at m2 - v1 at m1]
+
+    The method assumes that m1 is the first point and m2 is the second point,
+    and returns a vector dy_dm that is the same length as m1 and m2; m1 and
+    m2 must be the same length. y1 and y2 must be float or integer values.
+    """
+
+    def diff(y2, y1, x2, x1):
+        """
+        Calculate the first-order derivative of y w.r.t. x.
+        """
+        dv2_dv1 = (y2 - y1) / (x2 - x1)
+
+        return dv2_dv1
+
+    mid_m = [None] * n_x  # initialize midpoint vector between m1 and m2
+    dy_dm = [None] * n_x  # initialize dy vector, this is dy_dm(midpoints)
+
+    for i in range(n_x):  # for each input xi
+        if use_simple_diff:
+            dy_dm[i] = diff(y2, y1, m2[i], m1[i])
+        else:  # use chain rule
+            dy_dm[i] = sum(
+                diff(y2, y1, m2[j], m1[j])  # dy/dxj
+                * diff(m2[j], m1[j], m2[i], m1[i])  # dxj/dxi
+                for j in range(n_x)
+            )  # sum over each input xj
+
+        mid_m[i] = (m1[i] + m2[i]) / 2  # midpoint, matching the docstring above
+
+    return mid_m, dy_dm
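+
+# Illustrative check of finite_difference with hypothetical values (not part
+# of the module's workflow): for m1 = [0.0, 0.0], m2 = [1.0, 2.0], y1 = 0.0
+# and y2 = 3.0 with use_simple_diff=True, the function returns the midpoint
+# [0.5, 1.0] and the gradient estimates
+# [(3 - 0)/(1 - 0), (3 - 0)/(2 - 0)] = [3.0, 1.5].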
+
+
+def predict_gradients(
+    midpoints,
+    gradients_midpoints,
+    x,
+    n_m,
+    n_x,
+    show_plots=True,
+    optimize_training=False,
+):
+    """
+    Train an MLP regression model, with data normalization, on the gradients
+    at the midpoints to predict the gradients at the sample points.
+
+    Setting random_state to an integer and shuffle to False, along with the
+    fixed seeds in the import section at the top of this file, will ensure
+    reproducible results each time the file is run. However, calling the model
+    training multiple times on the same data in the same file run will produce
+    different results due to randomness in the random_state instance that is
+    generated. Therefore, the training is performed for a preset list of model
+    settings and the best option is selected.
+    """
+    # split into X_train and X_test
+    # always split into X_train, X_test first, THEN apply the min-max scaler
+    print("Splitting data into training and test sets...")
+    X_train, X_test, y_train, y_test = train_test_split(
+        midpoints,
+        gradients_midpoints,
+        test_size=0.2,
+        random_state=42,  # for reproducibility
+        shuffle=False,  # for reproducibility
+    )
+    print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
+
+    # use min-max scaler
+    print("Normalizing data...")
+    min_max_scaler = MinMaxScaler()
+    X_train = min_max_scaler.fit_transform(X_train)
+    X_test = min_max_scaler.transform(X_test)
+
+    print("Training gradient prediction model...")
+    best_loss = 1e30  # very large initial value, so the first model always improves on it
+    best_model = None
+    best_settings = None
+    progress = 0
+
+    if optimize_training:
+        optimizers = ["Adam", "rmsprop"]
+        activations = ["relu", "sigmoid"]
+        act_outs = ["linear", "relu"]
+        num_neurons = [6, 12]
+        num_hidden_layers = [2, 8]
+    else:
+        optimizers = [
+            "Adam",
+        ]
+        activations = [
+            "relu",
+        ]
+        act_outs = [
+            "linear",
+        ]
+        num_neurons = [
+            6,
+        ]
+        num_hidden_layers = [
+            2,
+        ]
+
+    for optimizer in optimizers:
+        for activation in activations:
+            for act_out in act_outs:
+                for neuron in num_neurons:
+                    for num_hidden_layer in num_hidden_layers:
+                        progress += 1
+                        if optimize_training:
+                            print(
+                                "Trying ",
+                                optimizer,
+                                "solver with ",
+                                activation,
+                                "on hidden nodes, ",
+                                act_out,
+                                "on output node with ",
+                                neuron,
+                                "neurons per node and ",
+                                num_hidden_layer,
+                                "hidden layers",
+                            )
+                        inputs = Input(
+                            shape=(X_train.shape[1],)
+                        )  # input node, layer for x1, x2, ...
+                        h = Dense(neuron, activation=activation)(inputs)
+                        for num in range(num_hidden_layer - 1):
+                            # the first hidden layer is built above
+                            h = Dense(neuron, activation=activation)(h)
+                        outputs = Dense(n_x, activation=act_out)(
+                            h
+                        )  # output node, layer for dy/dx1, dy/dx2, ...
+                        model = Model(inputs=inputs, outputs=outputs)
+                        # model.summary()  # uncomment to inspect the model
+
+                        # compile the model
+                        model.compile(optimizer=optimizer, loss="mse", metrics=["mae"])
+
+                        # early stopping callback
+                        es = EarlyStopping(
+                            monitor="val_loss",
+                            mode="min",
+                            patience=50,
+                            restore_best_weights=True,
+                        )
+
+                        # fit the model and store the history
+                        # to inspect the learning curves later
+                        history = model.fit(
+                            X_train,
+                            y_train,
+                            validation_data=(X_test, y_test),
+                            callbacks=[es],
+                            epochs=100,
+                            batch_size=50,
+                            verbose=0,
+                        )
+                        if len(history.history["loss"]) == 100:
+                            print("Successfully completed, 100 epochs run.")
+                        else:
+                            print(
+                                "Validation loss stopped improving after ",
+                                len(history.history["loss"]),
+                                "epochs. Successfully completed after early stopping.",
+                            )
+                        print("Loss: ", sum(history.history["loss"]))
+                        if optimize_training:
+                            # 32 = total number of setting combinations tried
+                            print("Progress: ", 100 * progress / 32, "%")
+
+                        if sum(history.history["loss"]) < best_loss:
+                            best_loss = sum(history.history["loss"])
+                            best_model = model
+                            best_history = history
+                            best_settings = [
+                                optimizer,
+                                activation,
+                                act_out,
+                                neuron,
+                                num_hidden_layer,
+                            ]
+
+    if optimize_training:
+        print("The best settings are: ", best_settings)
+
+    if show_plots:
+        history_dict = best_history.history
+        loss_values = history_dict["loss"]  # training loss per epoch
+        val_loss_values = history_dict["val_loss"]  # validation loss per epoch
+        epochs = range(1, len(loss_values) + 1)  # x axis, number of epochs
+        plt.plot(epochs, loss_values, "bo", label="Training loss")
+        plt.plot(epochs, val_loss_values, "orange", label="Validation loss")
+        plt.title("Training and validation loss")
+        plt.xlabel("Epochs")
+        plt.ylabel("Loss")
+        plt.legend()
+        plt.show()
+
+    gradients = best_model.predict(x)  # predict against original sample points
+
+    return gradients
+
+
+def generate_gradients(
+    xy_data, n_x, show_plots=True, optimize_training=False, use_simple_diff=False
+):
+    """
+    This method implements finite difference approximation and NN regression
+    to estimate the first-order derivatives of a given dataset with columns
+    (x1, x2, ...., xN, y1, y2, ..., yM) where N is the number of input
+    variables and M is the number of output variables. The method takes an
+    array of size (m, n_x + n_y) where m is the number of samples, n_x is the
+    number of input variables, and n_y is the number of output variables. The
+    method returns a list of n_y arrays, each of size (m, n_x), where rows
+    span the samples and columns span the gradients dy/dx for each input x.
+
+    For example, passing an array with 100 samples, 8 inputs and 2 outputs
+    will return a list of two (100, 8) arrays; the first contains all dy1/dx
+    and the second contains all dy2/dx.
+
+    The workflow of this method is as follows:
+    1. Import xy data in array of size (m, n_x + n_y) and split into x, y
+    2. Generate dy in n_y arrays of size (m-1, n_x) which correspond to
+    points between samples
+    3. Normalize x, dy on [0, 1] and train MLP model dy(x) for each dy
+    4. Predict dy(x) for m samples from xy data to generate n_y arrays of
+    size (m, n_x) which correspond to sample points
+    5. Collect the predicted gradients into a list of n_y arrays, each of
+    size (m, n_x)
+    """
+
+    # split data into inputs and outputs
+    x = xy_data[:, :n_x]  # there are n_x input variables/columns
+    y = xy_data[:, n_x:]  # the rest are output variables/columns
+    n_y = np.shape(y)[1]  # save number of outputs
+    n_m = np.shape(y)[0]  # save number of samples
+
+    gradients = []  # empty list to hold gradient arrays for multiple outputs
+
+    for output in range(n_y):
+        print("Generating gradients for output ", output, ":")
+        # estimate first-order gradients using finite difference approximation
+        # this will account for all input variables, but will be for the midpoints
+        # between the sample points, i.e. len(y) - len(dy_midpoints) = 1.
+        # in both midpoints and gradients_midpoints, each column corresponds to an
+        # input variable xi and each row corresponds to a point between two samples
+        midpoints = np.empty((n_m - 1, n_x))
+        gradients_midpoints = np.empty((n_m - 1, n_x))
+
+        # get midpoint gradients for one pair of samples at a time and save
+        for m in range(n_m - 1):  # we have (n_m - 1) adjacent sample pairs
+            midpoints[m], gradients_midpoints[m] = finite_difference(
+                m1=x[m, :],
+                m2=x[m + 1, :],
+                y1=y[m][output],  # y[m] is the row of outputs for sample m
+                y2=y[m + 1][output],  # select the current output column
+                n_x=n_x,
+                use_simple_diff=use_simple_diff,
+            )
+            print("Midpoint gradient ", m + 1, " of ", n_m - 1, " generated.")
+        print("Midpoint gradient generation complete.")
+        print()
+
+        # leverage NN regression to predict gradients at sample points
+        gradients.append(
+            predict_gradients(
+                midpoints=midpoints,
+                gradients_midpoints=gradients_midpoints,
+                x=x,
+                n_m=n_m,
+                n_x=n_x,
+                show_plots=show_plots,
+                optimize_training=optimize_training,
+            )
+        )
+
+    return gradients
+
+
+if __name__ == "__main__":
+    data = pd.read_csv(r"MEA_carbon_capture_dataset_mimo.csv")
+    data_array = np.array(data, ndmin=2)
+    n_x = 6
+
+    gradients = generate_gradients(
+        xy_data=data_array,
+        n_x=n_x,
+        show_plots=False,
+        optimize_training=True,
+        use_simple_diff=True,
+    )
+    print("Gradient generation complete.")
+
+    for output in range(len(gradients)):
+        pd.DataFrame(gradients[output]).to_csv(
+            "gradients_output" + str(output) + ".csv"
+        )
+        print(
+            "Gradients for output ",
+            str(output),
+            " written to gradients_output" + str(output) + ".csv",
+        )
diff --git a/examples/other_files/ML_AI_Plugin/gradients_output0.csv b/examples/other_files/ML_AI_Plugin/gradients_output0.csv
new file mode 100644
index 000000000..6cbbf41ab
--- /dev/null
+++ b/examples/other_files/ML_AI_Plugin/gradients_output0.csv
@@ -0,0 +1,103 @@
+,0,1,2,3,4,5
+0,74.01828,132.16856,-37.867424,-87.99398,-4.1455474,-29.112787
+1,80.10569,147.16621,-40.357662,-95.16894,-3.3632255,-31.500113
+2,79.45062,141.44975,-40.728428,-94.467354,-4.5550265,-31.185713
+3,80.09741,143.0444,-40.987965,-95.22719,-4.474018,-31.456068
+4,83.3853,149.3718,-42.61538,-99.136185,-4.526995,-32.69882
+5,68.366295,122.46076,-34.90481,-81.26287,-3.730377,-26.934528
+6,98.03055,171.53583,-50.67336,-116.58603,-6.4520106,-38.59053
+7,97.89941,172.60396,-50.445877,-116.42936,-6.072017,-38.41172
+8,83.01357,150.70213,-42.10721,-98.65618,-3.971518,-32.60649
+9,71.396675,136.51456,-34.725723,-84.223755,-2.0283308,-28.99291
+10,93.8227,169.31696,-47.76738,-111.52925,-4.75127,-36.766186
+11,87.57114,153.17346,-45.266582,-104.14352,-5.784079,-34.50622
+12,77.11264,136.21832,-39.681473,-91.69841,-4.7168374,-30.304417
+13,90.61951,159.05618,-46.771446,-107.76673,-5.829528,-35.66367
+14,95.282,167.36357,-49.171406,-113.3156,-6.089981,-37.4559 +15,95.61301,165.93967,-49.6347,-113.73363,-6.6623554,-37.62568 +16,89.286415,157.34029,-46.010975,-106.183365,-5.5629764,-35.063023 +17,86.075874,154.30821,-43.971436,-102.3324,-4.6421576,-33.758076 +18,88.97119,155.19272,-46.05375,-105.81409,-5.9943185,-35.06345 +19,85.13099,148.73717,-44.019444,-101.2385,-5.674104,-33.58378 +20,79.27477,139.77596,-40.826813,-94.26976,-4.9238853,-31.177721 +21,68.19138,123.46287,-34.609493,-81.03175,-3.3662899,-26.888262 +22,93.974236,169.21736,-47.90398,-111.716484,-4.85899,-36.815624 +23,83.95978,147.13693,-43.360966,-99.84655,-5.46732,-33.07123 +24,80.32151,145.7025,-40.758068,-95.45836,-3.8735087,-31.55181 +25,87.33303,156.8335,-44.577415,-103.825325,-4.6336923,-34.233963 +26,100.16098,176.2134,-51.658722,-119.11959,-6.3200517,-39.332443 +27,100.61913,176.86285,-51.913834,-119.66431,-6.394028,-39.529224 +28,88.05725,155.98763,-45.272488,-104.71868,-5.2560325,-34.51785 +29,75.13442,134.63,-38.37448,-89.31733,-4.077221,-29.527359 +30,82.87425,148.91374,-42.27488,-98.51663,-4.3797503,-32.53275 +31,82.475716,148.13205,-42.075085,-98.040596,-4.379695,-32.39845 +32,79.820496,148.22089,-39.87293,-94.75434,-2.972318,-31.737362 +33,103.239876,184.07578,-52.91633,-122.764824,-5.8291836,-40.402317 +34,83.44963,147.62373,-42.917164,-99.233864,-5.044079,-32.77074 +35,77.814964,141.65369,-39.39349,-92.4633,-3.6255062,-30.626474 +36,72.576965,131.80801,-36.788624,-86.244194,-3.465675,-28.566385 +37,73.26434,131.98471,-37.3039,-87.07939,-3.7878819,-28.821802 +38,96.98463,171.53969,-49.905228,-115.34031,-5.8591366,-38.00519 +39,95.46655,169.51312,-49.04027,-113.533226,-5.5802684,-37.35544 +40,88.204475,154.2815,-45.597893,-104.8987,-5.8239093,-34.742092 +41,83.82496,147.37799,-43.23154,-99.68545,-5.3224497,-32.97493 +42,72.0031,129.25752,-36.725876,-85.5848,-3.848767,-28.34243 +43,101.20366,179.36389,-52.027626,-120.35538,-6.0127025,-39.63646 +44,83.56118,148.32939,-42.912643,-99.366585,-4.9050317,-32.7624 +45,84.01179,148.07623,-43.28163,-99.90703,-5.2288985,-33.01481 +46,82.814964,149.50114,-42.142643,-98.43732,-4.186265,-32.498432 +47,91.13173,164.59996,-46.36689,-108.32369,-4.581602,-35.74409 +48,90.79159,164.3494,-46.141026,-107.91509,-4.4643555,-35.602654 +49,95.26023,168.18198,-49.054127,-113.28893,-5.843741,-37.365513 +50,74.87018,135.7826,-37.98163,-88.97286,-3.6262012,-29.462618 +51,88.55916,159.22435,-45.16972,-105.27757,-4.64986,-34.731316 +52,91.57796,161.18896,-47.217155,-108.909546,-5.7591033,-35.97436 +53,78.71591,141.59416,-40.12848,-93.570305,-4.1193304,-30.908072 +54,95.76671,171.49147,-48.960106,-113.860565,-5.212231,-37.52514 +55,95.20761,164.35713,-49.598755,-113.285774,-6.8527017,-37.324512 +56,93.35032,162.17128,-48.433243,-111.038414,-6.4628286,-36.74515 +57,79.45699,147.86624,-39.623432,-94.30814,-2.8812304,-31.659342 +58,73.95901,132.46022,-37.781715,-87.920006,-4.031681,-29.072739 +59,93.9208,166.09866,-48.326027,-111.694046,-5.682809,-36.825455 +60,71.9617,133.69165,-35.912685,-85.41209,-2.6741002,-28.698986 +61,77.29709,143.4738,-38.63179,-91.76507,-2.8898616,-30.699736 +62,100.90558,180.6842,-51.59789,-119.97491,-5.4902635,-39.506794 +63,79.093956,140.097,-40.651505,-94.05265,-4.7312365,-31.055733 +64,93.36842,167.33827,-47.70987,-111.005775,-5.0444255,-36.594864 +65,89.400475,160.87416,-45.581074,-106.277145,-4.655254,-35.05068 +66,71.18247,129.02841,-36.122963,-84.59284,-3.4643352,-28.004879 +67,94.070724,165.49329,-48.511963,-111.87364,-5.940198,-36.96388 
+68,94.69907,169.47949,-48.43937,-112.59784,-5.176377,-37.072872 +69,94.52092,168.54114,-48.43717,-112.392815,-5.337618,-37.020348 +70,78.516495,144.10529,-39.572163,-93.27994,-3.3379686,-30.896975 +71,75.965195,136.26534,-38.772453,-90.30055,-4.0844836,-29.867825 +72,82.22544,145.58038,-42.27432,-97.779045,-4.9341216,-32.272003 +73,85.88904,156.38759,-43.495705,-102.06654,-3.9824226,-33.73376 +74,76.41134,135.3663,-39.267925,-90.861694,-4.565704,-30.00838 +75,72.63777,132.85725,-36.678158,-86.302795,-3.212714,-28.58673 +76,87.165695,154.8638,-44.745518,-103.65183,-5.0786123,-34.166847 +77,95.09086,167.23694,-49.044453,-113.08687,-6.0191965,-37.36917 +78,98.678246,174.89987,-50.726715,-117.35156,-5.8597326,-38.650524 +79,83.53726,149.27426,-42.739464,-99.31751,-4.6406217,-32.79148 +80,88.01864,154.69762,-45.403282,-104.67356,-5.6031585,-34.621407 +81,98.24959,174.41043,-50.477947,-116.84416,-5.75449,-38.439507 +82,91.3545,159.92653,-47.20724,-108.64343,-5.9945984,-35.977535 +83,69.42779,126.83768,-35.07397,-82.48843,-3.11376,-27.343887 +84,93.19298,161.36255,-48.451252,-110.86878,-6.588386,-36.6198 +85,82.06257,148.62521,-41.683678,-97.53424,-4.0184665,-32.213467 +86,70.19333,136.28699,-33.510227,-81.94457,-2.0913234,-28.541834 +87,73.913826,143.66867,-35.24645,-86.2297,-2.203389,-30.034256 +88,101.149536,180.15692,-51.875717,-120.283035,-5.7623706,-39.57856 +89,86.4,153.08112,-44.41113,-102.74501,-5.1515007,-33.88524 +90,85.54218,153.2783,-43.715324,-101.70182,-4.630538,-33.53017 +91,98.13822,172.53021,-50.62748,-116.71244,-6.22962,-38.562004 +92,70.57257,129.37149,-35.57549,-83.83683,-3.0496752,-27.828009 +93,78.51507,141.5761,-39.9677,-93.32313,-4.01859,-30.85107 +94,86.626,153.00754,-44.583206,-103.012764,-5.301827,-34.02862 +95,73.235,131.46855,-37.367054,-87.05551,-3.9084888,-28.783335 +96,84.06279,152.10603,-42.725967,-99.91602,-4.152734,-32.981956 +97,95.3838,169.43991,-48.98864,-113.43475,-5.55435,-37.315998 +98,81.511986,149.84026,-41.05513,-96.839584,-3.3964128,-32.0437 +99,80.42077,145.53183,-40.852016,-95.576706,-3.978455,-31.623737 +100,86.57293,151.44499,-44.745354,-102.95487,-5.7145934,-34.12189 +101,99.90212,177.32112,-51.330788,-118.809875,-5.8574,-39.084644 diff --git a/examples/other_files/ML_AI_Plugin/gradients_output1.csv b/examples/other_files/ML_AI_Plugin/gradients_output1.csv new file mode 100644 index 000000000..065accd59 --- /dev/null +++ b/examples/other_files/ML_AI_Plugin/gradients_output1.csv @@ -0,0 +1,103 @@ +,0,1,2,3,4,5 +0,39.472305,9.040555,4.4753723,-6.9339933,-8.209688,19.979877 +1,43.45687,10.065209,5.3269405,-7.343949,-9.8375025,21.94748 +2,42.4612,9.715011,4.4662127,-7.5765567,-8.311502,21.548733 +3,42.791584,9.855425,4.714327,-7.470362,-8.806517,21.687979 +4,44.6401,10.417218,4.935413,-7.5705504,-9.473156,22.63581 +5,36.397438,8.360599,4.4018893,-6.27447,-8.013013,18.380919 +6,54.400146,12.731372,0.22052374,-10.894471,-3.5871472,28.646238 +7,53.186993,12.325948,2.6140978,-10.120545,-6.5384517,27.535337 +8,44.72174,10.41258,5.2846336,-7.5261583,-9.938572,22.621532 +9,40.581646,9.766127,3.6550992,-6.6480155,-8.009327,20.750727 +10,50.48267,11.845523,5.643082,-8.442704,-10.926988,25.59616 +11,48.411217,11.208746,0.61697316,-9.760799,-3.5378053,25.401096 +12,41.30308,9.289722,3.6200376,-7.8344564,-6.7380757,21.063185 +13,49.527798,11.367269,1.7503519,-9.808548,-4.950979,25.76279 +14,52.236248,12.12045,1.4830489,-10.243815,-4.975933,27.253035 +15,54.451675,13.21458,-1.7091123,-10.949172,-1.5287027,29.313953 
+16,48.378906,11.112977,2.6725047,-9.273896,-6.163404,24.980865 +17,46.076252,10.769871,5.151027,-7.770252,-9.892258,23.356447 +18,49.898212,11.727691,-0.50780636,-10.126056,-2.4215105,26.41674 +19,47.010834,10.796278,0.7635553,-9.570347,-3.492666,24.626923 +20,42.49559,9.526718,3.4709811,-8.183946,-6.5079846,21.709482 +21,36.488853,8.402436,4.6331377,-6.1944523,-8.392515,18.394592 +22,50.50336,11.863611,5.597086,-8.438517,-10.886368,25.615381 +23,46.036648,10.558985,1.3234552,-9.219156,-4.1778283,24.004683 +24,43.243565,10.066781,5.1132917,-7.278385,-9.612114,21.873014 +25,46.810127,10.981322,5.2346187,-7.829938,-10.12914,23.732462 +26,54.778755,12.745698,1.8579553,-10.594688,-5.6953225,28.52434 +27,55.157936,12.84361,1.5769585,-10.741527,-5.3534093,28.779161 +28,47.194675,10.789401,4.0933127,-8.68676,-7.970171,24.092224 +29,40.119278,9.301185,4.6460824,-6.8395753,-8.700076,20.30283 +30,44.357586,10.361733,5.1157117,-7.4451814,-9.737166,22.459202 +31,44.091103,10.279667,5.1555157,-7.411811,-9.741317,22.310564 +32,43.932293,10.295078,4.9572897,-7.3557,-9.564874,22.270784 +33,55.372597,12.8546715,5.538947,-9.669955,-10.78018,28.166338 +34,44.67639,10.085663,4.081658,-8.36841,-7.596128,22.760012 +35,41.894337,9.692111,5.1701455,-7.0861683,-9.507846,21.1496 +36,39.008205,9.025907,4.803199,-6.5972314,-8.840872,19.694183 +37,39.179035,9.090601,4.73176,-6.6115074,-8.791496,19.79658 +38,52.145424,12.001308,3.8159924,-9.676685,-7.95686,26.747736 +39,51.169132,11.825222,4.5261436,-9.190424,-9.010765,26.119528 +40,48.885994,11.384972,0.35058913,-9.832352,-3.327425,25.708477 +41,45.515488,10.379142,2.290867,-8.914262,-5.3522134,23.538649 +42,38.381416,8.895483,4.669039,-6.4826374,-8.639994,19.386353 +43,54.27679,12.486514,4.5397754,-9.915822,-9.072086,27.742592 +44,44.70476,10.185299,4.431462,-8.124614,-8.278638,22.727264 +45,45.26169,10.270279,3.0954933,-8.699065,-6.3372545,23.245663 +46,44.490883,10.3955345,5.123365,-7.466139,-9.763141,22.529284 +47,49.007565,11.467879,5.589601,-8.214153,-10.707681,24.826971 +48,48.914463,11.449015,5.5692134,-8.196939,-10.680449,24.782263 +49,51.44247,11.843476,3.2424102,-9.697767,-7.147543,26.487959 +50,40.211796,9.31276,4.923057,-6.795901,-9.087687,20.30701 +51,47.47869,11.121441,5.370778,-7.951353,-10.32982,24.059877 +52,49.846416,11.49945,2.2332566,-9.633739,-5.7357206,25.84303 +53,42.151524,9.83725,4.8910985,-7.0802836,-9.279788,21.336718 +54,51.31227,11.994683,5.56605,-8.702654,-10.771388,26.038465 +55,55.185787,14.074042,-3.233125,-10.79583,-0.28277907,30.413597 +56,53.157955,12.936906,-1.6972876,-10.630949,-1.5344597,28.618082 +57,43.886723,10.321132,4.8204412,-7.327031,-9.437515,22.273174 +58,39.478786,9.133364,4.569732,-6.7617426,-8.521194,19.977112 +59,50.40319,11.530055,3.948904,-9.388246,-7.917727,25.799711 +60,39.470055,9.17631,4.7091875,-6.6506906,-8.820755,19.959515 +61,42.563923,9.990268,4.743841,-7.117564,-9.213723,21.58846 +62,54.117134,12.682417,5.769613,-9.156787,-11.276438,27.481503 +63,42.299873,9.541887,4.1478777,-7.8527775,-7.5858192,21.502428 +64,50.01601,11.707785,5.5057044,-8.433456,-10.645937,25.369198 +65,47.97519,11.245336,5.4007373,-8.030154,-10.415238,24.316742 +66,38.228764,8.859347,4.655823,-6.4575067,-8.616787,19.310186 +67,51.2602,11.825336,2.1653807,-9.947658,-5.718777,26.601175 +68,50.792892,11.889568,5.3689785,-8.628659,-10.492522,25.7997 +69,50.65201,11.723701,5.180649,-8.867682,-9.95888,25.743046 +70,42.527298,9.831198,5.2783356,-7.1976013,-9.684543,21.465185 +71,40.544342,9.412547,4.7915397,-6.86403,-8.954495,20.50343 
+72,44.019356,9.97187,4.0628405,-8.177872,-7.610444,22.422098 +73,46.370758,10.793575,5.493628,-7.8053555,-10.320461,23.453663 +74,40.837055,9.207032,4.0501304,-7.575308,-7.380402,20.750893 +75,39.243137,9.073316,4.859652,-6.641093,-8.92359,19.808954 +76,46.65086,10.663804,4.64791,-8.415959,-8.739329,23.71628 +77,51.861187,11.968793,2.0875332,-10.088004,-5.654292,26.933317 +78,52.91414,12.1656885,4.4709535,-9.665283,-8.895572,27.038025 +79,44.65596,10.317351,4.9056816,-7.7484426,-9.231716,22.638445 +80,47.907692,10.966602,2.1472218,-9.396057,-5.3526545,24.830208 +81,52.682915,12.191681,4.5312095,-9.473099,-9.123813,26.914907 +82,50.435932,11.691713,0.7460609,-10.115618,-3.8520005,26.444775 +83,37.44313,8.63946,4.696719,-6.346684,-8.566634,18.888561 +84,53.42023,13.1933565,-2.2026727,-10.65715,-1.0296772,29.010656 +85,44.169193,10.30508,5.1417913,-7.421015,-9.744524,22.356443 +86,40.298096,9.780722,3.3313665,-6.554062,-7.6855173,20.662878 +87,42.549854,10.365376,3.384757,-6.898334,-7.997465,21.843248 +88,54.237057,12.579824,5.2608213,-9.53649,-10.301966,27.614477 +89,46.266056,10.531298,4.2347126,-8.525349,-8.04393,23.577332 +90,45.818863,10.715119,5.0386147,-7.742108,-9.72779,23.240261 +91,53.680496,12.447602,1.8015527,-10.456308,-5.4750175,27.952402 +92,38.095276,8.755097,4.905712,-6.4772305,-8.831982,19.1937 +93,42.078526,9.7967205,4.9676123,-7.081549,-9.341427,21.283915 +94,46.429058,10.482101,3.9015872,-8.7937,-7.4066167,23.708412 +95,39.11415,9.105655,4.6147113,-6.583228,-8.676915,19.784132 +96,45.246586,10.574317,5.2044415,-7.5917163,-9.9256525,22.913656 +97,51.115196,11.821705,4.59248,-9.1459675,-9.120545,26.081306 +98,44.2555,10.258384,5.396894,-7.4741893,-9.993968,22.356342 +99,43.16619,10.022589,5.1971354,-7.2803106,-9.676226,21.81539 +100,47.7516,11.005234,0.8373709,-9.640132,-3.704831,25.006502 +101,53.579086,12.406268,4.549125,-9.639955,-9.207587,27.383106 diff --git a/examples/other_files/ML_AI_Plugin/mea_column_model_training_customnormform_scikitlearn.py b/examples/other_files/ML_AI_Plugin/mea_column_model_training_customnormform_scikitlearn.py index 7ff68de62..952374c71 100644 --- a/examples/other_files/ML_AI_Plugin/mea_column_model_training_customnormform_scikitlearn.py +++ b/examples/other_files/ML_AI_Plugin/mea_column_model_training_customnormform_scikitlearn.py @@ -79,7 +79,7 @@ def create_model(x_train, z_train): model_data = np.concatenate( (xdata, zdata), axis=1 -) # PyTorch requires a Numpy array as input +) # SciKit Learn requires a Numpy array as input # define x and z data, not used but will add to variable dictionary xdata = model_data[:, :-2]