Today I used Sippy and I was very pleased.
It is very lean and it just does what it is supposed to do, without too many bells and whistles.
THIS is the way I love software! Very good job!
Nevertheless, I would suggest giving the user the possibility of selecting a validation dataset and computing at least the basic metrics for validating the identified model, namely:
- model fit in percentage
- residual auto-correlation analysis
- input-residual cross-correlation analysis
A sketch of the code could be something like the following (at least for the first point I mentioned):
import numpy as np

# Residuals: Y_val is the output data used for validation,
# y_hat is the identified model output, eps is the residual
eps = Y_val - y_hat.T
# Fit in percentage, normalized by the deviation from the validation mean
Y_val_bar = np.mean(Y_val, axis=0)
FIT_perc = np.round((1.0 - np.linalg.norm(eps) / np.linalg.norm(Y_val - Y_val_bar)) * 100, 3)
print(FIT_perc)
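For the other two points, a minimal sketch of the residual analyses could look like the following. The helper names and the ±1.96/√N confidence bound are my own choices for illustration, not existing Sippy API:

```python
import numpy as np

def residual_autocorr(eps, max_lag=20):
    """Normalized autocorrelation of the residuals up to max_lag,
    plus the 95% confidence bound for a whiteness test."""
    eps = np.asarray(eps).ravel()
    eps = eps - eps.mean()
    N = eps.size
    r0 = np.dot(eps, eps) / N  # lag-0 autocovariance, used for normalization
    r = np.array([np.dot(eps[:N - k], eps[k:]) / N
                  for k in range(1, max_lag + 1)]) / r0
    bound = 1.96 / np.sqrt(N)  # 95% bound under the whiteness hypothesis
    return r, bound

def input_residual_crosscorr(u, eps, max_lag=20):
    """Normalized cross-correlation between input and residuals
    at lags 0..max_lag, plus the 95% confidence bound."""
    u = np.asarray(u).ravel() - np.mean(u)
    eps = np.asarray(eps).ravel() - np.mean(eps)
    N = eps.size
    scale = np.sqrt(np.dot(u, u) * np.dot(eps, eps)) / N
    r = np.array([np.dot(u[:N - k], eps[k:]) / N
                  for k in range(max_lag + 1)]) / scale
    bound = 1.96 / np.sqrt(N)
    return r, bound
```

If all the correlation values stay inside the ±bound band, the residuals look white and uncorrelated with the input, which is the usual sanity check on an identified model.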
A second suggestion is to add an option to return the system matrices in observable canonical form.
This is very handy when the outputs correspond to the actual sensors.
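In case it helps, for SISO models the observable canonical realization can be written down directly from the transfer-function coefficients. A minimal sketch (my own helper, not part of Sippy, assuming a strictly proper transfer function):

```python
import numpy as np

def observable_canonical(a, b):
    """(A, B, C) in observable canonical form for the strictly proper SISO
    transfer function
        H(z) = (b[0] z^(n-1) + ... + b[n-1]) / (z^n + a[0] z^(n-1) + ... + a[n-1])
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    n = a.size
    A = np.eye(n, k=1)   # ones on the superdiagonal
    A[:, 0] = -a         # first column carries the denominator coefficients
    B = b.reshape(n, 1)
    C = np.zeros((1, n))
    C[0, 0] = 1.0        # output reads the first state directly
    return A, B, C
```

One can verify the construction by checking that the Markov parameters C A^(k-1) B match those of any other realization of the same transfer function.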