- added f1 score to list of output statistics
- changed results output to show SD on the same row as the param
- update model patience config option
- added optimal acceptance time recording to results
- refactored run.py
- add `ones` init type to `Weight`
- remove watermark dependencies
- make watermark package optional
- changed precision to 4
- fix incorrect text display for model.config
- added assert checks for incorrect choice set numbering in dataset
- fix argument name inconsistency
- functions.py: added `gelu` function
- basic.py: fix bug in `null_log_likelihood_fn` for givens
- dataset.py: fixed slice call when no `batch_size` argument is used
- dataset.py: fixed the `split` method to accept `None` as argument
- functions.py: refactored method to convert list of utilities to tensor variables
- dataset.py: accept mixed `str` and `TensorVariable` types in `_make_tensor`
- basic.py: moved `build_gh_fn()` to `BaseModel`
- run: added `is_training` flag
- pyproject.toml: fix incorrect bump message
- added `null_log_likelihood_fn` to basic.py
- results: add argument for max cutoff in `show_training_plot`
- results.py: fixed legend placement on training plot
- run.py: add option for acceptance_method argument
- minor refactoring
- layers.py: added new layer type `LayerNorm`
- expression.py: added new parameter type Gamma
- results.py: add argument offset to show_training_plot
- run: moved train and compute to pycmtensor.run for cleanliness
- TasteNet.py: changed instance check type from .DenseLayer to .Layer
- run.py: add limit for verbosity
- dataset: added split via n-count and added option for pre-shuffle
- dataset: updated scale_variable method to handle both single variables and lists of variables
- workflow: remove poetry from tests
- add missing poetry dependencies
- overhaul dependencies for conda feedstock
- basic: add lr_scheduler to `compute()` function
- clean up and enhance code with codiumAI
- utils: add new function to display numbers as human readable format
- results: add new function to plot training statistics
- scheduler: fix missing arguments
- basic: renamed variable to avoid confusion
- scheduler: fixed calculation error in cyclic lr
- condition for stdout
- new option for selection of model acceptance pattern
- calculating distributed Beta now uses percentile instead of mean
- models: add new func `compute` to run the model on a manual set of params
- regularizers: new method `include_regularization_terms()` in BaseModel for adding regularizers
- dependencies: prevent breakage of numpy 1.26
- import compute from pycmtensor.models
- fix l2 regularizer formula
- set init_types as a module level list and add tests
- add minmax clipping on neural net layer outputs
- temporary function for negative relu
- expression fix for random draw variables
- relax the condition for stdout during training
- config: refactor config.add into .add() and .update() methods
- changed basic model functions into staticmethod
- expressions.py: make ExpressionParser.parse a staticmethod
- remove deprecated modules
- expressions.py: use predefined draw types for random draws
- basic.py: include choice label(s) as a dictionary for `predict()`
- moved elasticities from statistics to model
- scheduler.py: add learning rate lower bounds for decaying functions
- basic.py: add placeholder arguments `*args`
- basic.py: improve efficiency of Hessian matrix calculation over the sum of log likelihood over observations
- basic.py: refactoring common model functions into BaseModel
- syntax and naming changes
- optimizers.py: include new optimizer RProp
- functions.py: speed up computation and compilation by using static indexing in `log_likelihood` function
- functions.py: add `relu()` function (source taken from Theano 0.7.1)
- basic.py: new function include_params_for_convergence
- dataset.py: use `as_tensor_variable` to construct tensor vector from dataset[[item1, item2,...]]
- dataset.py: added `list(str)` as tensor arguments for `train_dataset()` and `valid_dataset()`
- init.py: fix init circular imports
- statistics.py: update t_test and varcovar matrix calculations for vector parameters
- layers.py: various fixes to neural net layers
- optimizers.py: fix SQNBFGS algorithm
- functions.py: include output as params for 1st and 2nd order derivatives
- expressions.py: fix base class inheritance and use `config.seed` for seed value
- expressions.py: include `self` in overloaded operators if `other` instance is of similar type as `self`
- basic.py: fix incorrect saved params in `train()`
- expressions.py: clip lower and upper bounds when updating `Betas`
- expressions.py: fixed `Weights` and `Bias` mathematical operations
- config.py: renamed config.py to defaultconfig.py to avoid name conflicts
- results.py: updates results calculations for beta vectors
- expressions.py: add set_value in `Param` base class
- optimizers.py: removed unused imports
- optimizers.py: added new BFGS algorithm
- dataset.py: moved dataset initialization to dataset.py
- pycmtensor.py: Implemented early stopping on coefficient convergence in training loop
- functions.py: logit method now takes uneven dimensioned-utilities
- expression.py: Added RandomDraws expression for sampling in mixed logit
- get_train_data optional argument numpy_out to return numpy arrays rather than pandas arrays
- BHHH algorithm for calculating var-covar matrix applies to each data row
- results.py: fixed instance when params are not Betas
- expressions.py: include RandomDraws as an option in expression evaluations
- Update tests workflow file conda packages
- pycmtensor.py: Added missing configuration for stepLR `drop_every`
- tests.yml: Update tests workflow file conda packages
- optimizers.py: Fixed name typo in `__all__`
- results.py: Corrected calculation of hessian and bhhh matrices
- scheduler.py: Moved class function calls to parent class
- statistics.py: Fixed rob varcovar calculation error
- MNL.py: Moved aesara function to parent class
- data.py: Streamlined class function calls and removed unnecessary code
- removed package import clashes with config.py
- removed gnorm calculation
- update hessian matrix and bhhh algorithm functions
- pycmtensor.py: temporarily removed pycmtensor.py
- MNL.py: replaced function constructors from pycmtensor.py with a function call inside the model class object
- basic.py: moved model functionality from pycmtensor.py to models/basic.py
- utils.py: Removed unused code
- make arguments in `MNL` optional keyword arguments
- moved learning rate variable to `PyCMTensorModel` class
- make model variables properties
- update `__all__` package variables
- added `train_data` and `valid_data` properties to `Data` class
- fix utility dimensions for asc only cases
- optimizers: added `Nadam` optimizer
- layers.py: added `DenseLayer`, `BatchNormLayer`, `ResidualLayer`
- added `pycmtensor.about()` to output package metadata
- added EMA function `functions.exp_mov_average()`
- renamed deprecated instances of `aesara` modules
- data.py: defaults `batch_size` argument to 0 if batch_size is `None`
- updated syntax for `expressions.py` class objects
- added `init_type` property to `Weights` class
- moved model aesara compile functions from `models.MNL` to `pycmtensor.PyCMTensorModel`
- added argument type hints in function.py
- data: added import dataset cleaning step as arguments in `Data()`
- moved ResidualLayer to `pycmtensor.models.layers`
- updated timing to perf_counter
- pycmtensor: refactoring model_loglikelihood
- expressions: added Weights class object (#59)
- functions: added rmse and mae objective functions (#58)
- batch shuffle for training
- function: added KL divergence loss function (#50)
- added expand_dims into logit function
- replace class function Beta.Beta with Beta.beta
- removed flatten() from logit function
- scheduler: added learning rate scheduling to train()
- code: overhaul and cleanup
- environment: update project deps and pre-commit routine
- config: remove unnecessary cxx flags from macos builds
- config: misc optimization changes
- config: added optimizing speedups to config
- config: set default `cyclic_lr_mode` and `cyclic_lr_step_size` to `None`
- pre-commit-config: update black to `22.6.0` in pre-commit check
- models: refactored build_functions() into models.py
- database: refactor set_choice(choiceVar)
- tests: removed deprecated tests
- routine: remove deprecated tqdm module
- pycmtensor.py: update training method
- config.py: new config option verbosity: "high", "low"
- pycmtensor.py: remove warnings for max_iter<patience
- scheduler: fix missing args in input parameters
- scheduler: fix constantLR missing input parameter
- python: update to python 3.10
- tests: update tests files to reflect changes in biogeme removal
- deps: remove Biogeme dependencies
- expressions: remove Biogeme dependencies
- database: remove dependencies of Biogeme
- debug: remove debug handler after each run to prevent duplication
- models: add function to return layer output -> get_layer_outputs()
- debug: disables tqdm if debug mode is on and activates debug_log
- move elasticities from models to statistics for consistency
- models: add functionality to compute elasticities of choice vs attribute in models.py
- results: remove unnecessary `show_weights` option in Results
- set default max_epoch on training run to adaptive rule
- print valid config options when invalid options are given as args to train()
- scheduler: modified cyclic_lr config loading sequence to fix UnboundLocalError
- train: turn saving model off for now
- config: generate os dependent ld_flags
- utils: refactored save_to_pickle and disables it
- IterationTracker: use numpy array to store iteration data
- models: Implement the ResLogit layer
- config: set default learning schedule to ConstantLR
- config: set default seed to a random number on init
- scheduler.py: add new scheduler (CyclicLR) for adaptive LR
- project: fix project metadata and ci
- config: loadout config from train() to configparser
- utils: fix TypeError check
- config: add PyCMTensorConfig class to store config settings
- expressions: add magic methods lt le gt le ne eq
- config.py: enable pre-writing of .aesararc config file on module load
- models: add method prob() to MNLogit to output prob slices
- time_format: enable logging of build and estimation time
- results: add Predict class to output probs or discrete choices
- optimizers: add AdaGram algorithm
- Database: add getattr build-in type to Database
- pycmtensor.py: add model.output_choices to generate choices
- statistics: add small value to stderror calculation to address sqrt(0)
- dependencies: move ipywidgets and pydot to dependencies
- renamed .rst to .md to fix FileNotFoundError
- result: print more verbose results and options
- Database: add name to shared_data
- train: model instance now loads the initialized model class (not the input class as argument)
- Database: set choiceVar to mandatory argument
- PyCMTensor: rename append_to_params to add_params for consistency
- PyCMTensor: new method to add regularizers to cost function
- Expressions: invokes different operator for Beta-Beta maths
- show excluded data in model est. output
- results: standardized naming conventions in modules db->database
- tqdm: add arg in train() to enable notebook progressbar
- swissmetro_test.ipynb: update swissmetro example
- PyCMTensor: refactoring models from pycmtensor.py
- Database: refactoring database.py from pycmtensor.py
- optimizers: refactor base Optimizer class
- moved Beta Weights to expressions.py
- shared_data: improve iteration speed by implementing shared() on input data