From ca6f4a58994438b11dba5c8ee8595184bc15832e Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Rapha=C3=ABl?= <23459708+Rapfff@users.noreply.github.com>
Date: Wed, 5 Oct 2022 14:33:52 +0000
Subject: [PATCH] Update tuto.rst

---
 docs/source/tuto.rst | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/docs/source/tuto.rst b/docs/source/tuto.rst
index a891575..ee43674 100644
--- a/docs/source/tuto.rst
+++ b/docs/source/tuto.rst
@@ -75,10 +75,11 @@ Let now use our training set to learn ``original_model`` with the Baum-Welch alg
 
 .. code-block:: python
 
-   output_model = ja.BW_HMM().fit(training_set, nb_states=5)
+   output_model = ja.BW_HMM().fit(training_set, nb_states=5, stormpy_output=False)
    print(output_model)
 
-For the initial model we used a randomly generated HMM with 5 states.
+For the initial model we used a randomly generated HMM with 5 states. Since we are not planning to use Storm on this model,
+we set the ``stormpy_output`` parameter to ``False``.
 
 Evaluating the BW output model
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -160,7 +161,7 @@ At each iteration, the library will generate a new model with 7 states.
    >>> best_model = None
    >>> quality_best = -1024
    >>> for n in range(1,nb_trials+1):
-   ...     current_model = ja.BW_MC().fit(training_set,nb_states=7,pp=n)
+   ...     current_model = ja.BW_MC().fit(training_set,nb_states=7,pp=n, stormpy_output=False)
    ...     current_quality = current_model.logLikelihood(test_set)
    ...     if quality_best < current_quality: #we keep the best model only
    ...         quality_best = current_quality
@@ -324,7 +325,8 @@ scheduler with probability 0.25.
 
    learning_rate = 0
    output_model = ja.Active_BW_MDP().fit(training_set,learning_rate,
                                          nb_iterations=20, nb_sequences=50,
-                                         epsilon_greedy=0.75, nb_states=9)
+                                         epsilon_greedy=0.75, nb_states=9,
+                                         stormpy_output=False)
    output_quality = output_model.logLikelihood(test_set)
    print(output_model)
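
For reviewers who want to try the updated call outside the tutorial page, below is a minimal sketch of the pattern this patch documents. It is an illustration under assumptions, not part of the patch: it reuses only calls that already appear in the diff (``ja.BW_HMM().fit`` and ``logLikelihood``), and it assumes ``training_set`` and ``test_set`` were built in the earlier tutorial steps that this diff does not touch.

.. code-block:: python

   import jajapy as ja

   # Assumption: `training_set` and `test_set` come from the earlier
   # tutorial steps (sequences sampled from `original_model`); they are
   # not defined anywhere in this patch.
   output_model = ja.BW_HMM().fit(training_set, nb_states=5,
                                  stormpy_output=False)

   # Per the patch rationale, stormpy_output=False keeps the result a
   # jajapy model (Storm is not needed), so it can be printed and
   # scored on held-out data directly, as the tutorial does.
   print(output_model)
   print(output_model.logLikelihood(test_set))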