This list describes the sequence of function-to-node operations (each step applies one update function to one node) that is executed during the inference and update processes.
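As an illustrative sketch only (not the library's actual internals), such a sequence can be represented as an ordered list of (node index, update function) pairs that the belief-propagation step iterates over; every name below is hypothetical.

from typing import Callable

# Hypothetical network state: each node carries a mean and a precision.
attributes = {
    0: {"mean": 0.0, "precision": 1.0},  # input node
    1: {"mean": 0.0, "precision": 1.0},  # value parent
    2: {"mean": 1.0, "precision": 1.0},  # volatility parent
}

def prediction_error(node_idx: int, attributes: dict) -> dict:
    # Placeholder: compute the prediction error at this node.
    return attributes

def posterior_update(node_idx: int, attributes: dict) -> dict:
    # Placeholder: update this node's posterior from its children's errors.
    return attributes

# The update sequence lists which function is applied to which node, in order.
update_sequence: list[tuple[int, Callable]] = [
    (0, prediction_error),
    (1, posterior_update),
    (1, prediction_error),
    (2, posterior_update),
]

for node_idx, update_fn in update_sequence:
    attributes = update_fn(node_idx, attributes)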
Array(202.53107, dtype=float32)
Sampling 2 chains for 1_000 tune and 1_000 draw iterations (2_000 + 2_000 draws total) took 9 seconds.
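For reference, a sampling call producing this kind of log typically looks like the minimal PyMC sketch below; the prior shown is a stand-in for the actual model, whose likelihood wraps the HGF network, and the variable name is an assumption.

import pymc as pm

with pm.Model() as model:
    # Stand-in prior; in the notebook this parametrizes the HGF likelihood.
    omega_2 = pm.Normal("omega_2", mu=-4.0, sigma=2.0)
    # ... a likelihood wrapping the network's surprise would be added here ...
    idata = pm.sample(
        draws=1_000,         # posterior draws per chain
        tune=1_000,          # tuning (warm-up) iterations per chain
        chains=2,
        target_accept=0.95,  # raise this value if divergences are reported
    )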
Array(203.01193, dtype=float32)
Array(-1106.1195, dtype=float32)
Sampling 2 chains for 1_000 tune and 1_000 draw iterations (2_000 + 2_000 draws total) took 8 seconds.
There were 1 divergences after tuning. Increase `target_accept` or reparameterize.
We recommend running at least 4 chains for robust computation of convergence diagnostics
Array(-1118.0265, dtype=float32)
The results above indicate that, given the responses provided by the participant, the most likely values for the parameter \(\omega_2\) lie between -4.9 and -3.1, with a mean of -3.9 (you may obtain slightly different values if you sample different actions from the decision function). We can consider this an excellent estimate given the sparsity of the data and the complexity of the model.
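One way to read these values from the trace is ArviZ's posterior summary; the variable name omega_2 below is an assumption about how the parameter was registered in the model.

import arviz as az

# Posterior mean and 94% highest-density interval for omega_2
# (assumes the PyMC variable was named "omega_2" and idata holds the trace).
summary = az.summary(idata, var_names=["omega_2"], hdi_prob=0.94)
print(summary[["mean", "hdi_3%", "hdi_97%"]])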
Sampling 2 chains for 1_000 tune and 1_000 draw iterations (2_000 + 2_000 draws total) took 45 seconds.
There were 1 divergences after tuning. Increase `target_accept` or reparameterize.
We recommend running at least 4 chains for robust computation of convergence diagnostics
The reference values shown on both posterior distributions indicate the means of the distributions used for the simulation.
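Such reference lines can be drawn with ArviZ's posterior plot; the variable names and reference values below are placeholders rather than the notebook's actual settings.

import arviz as az

# Overlay the simulation values as reference lines on the posterior plots
# (variable names and ref_val entries are hypothetical placeholders).
az.plot_posterior(
    idata,
    var_names=["omega_1", "omega_2"],
    ref_val=[-3.0, -4.0],
)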
Computed from 2000 posterior samples and 3200 observations log-likelihood matrix.
            Estimate       SE
elpd_loo    -1684.38    25.64
p_loo          18.10        -
There has been a warning during the calculation. Please check the results.
------
Pareto k diagnostic values:
                          Count   Pct.
(-Inf, 0.70]   (good)      3187  99.6%
   (0.70, 1]   (bad)          0   0.0%
    (1, Inf)   (very bad)    13   0.4%
Sampling 2 chains for 1_000 tune and 1_000 draw iterations (2_000 + 2_000 draws total) took 70 seconds.
Sampling 2 chains for 1_000 tune and 1_000 draw iterations (2_000 + 2_000 draws total) took 54 seconds.
There were 2000 divergences after tuning. Increase `target_accept` or reparameterize.
Visualizing parameters recovery
Downloading ECG and Respiration channels: 2/2 channels completed.
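These download messages are consistent with the physiological example dataset shipped with the Systole package; a minimal sketch, assuming that source, could be:

from systole import import_dataset1

# Download the ECG and respiration recordings from Systole's example dataset
# (the data source is an assumption based on the download messages above).
physio_df = import_dataset1(modalities=["ECG", "Respiration"])
print(physio_df.columns)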
Model
"ipywidgets" for Jupyter support
warnings.warn('install "ipywidgets" for Jupyter support')
-/opt/hostedtoolcache/Python/3.12.5/x64/lib/python3.12/site-packages/rich/live.py:231: UserWarning: install
-"ipywidgets" for Jupyter support
- warnings.warn('install "ipywidgets" for Jupyter support')
-
-Sampling 2 chains for 1_000 tune and 1_000 draw iterations (2_000 + 2_000 draws total) took 9 seconds.
+
Sampling 2 chains for 1_000 tune and 1_000 draw iterations (2_000 + 2_000 draws total) took 10 seconds.
We recommend running at least 4 chains for robust computation of convergence diagnostics
Parameters optimization
Assess model fit, here using leave-one-out cross-validation from the ArviZ library.
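A typical call, which produces the elpd_loo, p_loo, and Pareto k summary shown earlier, could look like this (the idata name is an assumption):

import arviz as az

# Leave-one-out cross-validation from the pointwise log-likelihood stored in idata.
loo_result = az.loo(idata, pointwise=True)
print(loo_result)  # reports elpd_loo, p_loo, and the Pareto k diagnostic table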
Sampling 2 chains for 1_000 tune and 1_000 draw iterations (2_000 + 2_000 draws total) took 21 seconds.
Sampling 2 chains for 1_000 tune and 1_000 draw iterations (2_000 + 2_000 draws total) took 23 seconds.
We recommend running at least 4 chains for robust computation of convergence diagnostics
Rescorla-Wagner
We have saved the pointwise log probabilities as a variable; here we simply move this variable to the log_likelihood field of the idata object, so ArviZ knows that it can be used later for model comparison.
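A minimal sketch of that step, continuing from the sampling example above and assuming the pointwise values were stored under the hypothetical name "pointwise_loglikelihood":

# Move the pointwise log probabilities into the log_likelihood group of idata,
# so that az.loo and az.compare can find them (the variable name is hypothetical).
idata.add_groups(
    log_likelihood=idata.posterior[["pointwise_loglikelihood"]]
)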
Sampling 2 chains for 1_000 tune and 1_000 draw iterations (2_000 + 2_000 draws total) took 7 seconds.
There were 58 divergences after tuning. Increase `target_accept` or reparameterize.
We recommend running at least 4 chains for robust computation of convergence diagnostics
Three-level HGF
Move the pointwise estimates into the log_likelihood field.
Model comparison
Looking at the final result, we can see that the three-level HGF had the best predictive performance on the participant's decisions, suggesting that higher-level uncertainty is important here for understanding the agent's behaviour.
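This ranking can be obtained with ArviZ's model-comparison function once each model's inference data carries a log_likelihood group; the dictionary keys and InferenceData names below are illustrative assumptions.

import arviz as az

# Compare the candidate models by LOO expected log pointwise predictive density
# (the InferenceData objects and labels are assumptions for illustration).
comparison = az.compare(
    {
        "Rescorla-Wagner": idata_rw,
        "Two-level HGF": idata_hgf2,
        "Three-level HGF": idata_hgf3,
    },
    ic="loo",
)
print(comparison)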
The resulting samples show belief trajectories for 10 samples from each model (we are not depicting the biased random model here for clarity). The trajectories are highly similar, but we can see that the two- and three-level HGF are slightly adjusting their learning rates in a way that is more consistent with the observed behaviours.