Commit: fixes
LegrandNico committed Sep 13, 2024
1 parent bfce6c9 commit 310e42c
Showing 3 changed files with 23 additions and 22 deletions.
1 change: 1 addition & 0 deletions docs/source/learn.md
@@ -26,6 +26,7 @@ notebooks/1.3-Continuous_HGF.ipynb
notebooks/2-Using_custom_response_functions.ipynb
notebooks/3-Multilevel_HGF.ipynb
notebooks/4-Parameter_recovery.ipynb
+notebooks/5-Non_linear_value_coupling.ipynb
```

```{toctree}
@@ -128,7 +128,7 @@
"where $\\sigma^2$ is the fixed variance of the distribution. You can find more details on this as well as some code to get started with the simulations in the first tutorial on the [PyHGF documentation](https://ilabcode.github.io/pyhgf/notebooks/0.1-Theory.html#).\n",
"\n",
"```{exercise}\n",
":label: exercise1\n",
":label: exercise1.1\n",
"\n",
"Using the equation above, write a Python code that implements a Gaussian random walk using the following parameters: $\\sigma^2 = 1$ and $x_1^{(0)} = 0$.\n",
"```"
@@ -238,7 +238,7 @@
},
"source": [
"```{exercise}\n",
":label: exercise2\n",
":label: exercise1.2\n",
"\n",
"Write a Python code that generates values for $x_1$ using the value coupling equation above with $\\mu_1^{0} = 0.0$ and $\\sigma_1 = 1.5$.\n",
"```"
@@ -306,7 +306,7 @@
},
"source": [
"```{exercise}\n",
":label: exercise3\n",
":label: exercise1.3\n",
"\n",
"- Using the equation above and your previous implementation, write a Python code that generates values for $x_1$ with $\\omega_1 = -6.0$, $\\mu_1^{(0)} = 0.0$,\n",
"\n",
@@ -527,7 +527,7 @@
},
"source": [
"```{exercise}\n",
":label: exercise4\n",
":label: exercise1.4\n",
"\n",
"Each state node comes with parameters `mean`, `precision` and `tonic_volatility` that can be provided when creating the network. Using the function above, try to change these values. How does this influence the belief trajectories? \n",
"\n",
@@ -809,7 +809,7 @@
},
"source": [
"```{exercise}\n",
":label: exercise4\n",
":label: exercise1.5\n",
"\n",
"What quantity are we measuring in the code cell above? What does this represent?\n",
"```"
@@ -827,7 +827,7 @@
},
"source": [
"```{exercise}\n",
":label: exercise5\n",
":label: exercise1.6\n",
"\n",
"- Select a city and download a recording OR use the data frame loaded above.\n",
"- Fit a network using one of the variables and estimate the total Gaussian surprise.\n",
@@ -848,8 +848,8 @@
"source": [
"# Solutions\n",
"\n",
"````{solution} exercise1\n",
":label: solution-exercise1\n",
"````{solution} exercise1.1\n",
":label: solution-exercise1.1\n",
"\n",
"We can simulate a simple Gaussian Random Walk in Python either using a for loop and a list:\n",
"\n",
@@ -903,8 +903,8 @@
"tags": []
},
"source": [
"````{solution} exercise2\n",
":label: solution-exercise2\n",
"````{solution} exercise1.2\n",
":label: solution-exercise1.2\n",
"\n",
"We can simulate values from $x_1$ using a for loop:\n",
"\n",
@@ -933,8 +933,8 @@
"tags": []
},
"source": [
"````{solution} exercise3\n",
":label: solution-exercise3\n",
"````{solution} exercise1.3\n",
":label: solution-exercise1.3\n",
"\n",
"We can simulate values from $x_1$ using a for loop:\n",
"\n",
@@ -963,8 +963,8 @@
"tags": []
},
"source": [
"````{solution} exercise4\n",
":label: solution-exercise4\n",
"````{solution} exercise1.4\n",
":label: solution-exercise1.4\n",
"\n",
"The code below will change mean, precision and tonic volatility at the first level. Here we use Python's OOP features to express model creation, input data and plotting by chaining method call.\n",
"\n",
@@ -995,8 +995,8 @@
"tags": []
},
"source": [
"````{solution} exercise4\n",
":label: solution-exercise4\n",
"````{solution} exercise1.5\n",
":label: solution-exercise1.5\n",
"\n",
"This method return the Gaussian surprise at each time point, which are then summed. The sum of the Gaussian surprise reflect the performances of the model at predicting the next value, larger values pointing to a model more surprise by its inputs, therefore poor performances. The surprise is define as $s = -log(p)$, this is thus the negative of the log probability function. Log probability functions are commonly used by sampling algorithm, it is thus straigthforwars to sample a model parameters when this function is available. There are an infinity of response possible functions - just like there is an infinity of possible networks - for more details on how to use tem, you can refer to the [tutorial on custom response models](https://ilabcode.github.io/pyhgf/notebooks/2-Using_custom_response_functions.html).\n",
"\n",
@@ -136,7 +136,7 @@
"metadata": {},
"source": [
"```{exercise}\n",
":label: exercise1\n",
":label: exercise2.1\n",
"\n",
"Create a two-level binary Hierarchical Gaussian Filter using the `Network` class and plot the network structure.\n",
"\n",
@@ -364,7 +364,7 @@
},
"source": [
"```{exercise}\n",
":label: exercise2\n",
":label: exercise2.2\n",
"\n",
"- Using the examples above, can you diagnose the performances of the agent?\n",
"- What could make it better? Can you try to change the parameters and get an agent with better performances (i.e. minimizing the surprise)?\n",
@@ -2228,8 +2228,8 @@
"tags": []
},
"source": [
"````{solution} exercise1\n",
":label: solution-exercise1\n",
"````{solution} exercise2.1\n",
":label: solution-exercise2.1\n",
"\n",
"Here's how we can create the network.\n",
"\n",
@@ -2256,8 +2256,8 @@
"id": "6fbdf386-16a3-44af-b015-60cd887fa997",
"metadata": {},
"source": [
"````{solution} exercise2\n",
":label: solution-exercise2\n",
"````{solution} exercise2.2\n",
":label: solution-exercise2.2\n",
"\n",
"- Looking at the trajectories, both at the first and the second level (remember that the binary state node and the continuous state node are encoding the sa values using different scales), we can see that the agent is rather slow at changing its mind when the observed associations between the stimuli and the outcomes are switching. \n",
"\n",
