[ar1_processes] to [eigen_II] [heavy_tails] [inequality] Example and spelling #538

Merged: 18 commits, Aug 2, 2024

Changes from all commits
11 changes: 9 additions & 2 deletions lectures/ar1_processes.md
@@ -60,6 +60,9 @@ where $a, b, c$ are scalar-valued parameters

(Equation {eq}`can_ar1` is sometimes called a **stochastic difference equation**.)

```{prf:example}
:label: ar1_ex_ar

For example, $X_t$ might be

* the log of labor income for a given household, or
@@ -70,6 +73,7 @@ of the previous value and an IID shock $W_{t+1}$.

(We use $t+1$ for the subscript of $W_{t+1}$ because this random variable is not
observed at time $t$.)
```

The specification {eq}`can_ar1` generates a time series $\{ X_t\}$ as soon as we
specify an initial condition $X_0$.
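
To make the recursion concrete, here is a minimal simulation sketch; the parameter values, the standard normal shocks, and the zero initial condition are illustrative assumptions rather than values used elsewhere in the lecture.

```{code-cell} ipython3
import numpy as np

# Illustrative, assumed parameters for the sketch
a, b, c = 0.9, 1.0, 0.5
T = 100

X = np.empty(T + 1)
X[0] = 0.0                               # assumed initial condition X_0

W = np.random.standard_normal(T)         # IID shocks W_1, ..., W_T
for t in range(T):
    # the AR(1) recursion X_{t+1} = a X_t + b + c W_{t+1}
    X[t + 1] = a * X[t] + b + c * W[t]

print(X[:5])
```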
@@ -330,7 +334,10 @@ Notes:
* In {eq}`ar1_ergo`, convergence holds with probability one.
* The textbook by {cite}`MeynTweedie2009` is a classic reference on ergodicity.

For example, if we consider the identity function $h(x) = x$, we get
```{prf:example}
:label: ar1_ex_id

If we consider the identity function $h(x) = x$, we get

$$
\frac{1}{m} \sum_{t = 1}^m X_t \to
@@ -339,7 +346,7 @@ $$
$$

In other words, the time series sample mean converges to the mean of the stationary distribution.

```
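
As a quick numerical sketch of this sample-mean claim: with the assumed parameters $a, b, c$ and mean-zero shocks used above, the stationary mean is $b/(1-a)$, so a long time average should come close to that value. The parameter choices and shock distribution are assumptions for illustration only.

```{code-cell} ipython3
import numpy as np

a, b, c = 0.9, 1.0, 0.5        # assumed illustrative parameters with |a| < 1
m = 200_000

rng = np.random.default_rng(0)
X, total = 0.0, 0.0
for _ in range(m):
    X = a * X + b + c * rng.standard_normal()
    total += X

print("time average   :", total / m)
print("stationary mean:", b / (1 - a))   # b / (1 - a) when |a| < 1 and E[W] = 0
```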

Ergodicity is important for a range of reasons.

6 changes: 3 additions & 3 deletions lectures/cagan_ree.md
@@ -18,7 +18,7 @@ kernelspec:

We'll use linear algebra first to explain and then do some experiments with a "monetarist theory of price levels".

Economists call it a "monetary" or "monetarist" theory of price levels because effects on price levels occur via a central banks's decisions to print money supply.
Economists call it a "monetary" or "monetarist" theory of price levels because effects on price levels occur via a central bank's decisions to print money supply.

* a government's fiscal policies determine whether its _expenditures_ exceed its _tax collections_
* if its expenditures exceed its tax collections, the government can instruct the central bank to cover the difference by _printing money_
Expand All @@ -27,7 +27,7 @@ Economists call it a "monetary" or "monetarist" theory of price levels because e
Such a theory of price levels was described by Thomas Sargent and Neil Wallace in chapter 5 of
{cite}`sargent2013rational`, which reprints a 1981 Federal Reserve Bank of Minneapolis article entitled "Unpleasant Monetarist Arithmetic".

Sometimes this theory is also called a "fiscal theory of price levels" to emphasize the importance of fisal deficits in shaping changes in the money supply.
Sometimes this theory is also called a "fiscal theory of price levels" to emphasize the importance of fiscal deficits in shaping changes in the money supply.

The theory has been extended, criticized, and applied by John Cochrane {cite}`cochrane2023fiscal`.

@@ -41,7 +41,7 @@ persistent inflation.

The "monetarist" or "fiscal theory of price levels" asserts that

* to _start_ a persistent inflation the government beings persistently to run a money-financed government deficit
* to _start_ a persistent inflation the government begins persistently to run a money-financed government deficit

* to _stop_ a persistent inflation the government stops persistently running a money-financed government deficit

4 changes: 4 additions & 0 deletions lectures/complex_and_trig.md
Expand Up @@ -103,12 +103,16 @@ from sympy import (Symbol, symbols, Eq, nsolve, sqrt, cos, sin, simplify,

### An Example

```{prf:example}
:label: ct_ex_com

Consider the complex number $z = 1 + \sqrt{3} i$.

For $z = 1 + \sqrt{3} i$, $x = 1$, $y = \sqrt{3}$.

It follows that $r = 2$ and
$\theta = \tan^{-1}(\sqrt{3}) = \frac{\pi}{3} = 60^\circ$.
```
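
Before plotting, here is a one-line numerical check of $r$ and $\theta$, a sketch using only the standard library (the variable names are ours, not the lecture's).

```{code-cell} ipython3
import cmath, math

z = 1 + math.sqrt(3) * 1j
r, θ = abs(z), cmath.phase(z)

print(r)                      # 2.0
print(θ, math.degrees(θ))     # π/3 ≈ 1.047, i.e. 60 degrees
```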

Let's use Python to plot the trigonometric form of the complex number
$z = 1 + \sqrt{3} i$.
28 changes: 14 additions & 14 deletions lectures/cons_smooth.md
Expand Up @@ -21,7 +21,7 @@ In this lecture, we'll study a famous model of the "consumption function" that M

In this lecture, we'll study what is often called the "consumption-smoothing model" using matrix multiplication and matrix inversion, the same tools that we used in this QuantEcon lecture {doc}`present values <pv>`.

Formulas presented in {doc}`present value formulas<pv>` are at the core of the consumption smoothing model because we shall use them to define a consumer's "human wealth".
Formulas presented in {doc}`present value formulas<pv>` are at the core of the consumption-smoothing model because we shall use them to define a consumer's "human wealth".

The key idea that inspired Milton Friedman was that a person's non-financial income, i.e., his or
her wages from working, could be viewed as a dividend stream from that person's ''human capital''
Expand All @@ -39,7 +39,7 @@ It will take a while for a "present value" or asset price explicilty to appear i

## Analysis

As usual, we'll start with by importing some Python modules.
As usual, we'll start by importing some Python modules.

```{code-cell} ipython3
import numpy as np
Expand Down Expand Up @@ -128,7 +128,7 @@ Indeed, we shall see that when $\beta R = 1$ (a condition assumed by Milton Frie

By **smoother** we mean as close as possible to being constant over time.

The preference for smooth consumption paths that is built into the model gives it the name "consumption smoothing model".
The preference for smooth consumption paths that is built into the model gives it the name "consumption-smoothing model".

Let's dive in and do some calculations that will help us understand how the model works.

Expand Down Expand Up @@ -176,7 +176,7 @@ $$
\sum_{t=0}^T R^{-t} c_t = a_0 + h_0.
$$ (eq:budget_intertemp)

Equation {eq}`eq:budget_intertemp` says that the present value of the consumption stream equals the sum of finanical and non-financial (or human) wealth.
Equation {eq}`eq:budget_intertemp` says that the present value of the consumption stream equals the sum of financial and non-financial (or human) wealth.

Robert Hall {cite}`Hall1978` showed that when $\beta R = 1$, a condition Milton Friedman had also assumed, it is "optimal" for a consumer to smooth consumption by setting

@@ -196,7 +196,7 @@ $$ (eq:conssmoothing)
Equation {eq}`eq:conssmoothing` is the consumption-smoothing model in a nutshell.
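
To make {eq}`eq:budget_intertemp` concrete, the sketch below computes the constant consumption level that exhausts $a_0 + h_0$. The interest rate, horizon, income path, and the definition of $h_0$ as the discounted sum of non-financial income are assumptions made only for this illustration.

```{code-cell} ipython3
import numpy as np

R = 1.05                      # assumed gross interest rate
T = 10                        # assumed horizon
y_seq = np.ones(T + 1)        # assumed flat non-financial income path
a0 = 2.0                      # assumed initial financial wealth

discounts = R ** -np.arange(T + 1)
h0 = discounts @ y_seq                    # human wealth (assumed definition)
c_const = (a0 + h0) / discounts.sum()     # constant c with sum_t R^{-t} c = a0 + h0

print(c_const)
print(np.isclose(discounts.sum() * c_const, a0 + h0))   # budget constraint holds
```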


## Mechanics of Consumption smoothing model
## Mechanics of consumption-smoothing model

As promised, we'll provide step-by-step instructions on how to use linear algebra, readily implemented in Python, to compute all objects in play in the consumption-smoothing model.

@@ -338,14 +338,14 @@ print('Welfare:', welfare(cs_model, c_seq))

### Experiments

In this section we decribe how a consumption sequence would optimally respond to different sequences sequences of non-financial income.
In this section we describe how a consumption sequence would optimally respond to different sequences of non-financial income.

First we create a function `plot_cs` that generate graphs for different instances of the consumption smoothing model `cs_model`.
First we create a function `plot_cs` that generates graphs for different instances of the consumption-smoothing model `cs_model`.

This will help us avoid rewriting code to plot outcomes for different non-financial income sequences.

```{code-cell} ipython3
def plot_cs(model, # consumption smoothing model
def plot_cs(model, # consumption-smoothing model
a0, # initial financial wealth
y_seq # non-financial income process
):
Expand All @@ -368,7 +368,7 @@ def plot_cs(model, # consumption smoothing model
plt.show()
```

In the experiments below, please study how consumption and financial asset sequences vary accross different sequences for non-financial income.
In the experiments below, please study how consumption and financial asset sequences vary across different sequences for non-financial income.

#### Experiment 1: one-time gain/loss

Expand Down Expand Up @@ -602,7 +602,7 @@ First, we define the welfare with respect to $\xi_1$ and $\phi$
def welfare_rel(ξ1, ϕ):
"""
Compute welfare of variation sequence
for given ϕ, ξ1 with a consumption smoothing model
for given ϕ, ξ1 with a consumption-smoothing model
"""

cvar_seq = compute_variation(cs_model, ξ1=ξ1,
Expand Down Expand Up @@ -661,13 +661,13 @@ QuantEcon lecture {doc}`geometric series <geom_series>`.
In particular, it **lowers** the government expenditure multiplier relative to one implied by
the original Keynesian consumption function presented in {doc}`geometric series <geom_series>`.

Friedman's work opened the door to an enlighening literature on the aggregate consumption function and associated government expenditure multipliers that
Friedman's work opened the door to an enlightening literature on the aggregate consumption function and associated government expenditure multipliers that
remains active today.


## Appendix: solving difference equations with linear algebra

In the preceding sections we have used linear algebra to solve a consumption smoothing model.
In the preceding sections we have used linear algebra to solve a consumption-smoothing model.

The same tools from linear algebra -- matrix multiplication and matrix inversion -- can be used to study many other dynamic models.

Expand Down Expand Up @@ -749,7 +749,7 @@ is the inverse of $A$ and check that $A A^{-1} = I$

```

### Second order difference equation
### Second-order difference equation

A second-order linear difference equation for $\{y_t\}_{t=0}^T$ is

@@ -783,6 +783,6 @@ Multiplying both sides by inverse of the matrix on the left again provides the
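
For concreteness, here is a minimal sketch of the matrix approach for a second-order equation. It assumes the equation takes the form $y_t = \alpha_0 + \alpha_1 y_{t-1} + \alpha_2 y_{t-2}$ for $t = 1, \ldots, T$; the coefficients and initial conditions below are illustrative assumptions.

```{code-cell} ipython3
import numpy as np

# Assumed form: y_t = α0 + α1 y_{t-1} + α2 y_{t-2}, with y_0 and y_{-1} given
α0, α1, α2 = 1.0, 0.8, -0.15
T = 6
y0, y_m1 = 1.0, 0.5

A = np.identity(T)
for t in range(1, T):
    A[t, t - 1] = -α1
for t in range(2, T):
    A[t, t - 2] = -α2

b = np.full(T, α0)
b[0] += α1 * y0 + α2 * y_m1    # initial conditions enter the first two equations
b[1] += α2 * y0

y = np.linalg.solve(A, b)       # same as multiplying by the inverse of A
print(y)
```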
```{exercise}
:label: consmooth_ex2

As an exercise, we ask you to represent and solve a **third order linear difference equation**.
As an exercise, we ask you to represent and solve a **third-order linear difference equation**.
How many initial conditions must you specify?
```
4 changes: 3 additions & 1 deletion lectures/eigen_I.md
@@ -88,7 +88,8 @@ itself.
This means $A$ is an $n \times n$ matrix that maps (or "transforms") a vector
$x$ in $\mathbb{R}^n$ to a new vector $y=Ax$ also in $\mathbb{R}^n$.

Here's one example:
```{prf:example}
:label: eigen1_ex_sq

$$
\begin{bmatrix}
@@ -116,6 +117,7 @@

transforms the vector $x = \begin{bmatrix} 1 \\ 3 \end{bmatrix}$ to the vector
$y = \begin{bmatrix} 5 \\ 2 \end{bmatrix}$.
```
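
A one-line numerical check of this map; the matrix entries below are an assumption chosen to be consistent with the stated input and output vectors, not necessarily the exact matrix in the collapsed part of the diff.

```{code-cell} ipython3
import numpy as np

A = np.array([[2, 1],
              [-1, 1]])    # assumed entries; they send (1, 3) to (5, 2)
x = np.array([1, 3])

print(A @ x)               # [5 2]
```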

Let's visualize this using Python:

30 changes: 19 additions & 11 deletions lectures/eigen_II.md
Expand Up @@ -26,7 +26,7 @@ In addition to what's in Anaconda, this lecture will need the following librarie

In this lecture we will begin with the foundational concepts in spectral theory.

Then we will explore the Perron-Frobenius Theorem and connect it to applications in Markov chains and networks.
Then we will explore the Perron-Frobenius theorem and connect it to applications in Markov chains and networks.

We will use the following imports:

Expand Down Expand Up @@ -64,6 +64,9 @@ An $n \times n$ nonnegative matrix $A$ is called irreducible if $A + A^2 + A^3 +

In other words, for each $i,j$ with $1 \leq i, j \leq n$, there exists a $k \geq 0$ such that $a^{k}_{ij} > 0$.
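
A small sketch of this definition as a numerical check: sum the first $n$ powers of $A$ (including $A^0 = I$) and test whether every entry is positive, which is a standard finite version of the condition. The test matrices below are assumptions for illustration.

```{code-cell} ipython3
import numpy as np

def is_irreducible(A):
    """Test whether I + A + ... + A^(n-1) is everywhere positive."""
    n = A.shape[0]
    S = np.zeros_like(A, dtype=float)
    P = np.identity(n)
    for _ in range(n):
        S += P
        P = P @ A
    return np.all(S > 0)

A = np.array([[0.5, 0.1], [0.2, 0.2]])   # assumed example: all entries positive, hence irreducible
C = np.array([[1.0, 0.0], [0.0, 1.0]])   # assumed example: the identity is not irreducible
print(is_irreducible(A), is_irreducible(C))
```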

```{prf:example}
:label: eigen2_ex_irr

Here are some examples to illustrate this further:

$$
Expand Down Expand Up @@ -94,6 +97,7 @@ $$

$C$ is not irreducible since $C^k = C$ for all $k \geq 0$ and thus
$c^{k}_{12},c^{k}_{21} = 0$ for all $k \geq 0$.
```

### Left eigenvectors

Expand Down Expand Up @@ -159,7 +163,7 @@ This is a more common expression and where the name left eigenvectors originates
For a square nonnegative matrix $A$, the behavior of $A^k$ as $k \to \infty$ is controlled by the eigenvalue with the largest
absolute value, often called the **dominant eigenvalue**.

For any such matrix $A$, the Perron-Frobenius Theorem characterizes certain
For any such matrix $A$, the Perron-Frobenius theorem characterizes certain
properties of the dominant eigenvalue and its corresponding eigenvector.

```{prf:Theorem} Perron-Frobenius Theorem
Expand Down Expand Up @@ -188,7 +192,7 @@ Let's build our intuition for the theorem using a simple example we have seen [b

Now let's consider examples for each case.

#### Example: Irreducible matrix
#### Example: irreducible matrix

Consider the following irreducible matrix $A$:

@@ -204,7 +208,7 @@ We can compute the dominant eigenvalue and the corresponding eigenvector
eig(A)
```

Now we can see the claims of the Perron-Frobenius Theorem holds for the irreducible matrix $A$:
Now we can see the claims of the Perron-Frobenius theorem hold for the irreducible matrix $A$:

1. The dominant eigenvalue is real-valued and non-negative.
2. All other eigenvalues have absolute values less than or equal to the dominant eigenvalue.
Expand All @@ -223,6 +227,9 @@ Let $A$ be a square nonnegative matrix and let $A^k$ be the $k^{th}$ power of $A

A matrix is called **primitive** if there exists a $k \in \mathbb{N}$ such that $A^k$ is everywhere positive.

```{prf:example}
:label: eigen2_ex_prim

Recall the examples given in irreducible matrices:

$$
Expand All @@ -244,10 +251,11 @@ B^2 = \begin{bmatrix} 1 & 0 \\
$$

$B$ is irreducible but not primitive since there are always zeros in either principal diagonal or secondary diagonal.
```
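
A small numerical sketch of this definition: test whether some power $A^k$ is everywhere positive, for $k$ up to a cutoff. The cutoff and the example matrix are assumptions for illustration.

```{code-cell} ipython3
import numpy as np

def is_primitive(A, max_power=20):
    """Return True if some power A^k (k <= max_power) has all entries strictly positive."""
    P = A.copy().astype(float)
    for _ in range(max_power):
        if np.all(P > 0):
            return True
        P = P @ A
    return False

B = np.array([[0.0, 1.0],
              [1.0, 0.0]])    # assumed example: irreducible but not primitive (its powers alternate)
print(is_primitive(B))        # False
```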

We can see that if a matrix is primitive, then it is irreducible, but not vice versa.

Now let's step back to the primitive matrices part of the Perron-Frobenius Theorem
Now let's step back to the primitive matrices part of the Perron-Frobenius theorem

```{prf:Theorem} Continuation of the Perron-Frobenius Theorem
:label: con-perron-frobenius
@@ -259,7 +267,7 @@ If $A$ is primitive then,
$ r(A)^{-m} A^m$ converges to $v w^{\top}$ when $m \rightarrow \infty$. The matrix $v w^{\top}$ is called the **Perron projection** of $A$.
```
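
A sketch of this convergence claim: build $v w^{\top}$ from the dominant right and left eigenvectors, normalized so that $w^{\top} v = 1$, and compare it with $r(A)^{-m} A^m$ for a large $m$. The matrix below is an assumed everywhere-positive example, not one of the lecture's matrices.

```{code-cell} ipython3
import numpy as np
from numpy.linalg import eig, matrix_power

A = np.array([[0.5, 0.4],
              [0.3, 0.6]])                 # assumed primitive (everywhere positive) matrix

λs, V = eig(A)
λl, W = eig(A.T)
i, j = np.argmax(λs.real), np.argmax(λl.real)
r = λs[i].real                             # dominant eigenvalue r(A)
v, w = V[:, i].real, W[:, j].real
v = v / (w @ v)                            # normalize so that w^T v = 1

m = 50
print(matrix_power(A / r, m))              # r(A)^{-m} A^m
print(np.outer(v, w))                      # Perron projection v w^T
```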

#### Example 1: Primitive matrix
#### Example 1: primitive matrix

Consider the following primitive matrix $B$:

@@ -277,7 +285,7 @@ We compute the dominant eigenvalue and the corresponding eigenvector
eig(B)
```

Now let's give some examples to see if the claims of the Perron-Frobenius Theorem hold for the primitive matrix $B$:
Now let's give some examples to see if the claims of the Perron-Frobenius theorem hold for the primitive matrix $B$:

1. The dominant eigenvalue is real-valued and non-negative.
2. All other eigenvalues have absolute values strictly less than the dominant eigenvalue.
@@ -373,18 +381,18 @@ check_convergence(B)

The result shows that the matrix is not primitive as it is not everywhere positive.

These examples show how the Perron-Frobenius Theorem relates to the eigenvalues and eigenvectors of positive matrices and the convergence of the power of matrices.
These examples show how the Perron-Frobenius theorem relates to the eigenvalues and eigenvectors of positive matrices and the convergence of the power of matrices.

In fact we have already seen the theorem in action before in {ref}`the Markov chain lecture <mc1_ex_1>`.

(spec_markov)=
#### Example 2: Connection to Markov chains
#### Example 2: connection to Markov chains

We are now prepared to bridge the languages spoken in the two lectures.

A primitive matrix is both irreducible and aperiodic.

So Perron-Frobenius Theorem explains why both {ref}`Imam and Temple matrix <mc_eg3>` and [Hamilton matrix](https://en.wikipedia.org/wiki/Hamiltonian_matrix) converge to a stationary distribution, which is the Perron projection of the two matrices
So Perron-Frobenius theorem explains why both {ref}`Imam and Temple matrix <mc_eg3>` and [Hamilton matrix](https://en.wikipedia.org/wiki/Hamiltonian_matrix) converge to a stationary distribution, which is the Perron projection of the two matrices

```{code-cell} ipython3
P = np.array([[0.68, 0.12, 0.20],
Expand Down Expand Up @@ -449,7 +457,7 @@ As we have seen, the largest eigenvalue for a primitive stochastic matrix is one
This can be proven using [Gershgorin Circle Theorem](https://en.wikipedia.org/wiki/Gershgorin_circle_theorem),
but it is out of the scope of this lecture.

So by the statement (6) of Perron-Frobenius Theorem, $\lambda_i<1$ for all $i<n$, and $\lambda_n=1$ when $P$ is primitive.
So by the statement (6) of Perron-Frobenius theorem, $\lambda_i<1$ for all $i<n$, and $\lambda_n=1$ when $P$ is primitive.

Hence, after taking the Euclidean norm deviation, we obtain
