diff --git a/docs/source/measures/long_run_mes.rst b/docs/source/measures/long_run_mes.rst
index 2cda4671..496739a3 100644
--- a/docs/source/measures/long_run_mes.rst
+++ b/docs/source/measures/long_run_mes.rst
@@ -92,9 +92,10 @@ where,
   volatility at time :math:`t`.
 
 - :math:`z_{it}, z_{mt}` follows an unknown bivariate distribution with
-  zero mean and some covariance structure.
+  zero mean and possibly some covariance structure.
 
-However, generally we can assume :math:`z_{it}, z_{mt}\sim N(0,1)`.
+However, generally we can assume :math:`z_{it}` and :math:`z_{mt}` are i.i.d.
+standard normal.
 
 .. important::
@@ -144,6 +145,14 @@ Specifically, the GJR-GARCH models the conditional variances as:
 where :math:`I^-_{it} = 1` if :math:`r_{it} < 0` and :math:`I^-_{mt} = 1` if
 :math:`r_{mt} < 0`.
 
+.. note::
+
+   `Brownlees and Engle (2017) <https://doi.org/10.1093/rfs/hhw060>`_ use a
+   zero-mean assumption, so that :math:`r` is used in the above equation
+   instead of the residual :math:`\epsilon`. They use :math:`\epsilon` to denote
+   the standardized residual, which in this note is :math:`z` to be consistent
+   with other notes of ``frds``.
+
 Dynamic correlation
 ===================
@@ -175,8 +184,8 @@ the time-varying :math:`\mathbf{R}_t` via a proxy process
    :label: eq:dcc_corr
 
    \mathbf{R}_t = \text{Corr}\begin{pmatrix}
-   \epsilon_{it} \\\\
-   \epsilon_{mt}
+   z_{it} \\\\
+   z_{mt}
    \end{pmatrix}
    = \begin{bmatrix}
    1 & \rho_{it} \\\\
@@ -217,7 +226,7 @@ The Q process is updated as follows:
 .. math::
    :label: q11
 
-   q_{it} = (1 - a - b) \overline{q}_{i} + a z^2_{1,t-1} + b q_{i,t-1}
+   q_{it} = (1 - a - b) \overline{q}_{i} + a z^2_{i,t-1} + b q_{i,t-1}
 
 .. math::
    :label: q22
@@ -234,7 +243,7 @@ The dynamic conditional correlation :math:`\rho_t` is given by:
 .. math::
    :label: rho_t
 
-   \rho_{it} = \frac{q_{imt}}{\sqrt{q_{i} q_{mt}}}
+   \rho_{it} = \frac{q_{imt}}{\sqrt{q_{it} q_{mt}}}
 
 Estimating GJR-GARCH-DCC
 ========================
@@ -319,11 +328,13 @@ assumptions.
 
 .. note::
 
-   Basically, we need innovations :math:`z_{it}` and :math:`z_{mt}` in order to forecast.
-   To do so, after estimating the GJR-GARCH-DCC model, we use the standardized residuals
-   from the market time series and the estimated conditinal correlations to compute the "innovation"
-   to the firm series *such that* its standarized residuals conforms the conditional correlations.
+   Basically, we need innovations :math:`z_{it}` and :math:`z_{mt}` for :math:`t>T` in order to forecast.
+   For the market returns' :math:`z_{mt}` in the prediction horizon, we sample the standardized residuals
+   from the past with replacement. For the firm return innovations :math:`z_{it}` in the prediction horizon,
+   we use the computed :math:`\xi_{it}`, which is *independent* of :math:`z_{mt}`. This guarantees that
+   the market and firm return innovations are i.i.d. standard normal, while the return covariances conform
+   to the DCC model.
 
 **Step 3**. Use the pseudo sample of innovations as inputs of the DCC and
 (GJR)GARCH filters, respectively. Initial conditions are the last values
@@ -341,6 +352,44 @@ up to time :math:`T`, that is
       r^s_{mT+t}
       \end{bmatrix}_{t=1,...,h} | \mathcal{F}_{T}
 
+Specifically, in each simulation :math:`s`, for the :math:`h`-step-ahead prediction,
+compute :math:`\hat{\sigma}^2_{iT+h}` and :math:`\hat{\sigma}^2_{mT+h}`,
+
+.. math::
+
+   \begin{align}
+   \hat{\sigma}^2_{iT+h} &= \omega_i + \left[\alpha_i+\gamma_i I(\xi_{ih}<0)\right] \xi_{ih}^2 + \beta_i \hat{\sigma}^2_{iT+h-1} \\\\
+   \hat{\sigma}^2_{mT+h} &= \omega_m + \left[\alpha_m+\gamma_m I(z_{mh}<0)\right] z_{mh}^2 + \beta_m \hat{\sigma}^2_{mT+h-1}
+   \end{align}
+
+Then, update the DCC quasi-correlations,
+
+.. math::
+
+   \begin{align}
+   \hat{q}_{iT+h} &= (1 - a - b) \overline{q}_{i} + a \xi^2_{i,h-1} + b \hat{q}_{i,T+h-1} \\\\
+   \hat{q}_{mT+h} &= (1 - a - b) \overline{q}_{m} + a z^2_{m,h-1} + b \hat{q}_{m,T+h-1} \\\\
+   \hat{q}_{imT+h} &= (1 - a - b) \overline{q}_{im} + a \xi_{i,h-1} z_{m,h-1} + b \hat{q}_{im,T+h-1}
+   \end{align}
+
+The dynamic conditional correlation :math:`\hat{\rho}_{iT+h}` is given by:
+
+.. math::
+
+   \hat{\rho}_{iT+h} = \frac{\hat{q}_{imT+h}}{\sqrt{\hat{q}_{iT+h} \hat{q}_{mT+h}}}
+
+This conditional correlation :math:`\hat{\rho}_{iT+h}` is then used to compute the step-:math:`h` returns
+given the conditional variances and innovations,
+
+.. math::
+
+   \begin{align}
+   \hat{r}_{iT+h} &= \mu_i + \hat{\rho}_{iT+h} \xi_{ih} \\\\
+   \hat{r}_{mT+h} &= \mu_m + \hat{\rho}_{iT+h} z_{mh}
+   \end{align}
+
+This process is then repeated for the :math:`T+2,\dots,T+h` forecasts, so that we have in this simulation
+:math:`s` a set of market and firm (log) returns, :math:`r^s_{iT+t}` and :math:`r^s_{mT+t}`, :math:`t=1,\dots,h`.
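+
+For intuition, this per-step recursion can be sketched in a few lines of NumPy.
+The snippet below is illustrative only: ``omega_i``, ``alpha_i``, ``gamma_i``,
+``beta_i`` (and their market counterparts), ``mu_i``, ``mu_m``, ``a``, ``b`` and
+the ``q_*_bar`` values stand for the fitted GJR-GARCH-DCC estimates, ``xi`` and
+``z_m`` are the resampled innovations from Step 2, and the lag indexing is
+condensed so that the step-``h`` innovation drives both the variance and the
+DCC updates.
+
+.. code-block:: python
+
+   import numpy as np
+
+   r_i, r_m = np.empty(H), np.empty(H)
+   for h in range(H):
+       # GJR-GARCH conditional variance updates, seeded with the time-T values
+       var_i = omega_i + (alpha_i + gamma_i * (xi[h] < 0)) * xi[h] ** 2 + beta_i * var_i
+       var_m = omega_m + (alpha_m + gamma_m * (z_m[h] < 0)) * z_m[h] ** 2 + beta_m * var_m
+       # DCC quasi-correlation updates
+       q_i = (1 - a - b) * q_i_bar + a * xi[h] ** 2 + b * q_i
+       q_m = (1 - a - b) * q_m_bar + a * z_m[h] ** 2 + b * q_m
+       q_im = (1 - a - b) * q_im_bar + a * xi[h] * z_m[h] + b * q_im
+       rho = q_im / np.sqrt(q_i * q_m)
+       # Simulated (log) returns for step h
+       r_i[h] = mu_i + rho * xi[h]
+       r_m[h] = mu_m + rho * z_m[h]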
 
 **Step 4**. Construct the multiperiod arithmetic firm (market) return of
 each pseudo sample,
@@ -385,6 +434,30 @@ multiperiod arithmetic returns conditional on the systemic event,
 
 .. autoclass:: frds.measures.LRMES
 
+.. autoclass:: frds.measures.LongRunMarginalExpectedShortfall
+
 **********
 Examples
-**********
\ No newline at end of file
+**********
+
+Import the built-in dataset of stock returns.
+
+>>> from frds.datasets import StockReturns
+>>> returns = StockReturns.stocks_us
+>>> returns.head()
+               GOOGL        GS       JPM     ^GSPC
+Date
+2010-01-05 -0.004404  0.017680  0.019370  0.003116
+2010-01-06 -0.025209 -0.010673  0.005494  0.000546
+2010-01-07 -0.023280  0.019569  0.019809  0.004001
+2010-01-08  0.013331 -0.018912 -0.002456  0.002882
+2010-01-11 -0.001512 -0.015776 -0.003357  0.001747
+
+Estimate LRMES using the last 600 days' observations.
+
+>>> from frds.measures import LRMES
+>>> gs = returns["GS"].to_numpy()[-600:]
+>>> sp500 = returns["^GSPC"].to_numpy()[-600:]
+>>> lrmes = LRMES(gs, sp500)
+>>> lrmes.estimate(S=10000, h=22, C=-0.1)
+-0.02442517334543748
diff --git a/setup.py b/setup.py
index 58ac2e12..189f1537 100644
--- a/setup.py
+++ b/setup.py
@@ -21,17 +21,17 @@
     language="c++",
 )
 
-# mod_dcc = Extension(
-#     "frds.algorithms.dcc.dcc_ext",
-#     include_dirs=[get_path("platinclude"), numpy.get_include()],
-#     sources=["src/frds/algorithms/dcc/dcc_module.cpp"],
-#     extra_compile_args=extra_compile_args,
-#     language="c++",
-# )
+mod_algo_utils = Extension(
+    "frds.algorithms.utils.utils_ext",
+    include_dirs=[get_path("platinclude"), numpy.get_include()],
+    sources=["src/frds/algorithms/utils/utils_module.cpp"],
+    extra_compile_args=extra_compile_args,
+    language="c++",
+)
 
 ext_modules = [
     mod_isolation_forest,
-    # mod_dcc,
+    mod_algo_utils,
 ]
 
 setup(
diff --git a/src/frds/algorithms/_garch.py b/src/frds/algorithms/_garch.py
index df5019c1..60c83699 100644
--- a/src/frds/algorithms/_garch.py
+++ b/src/frds/algorithms/_garch.py
@@ -5,6 +5,12 @@
 import numpy as np
 from scipy.optimize import minimize, OptimizeResult
 
+USE_CPP_EXTENSION = True
+try:
+    import frds.algorithms.utils.utils_ext as ext
+except ImportError:
+    USE_CPP_EXTENSION = False
+
 
 class GARCHModel:
     """:doc:`/algorithms/garch` model with constant mean and Normal noise
@@ -162,6 +168,8 @@ def compute_variance(
         Returns:
             np.ndarray: conditional variance
         """
+        if USE_CPP_EXTENSION:
+            return ext.compute_garch_variance(params, resids, backcast, var_bounds)
         omega, alpha, beta = params
         sigma2 = np.zeros_like(resids)
         sigma2[0] = omega + (alpha + beta) * backcast
@@ -252,6 +260,8 @@ def bounds_check(sigma2: float, var_bounds: np.ndarray) -> float:
         Returns:
             float: adjusted conditional variance
         """
+        if USE_CPP_EXTENSION:
+            return ext.bounds_check(sigma2, var_bounds)
         lower, upper = var_bounds[0], var_bounds[1]
         sigma2 = max(lower, sigma2)
         if sigma2 > upper:
@@ -292,6 +302,8 @@ def ewma(resids: np.ndarray, initial_value: float, lam=0.94) -> np.ndarray:
         Returns:
             np.ndarray: Array containing the conditional variance estimates.
         """
+        if USE_CPP_EXTENSION:
+            return ext.ewma(resids, initial_value, lam)
         T = len(resids)
         variance = np.empty(T)
         variance[0] = initial_value  # Set the initial value
@@ -384,6 +396,8 @@ def compute_variance(
         Returns:
             np.ndarray: conditional variance
         """
+        if USE_CPP_EXTENSION:
+            return ext.compute_gjrgarch_variance(params, resids, backcast, var_bounds)
         # fmt: off
         omega, alpha, gamma, beta = params
         sigma2 = np.zeros_like(resids)
@@ -394,6 +408,31 @@ def compute_variance(
             sigma2[t] = GJRGARCHModel.bounds_check(sigma2[t], var_bounds[t])
         return sigma2
 
+    @staticmethod
+    def forecast_variance(
+        params: Parameters,
+        resids: np.ndarray,
+        initial_variance: float,
+    ) -> np.ndarray:
+        """Forecast the variances conditional on given parameters and residuals.
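+
+        The recursion starts from ``initial_variance`` at time :math:`T` and
+        feeds the supplied residuals through the GJR-GARCH update, so element
+        ``t`` of the returned array is the variance forecast for step ``t+1``.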
+ + Args: + params (Parameters): :class:`frds.algorithms.GJRGARCHModel.Parameters` + resids (np.ndarray): residuals to use + initial_variance (float): starting value of variance forecasts + + Returns: + np.ndarray: conditional variance + """ + # fmt: off + omega, alpha, gamma, beta = params.omega, params.alpha, params.gamma, params.beta + sigma2 = np.zeros_like(resids) + sigma2[0] = initial_variance + for t in range(1, len(resids)): + sigma2[t] = omega + alpha * (resids[t - 1] ** 2) + beta * sigma2[t - 1] + sigma2[t] += gamma * resids[t - 1] ** 2 if resids[t - 1] < 0 else 0 + return sigma2[1:] + def starting_values(self, resids: np.ndarray) -> List[float]: """Finds the optimal initial values for the volatility model via a grid search. For varying target persistence and alpha values, return the diff --git a/src/frds/algorithms/_mgarch.py b/src/frds/algorithms/_mgarch.py index 6f9feba1..957185fe 100644 --- a/src/frds/algorithms/_mgarch.py +++ b/src/frds/algorithms/_mgarch.py @@ -320,11 +320,44 @@ def loglikelihood_model(self, params: np.ndarray) -> float: resids2 = self.model2.resids sigma2_1 = self.model1.sigma2 sigma2_2 = self.model2.sigma2 + # z1 and z2 are standardized residuals + z1 = resids1 / np.sqrt(sigma2_1) + z2 = resids2 / np.sqrt(sigma2_2) # The loglikelihood of the variance component (Step 1) l1 = self.model1.parameters.loglikelihood + self.model2.parameters.loglikelihood # The loglikelihood of the correlation component (Step 2) + rho = self.conditional_correlations(a, b) + + log_likelihood_terms = -0.5 * ( + - (z1**2 + z2**2) + + np.log(1 - rho ** 2) + + (z1 ** 2 + z2 ** 2 - 2 * rho * z1 * z2) / (1 - rho ** 2) + ) + l2 = np.sum(log_likelihood_terms) + + negative_loglikelihood = - (l1 + l2) + return negative_loglikelihood + + def conditional_correlations(self, a: float, b: float) -> np.ndarray: + """Computes the conditional correlations based on given a and b. 
Other parameters are taken from the fitted underlying GARCH models.
+
+        Args:
+            a (float): DCC parameter
+            b (float): DCC parameter
+
+        Returns:
+            np.ndarray: array of conditional correlations
+        """
+        self.model1.fit()  # in case it was not estimated, no performance loss
+        self.model2.fit()  # in case it was not estimated
+        resids1 = self.model1.resids
+        resids2 = self.model2.resids
+        sigma2_1 = self.model1.sigma2
+        sigma2_2 = self.model2.sigma2
+        # z1 and z2 are standardized residuals
         z1 = resids1 / np.sqrt(sigma2_1)
         z2 = resids2 / np.sqrt(sigma2_2)
@@ -335,28 +368,18 @@ def loglikelihood_model(self, params: np.ndarray) -> float:
         q12 = np.empty_like(z1)
         q22 = np.empty_like(z1)
         rho = np.zeros_like(z1)
-        # TODO: initial values for Q
         q11[0] = q_11_bar
         q22[0] = q_22_bar
         q12[0] = q_12_bar
         rho[0] = q12[0] / np.sqrt(q11[0] * q22[0])
-        # TODO: fix RuntimeWarning about invalid values
         for t in range(1, T):
             q11[t] = (1 - a - b) * q_11_bar + a * z1[t - 1] ** 2 + b * q11[t - 1]
             q22[t] = (1 - a - b) * q_22_bar + a * z2[t - 1] ** 2 + b * q22[t - 1]
             q12[t] = (1 - a - b) * q_12_bar + a * z1[t - 1] * z2[t - 1] + b * q12[t - 1]
             rho[t] = q12[t] / np.sqrt(q11[t] * q22[t])
 
-        log_likelihood_terms = -0.5 * (
-            - (z1**2 + z2**2)
-            + np.log(1 - rho ** 2)
-            + (z1 ** 2 + z2 ** 2 - 2 * rho * z1 * z2) / (1 - rho ** 2)
-        )
-        l2 = np.sum(log_likelihood_terms)
-
-        negative_loglikelihood = - (l1 + l2)
-        return negative_loglikelihood
+        return rho
 
 
 class GJRGARCHModel_DCC(GARCHModel_DCC):
diff --git a/src/frds/algorithms/dcc/__init__.py b/src/frds/algorithms/dcc/__init__.py
deleted file mode 100644
index f2da4f60..00000000
--- a/src/frds/algorithms/dcc/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .dcc_py import dcc, calc_Q, calc_Q_avg, calc_R
diff --git a/src/frds/algorithms/dcc/dcc.hpp b/src/frds/algorithms/dcc/dcc.hpp
deleted file mode 100644
index 92e93cc2..00000000
--- a/src/frds/algorithms/dcc/dcc.hpp
+++ /dev/null
@@ -1,189 +0,0 @@
-#ifndef FRDS_ALGO_DCC_H
-#define FRDS_ALGO_DCC_H
-
-#include <iostream>
-#include <vector>
-#include <algorithm>
-#include <cmath>
-#include <functional>
-#include <limits>
-#include <tuple>
-
-struct Point {
-  double x;
-  double y;
-};
-
-inline void calc_Q_avg(double Q_avg[], double firm[], double mkt[], uint T) {
-  for (int i = 0; i < 4; i++)
-    Q_avg[i] = 0;
-  for (size_t i = 0; i < T; i++) {
-    auto fret = firm[i];
-    auto mret = mkt[i];
-    Q_avg[0] += fret*fret; Q_avg[1] += fret*mret;
-    Q_avg[2] += mret*fret; Q_avg[3] += mret*mret;
-  }
-  for (int i = 0; i < 4; i++)
-    Q_avg[i] /= T;
-}
-
-inline void calc_Q(double Qs[], double firm[], double mkt[], uint T, double a, double b) {
-  double Q_avg[4];
-  calc_Q_avg(Q_avg, firm, mkt, T);
-  for (int i = 0; i < 4; i++)
-    Qs[i] = Q_avg[i];
-
-  double omega[4];
-  for (int i = 0; i < 4; i++)
-    omega[i] = (1.0 - a - b) * Q_avg[i];
-
-  for (uint i = 1; i < T; i++) {
-    double Qt_1[4] = {Qs[i*4], Qs[i*4+1], Qs[i+4+2], Qs[i*4+3]};
-    double et_1_outer[4] = {firm[i]*firm[i], firm[i]*mkt[i], mkt[i]*firm[i], mkt[i]*mkt[i]};
-    for (int j = 0; j < 4; j++)
-      Qs[(i+0)*4+j] = omega[j] + a * et_1_outer[j] + b * Qt_1[j];
-  }
-}
-
-inline void calc_R(double Rs[], double firm[], double mkt[], uint T, double a, double b) {
-  double Q[4 * (T+0)];
-  calc_Q(Q, firm, mkt, T, a, b);
-  for (uint i = 0; i < T; i++)
-  {
-    double q00 = Q[i*4], q01 = Q[i*4+1];
-    double q10 = Q[i*4+2], q11 = Q[i*4+3];
-
-    double t00 = 1.0 / sqrt(abs(q00)),
-           t11 = 1.0 / sqrt(abs(q11));
-
-    // np.dot(np.dot(tmp, q), tmp)
-    double r00 = t00*q00*t00, r01 = t00*q01*t11;
-    double r10 = t11*q10*t00, r11 = t11*q11*t11;
-
-    if (abs(r01) >= 1.) {
-      r01 = 0.9999 * (r01 > 0 ? 1 : -1);
-      r10 = r01;
-    }
-
-    Rs[i*4] = r00; Rs[i*4+1] = r01;
-    Rs[i*4+2] = r10; Rs[i*4+3] = r11;
-  }
-
-}
-
-double loss_func(double firm[], double mkt[], uint T, double a, double b) {
-  // if (a<0 | b<0 | a>1 | b>1 | a+b>1) return std::numeric_limits<double>::max();
-  double R[4 * (T+0)];  // 2*2 matrix * T
-  calc_R(R, firm, mkt, T, a, b);
-  double loss = 0.0;
-  for (uint i = 0; i < T; i++)
-  {
-    // Rt
-    double r00 = R[i*4], r01 = R[i*4+1];
-    double r10 = R[i*4+2], r11 = R[i*4+3];
-    // determinant of Rt
-    double det = r00*r11 - r01*r10;
-    if (det <= 0) return std::numeric_limits<double>::max();
-    // inverse of Rt
-    double inv00 = r11 / det, inv01 = -r01 / det;
-    double inv10 = -r10 / det, inv11 = r00 / det;
-    // et
-    double e00 = firm[i], e01 = mkt[i];
-    loss += log(det) + (e00*inv00+e01*inv10)*e00 + (e00*inv01+e01*inv11)*e01;
-  }
-  std::cout << "loss " << loss << std::endl;
-  return loss;
-}
-
-Point NelderMeadMinimize(const std::function<double(double, double)>& func, Point initialPoint, double tolerance = 1e-6f, int maxIterations = 1000) {
-  const double alpha = 1.0f;
-  const double beta = 0.2f;
-  const double gamma = 1.1f;
-
-  std::vector<Point> simplex{
-      {initialPoint.x, initialPoint.y},
-      {1.0f, 0.0f},
-      {0.0f, 1.0f},
-  };
-
-  for (int iteration = 0; iteration < maxIterations; ++iteration) {
-    // Sort the simplex by function values
-    std::sort(simplex.begin(), simplex.end(), [&](const Point& p1, const Point& p2) {
-      return func(p1.x, p1.y) < func(p2.x, p2.y);
-    });
-
-    // Calculate the centroid of the best (n-1) points
-    Point centroid{0.0f, 0.0f};
-    for (int i = 0; i < simplex.size() - 1; ++i) {
-      centroid.x += simplex[i].x;
-      centroid.y += simplex[i].y;
-    }
-    centroid.x /= (simplex.size() - 1);
-    centroid.y /= (simplex.size() - 1);
-
-    // Reflection
-    Point reflectedPoint{
-        centroid.x + alpha * (centroid.x - simplex.back().x),
-        centroid.y + alpha * (centroid.y - simplex.back().y)
-    };
-
-    if (func(reflectedPoint.x, reflectedPoint.y) < func(simplex[0].x, simplex[0].y)) {
-      // Expansion
-      Point expandedPoint{
-          centroid.x + gamma * (reflectedPoint.x - centroid.x),
-          centroid.y + gamma * (reflectedPoint.y - centroid.y)
-      };
-
-      if (func(expandedPoint.x, expandedPoint.y) < func(reflectedPoint.x, reflectedPoint.y)) {
-        simplex.back() = expandedPoint;
-      } else {
-        simplex.back() = reflectedPoint;
-      }
-    } else if (func(reflectedPoint.x, reflectedPoint.y) < func(simplex[simplex.size() - 2].x, simplex[simplex.size() - 2].y)) {
-      simplex.back() = reflectedPoint;
-    } else {
-      // Contraction
-      Point contractedPoint{
-          centroid.x + beta * (simplex.back().x - centroid.x),
-          centroid.y + beta * (simplex.back().y - centroid.y)
-      };
-
-      if (func(contractedPoint.x, contractedPoint.y) < func(simplex.back().x, simplex.back().y)) {
-        simplex.back() = contractedPoint;
-      } else {
-        // Shrink
-        for (int i = 1; i < simplex.size(); ++i) {
-          simplex[i].x = simplex[0].x + 0.5f * (simplex[i].x - simplex[0].x);
-          simplex[i].y = simplex[0].y + 0.5f * (simplex[i].y - simplex[0].y);
-        }
-      }
-    }
-
-    // Check for convergence
-    const double maxError = std::max(std::abs(func(simplex[0].x, simplex[0].y) - func(simplex.back().x, simplex.back().y)), std::max(std::abs(simplex[0].x - simplex.back().x), std::abs(simplex[0].y - simplex.back().y)));
-    if (maxError < tolerance)
-      break;
-  }
-
-  return simplex[0];
-}
-
-
-std::tuple<double, double> dcc(double firm[], double mkt[], uint T) {
-  double a = 0.5, b = 0.5;
-  Point initialPoint{a, b};  // Initial guess for x and y
-
-  auto func = [&](double a, double b) -> double {
-    return loss_func(firm, mkt, T, a, b);
-  };
-
-  Point minimum = NelderMeadMinimize(func, initialPoint, .00000001f, 1000);
-
-  // std::cout << "Optimized parameters: x = " << minimum.x << ", y = " << minimum.y << std::endl;
<< "Optimized parameters: x = " << minimum.x << ", y = " << minimum.y << std::endl; - // std::cout << "Minimum function value: " << loss_func(minimum.x, minimum.y) << std::endl; - - return std::make_tuple(minimum.x, minimum.y); -} - -#endif // FRDS_ALGO_DCC_H \ No newline at end of file diff --git a/src/frds/algorithms/dcc/dcc_module.cpp b/src/frds/algorithms/dcc/dcc_module.cpp deleted file mode 100644 index 8bfa476f..00000000 --- a/src/frds/algorithms/dcc/dcc_module.cpp +++ /dev/null @@ -1,100 +0,0 @@ -#define PY_SSIZE_T_CLEAN -#include -#ifndef DCC_MODULE -#define DCC_MODUEL -#endif -#include "dcc.hpp" - -#ifndef NPY_NO_DEPRECATED_API -#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION -#endif - -#include "numpy/arrayobject.h" - -static PyObject *dcc_wrapper(PyObject *self, PyObject *args) { - PyObject *firm_data = NULL, *mkt_data = NULL; - PyArrayObject *firm_arr = NULL, *mkt_arr = NULL, *outArr = NULL; - if (!PyArg_ParseTuple(args, "OO", &firm_data, &mkt_data)) - return NULL; - - firm_arr = (PyArrayObject *)PyArray_FROM_OTF(firm_data, NPY_DOUBLE, NPY_ARRAY_IN_ARRAY); - if (firm_arr == NULL) return NULL; - mkt_arr = (PyArrayObject *)PyArray_FROM_OTF(mkt_data, NPY_DOUBLE, NPY_ARRAY_IN_ARRAY); - if (mkt_arr == NULL) return NULL; - - uint T = PyArray_DIM(firm_arr, 1); - - double firm[T], mkt[T]; - for (size_t i = 0; i < T; i++) - { - firm[i] = *(double *)PyArray_GETPTR1(firm_arr, i); - mkt[i] = *(double *)PyArray_GETPTR1(mkt_arr, i); - } - - std::tuple ab = dcc(firm, mkt, T); - - Py_DECREF(firm_arr); - Py_DECREF(mkt_arr); - - return PyTuple_Pack(2, PyFloat_FromDouble(std::get<0>(ab)), PyFloat_FromDouble(std::get<1>(ab))); -}; - - -static PyObject *loss_func_wrapper(PyObject *self, PyObject *args) { - PyObject *firm_data = NULL, *mkt_data = NULL; - double a, b; - PyArrayObject *firm_arr = NULL, *mkt_arr = NULL, *outArr = NULL; - if (!PyArg_ParseTuple(args, "OOdd", &firm_data, &mkt_data, &a, &b)) - return NULL; - - firm_arr = (PyArrayObject *)PyArray_FROM_OTF(firm_data, NPY_DOUBLE, NPY_ARRAY_IN_ARRAY); - if (firm_arr == NULL) return NULL; - mkt_arr = (PyArrayObject *)PyArray_FROM_OTF(mkt_data, NPY_DOUBLE, NPY_ARRAY_IN_ARRAY); - if (mkt_arr == NULL) return NULL; - - uint T = PyArray_DIM(firm_arr, 1); - - double firm[T], mkt[T]; - for (size_t i = 0; i < T; i++) - { - firm[i] = *(double *)PyArray_GETPTR1(firm_arr, i); - mkt[i] = *(double *)PyArray_GETPTR1(mkt_arr, i); - } - - double loss = loss_func(firm, mkt, T, a, b); - - Py_DECREF(firm_arr); - Py_DECREF(mkt_arr); - - return PyFloat_FromDouble(loss); -}; - - - -static PyMethodDef dcc_methods[] = { - { - "dcc", // method name - dcc_wrapper, // wrapper function - METH_VARARGS, // varargs flag - "Dynamic Conditional Correlation" // docstring - }, - { - "loss_func", // method name - loss_func_wrapper, // wrapper function - METH_VARARGS, // varargs flag - "Loss function" // docstring - }, - {NULL, NULL, 0, NULL}}; - - -// module definition structure for python3 -static struct PyModuleDef dccModule = {PyModuleDef_HEAD_INIT, "dcc", - "DCC estimation", -1, - dcc_methods}; - -// module initializer for python3 -PyMODINIT_FUNC PyInit_dcc_ext() { - import_array(); - return PyModule_Create(&dccModule); -} - diff --git a/src/frds/algorithms/dcc/dcc_py.py b/src/frds/algorithms/dcc/dcc_py.py deleted file mode 100644 index 8cff11ab..00000000 --- a/src/frds/algorithms/dcc/dcc_py.py +++ /dev/null @@ -1,127 +0,0 @@ -from typing import Tuple, List -import numpy as np -from scipy.optimize import minimize - - -def calc_Q_avg(data: np.ndarray) -> np.ndarray: - 
"""Compute the unconditional correlation matrix - - Args: - data (np.ndarray): (2,T) array of firm and market volatility-adjusted returns - - Returns: - np.ndarray: (2,2) array of unconditional correlation - """ - _, T = data.shape - Q_avg = np.zeros([2, 2]) - for t in range(T): - e = data[:, t] - Q_avg += np.outer(e, e) - Q_avg /= T - return Q_avg - - -def calc_Q(data: np.ndarray, a: float, b: float) -> List[np.ndarray]: - """Calculate list of Q, the quasi correlation matrix - - Args: - data (np.ndarray): (2,T) array of firm and market volatility-adjusted returns - a (float): parameter `a` in DCC model - b (float): parameter `b` in DCC model - - Returns: - list[np.ndarray]: list of the quasi correlation matrices - """ - _, T = data.shape - Q_avg = calc_Q_avg(data) - Qs = [Q_avg] - omega = (1.0 - a - b) * Q_avg - for t in range(T): - et_1 = data[:, t] - Qt_1 = Qs[-1] # Q(t-1) - # overflow error may occur - Qt = omega + a * np.outer(et_1, et_1) + b * Qt_1 - Qs.append(Qt) - return Qs - - -def calc_R(data: np.ndarray, a: float, b: float) -> List[np.ndarray]: - """Calculate list of R, the conditional correlation matrix - - Args: - data (np.ndarray): (2,T) array of firm and market volatility-adjusted returns - a (float): parameter `a` in DCC model - b (float): parameter `b` in DCC model - - Returns: - list[np.ndarray]: list of the conditional correlation matrices - """ - Qs = calc_Q(data, a, b) - Rs = [] - for q in Qs: - tmp = 1.0 / np.sqrt(np.abs(q)) - tmp = tmp * np.eye(2) - R = np.dot(np.dot(tmp, q), tmp) - if abs(R[0, 1]) >= 1: - R[0, 1] = 0.9999 * (1 if R[0, 1] >= 0 else -1) - R[1, 0] = R[0, 1] - Rs.append(R) - return Rs - - -def dcc(data: np.ndarray) -> Tuple[float, float]: - """Estimate DCC - - Args: - data (np.ndarray): (2,T) array of firm and market volatility-adjusted returns - - Returns: - Tuple[float, float]: (a, b) for the DCC model - """ - _, T = data.shape - - _calc_R = lambda a, b: calc_R(data, a, b) - - def loss_func(ab: np.ndarray) -> float: - """Negative loglikelihood as a function of (a,b) - - Args: - ab (np.ndarray): (2,) array of (a, b) - - Returns: - float: negative log likelihood given (a,b) - """ - a, b = ab[0], ab[1] - if a < 0 or b < 0 or a > 1 or b > 1: - return np.inf - - R = _calc_R(a, b) - - loss = 0.0 - for t in range(T): - Rt = R[t] - Rt_ = np.linalg.inv(Rt) - et = data[:, t] - det = np.linalg.det(Rt) - # certain combination of (a,b) may lead to incorrect Rt - if det <= 0: - return np.inf - loss += np.log(det) + np.dot(np.dot(et, Rt_), et) - return loss - - # Solve for a, b - res = minimize( - loss_func, # -ve log likelihood as a function of (a,b) - [0.5, 0.5], # initial values for a, b - method="SLSQP", # allow for constraints below - constraints=[ - {"type": "ineq", "fun": lambda x: 1.0 - x[0] - x[1]}, # a+b<1 - {"type": "ineq", "fun": lambda x: x[0]}, # a>0 - {"type": "ineq", "fun": lambda x: x[1]}, # b>0 - ], - options={"disp": False}, - ) - - a, b = res.x[:2] # a, b - - return a, b diff --git a/src/frds/algorithms/utils/__init__.py b/src/frds/algorithms/utils/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/src/frds/algorithms/utils/garch.hpp b/src/frds/algorithms/utils/garch.hpp new file mode 100644 index 00000000..ae65ba6e --- /dev/null +++ b/src/frds/algorithms/utils/garch.hpp @@ -0,0 +1,71 @@ +#ifndef FRDS_ALGO_UTILS_GARCH +#define FRDS_ALGO_UTILS_GARCH + +#include +#include + +void ewma(const double resids[], double variance[], int T, double initial_value, + double lam = 0.94) { + // Compute the conditional variance estimates 
+  // Compute the conditional variance estimates using Exponentially Weighted
+  // Moving Average (EWMA).
+
+  variance[0] = initial_value;  // Set the initial value
+
+  // Compute the EWMA using the decay factors and squared residuals
+  for (int t = 1; t < T; ++t) {
+    double squared_resid = resids[t - 1] * resids[t - 1];
+    variance[t] = lam * variance[t - 1] + (1 - lam) * squared_resid;
+  }
+}
+
+inline double bounds_check(double sigma2, double lower, double upper) {
+  sigma2 = lower > sigma2 ? lower : sigma2;
+  if (sigma2 > upper) {
+    if (!std::isinf(sigma2)) {
+      sigma2 = upper + std::log(sigma2 / upper);
+    } else {
+      sigma2 = upper + 1000;
+    }
+  }
+  return sigma2;
+}
+
+void compute_garch_variance(double *sigma2, double *resids, double *var_bounds,
+                            int T, std::vector<double> params,
+                            double backcast) {
+  double omega = params[0];
+  double alpha = params[1];
+  double beta = params[2];
+
+  sigma2[0] = omega + (alpha + beta) * backcast;
+
+  for (int t = 1; t < T; ++t) {
+    sigma2[t] =
+        omega + alpha * std::pow(resids[t - 1], 2) + beta * sigma2[t - 1];
+    sigma2[t] =
+        bounds_check(sigma2[t], var_bounds[t * 2], var_bounds[t * 2 + 1]);
+  }
+}
+
+void compute_gjrgarch_variance(double *sigma2, double *resids,
+                               double *var_bounds, int T,
+                               std::vector<double> params, double backcast) {
+  double omega = params[0];
+  double alpha = params[1];
+  double gamma = params[2];
+  double beta = params[3];
+
+  sigma2[0] = omega + (alpha + gamma / 2 + beta) * backcast;
+
+  for (int t = 1; t < T; ++t) {
+    sigma2[t] =
+        omega + alpha * std::pow(resids[t - 1], 2) + beta * sigma2[t - 1];
+    if (resids[t - 1] < 0) {
+      sigma2[t] += gamma * std::pow(resids[t - 1], 2);
+    }
+    sigma2[t] =
+        bounds_check(sigma2[t], var_bounds[t * 2], var_bounds[t * 2 + 1]);
+  }
+}
+
+#endif  // FRDS_ALGO_UTILS_GARCH
diff --git a/src/frds/algorithms/utils/utils_module.cpp b/src/frds/algorithms/utils/utils_module.cpp
new file mode 100644
index 00000000..af87523f
--- /dev/null
+++ b/src/frds/algorithms/utils/utils_module.cpp
@@ -0,0 +1,180 @@
+
+#define PY_SSIZE_T_CLEAN
+#include <Python.h>
+#ifndef UTILS_MODULE
+#define UTILS_MODULE
+#endif
+#include "garch.hpp"
+
+#ifndef NPY_NO_DEPRECATED_API
+#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
+#endif
+
+#include "numpy/arrayobject.h"
+
+// The wrapper function to be called from Python
+static PyObject *ewma_wrapper(PyObject *self, PyObject *args) {
+  PyArrayObject *resids_in;
+  double initial_value, lam;
+
+  // Parse the input tuple
+  if (!PyArg_ParseTuple(args, "O!dd", &PyArray_Type, &resids_in, &initial_value,
+                        &lam)) {
+    return NULL;
+  }
+
+  // Construct the output array
+  int T = PyArray_DIM(resids_in, 0);
+  npy_intp dims[1] = {T};
+  PyObject *variance_out = PyArray_SimpleNew(1, dims, NPY_DOUBLE);
+  if (variance_out == NULL) {
+    return NULL;
+  }
+
+  // Call the core EWMA function
+  ewma((double *)PyArray_DATA(resids_in),
+       (double *)PyArray_DATA((PyArrayObject *)variance_out), T, initial_value,
+       lam);
+
+  return variance_out;
+}
+
+// The wrapper function to be called from Python
+static PyObject *bounds_check_wrapper(PyObject *self, PyObject *args) {
+  double sigma2;
+  PyArrayObject *var_bounds_in;
+
+  // Parse the input tuple
+  if (!PyArg_ParseTuple(args, "dO!", &sigma2, &PyArray_Type, &var_bounds_in)) {
+    return NULL;
+  }
+
+  // Extract lower and upper bounds
+  double *bounds_ptr = (double *)PyArray_DATA(var_bounds_in);
+  double lower = bounds_ptr[0];
+  double upper = bounds_ptr[1];
+
+  // Call the core bounds_check function
+  double result = bounds_check(sigma2, lower, upper);
+
+  return PyFloat_FromDouble(result);
+}
+
+// The wrapper function to be called from Python
+static PyObject *compute_garch_variance_wrapper(PyObject *self,
+                                                PyObject *args) {
+  PyObject *params_obj;
+  PyArrayObject *resids_in;
+  double backcast;
+  PyArrayObject *var_bounds_in;
+
+  // Parse the input tuple
+  if (!PyArg_ParseTuple(args, "OO!dO!", &params_obj, &PyArray_Type, &resids_in,
+                        &backcast, &PyArray_Type, &var_bounds_in)) {
+    return NULL;
+  }
+
+  // Check dimensions
+  if (PyArray_NDIM(resids_in) != 1 || PyArray_NDIM(var_bounds_in) != 2) {
+    PyErr_SetString(PyExc_ValueError, "Invalid array dimensions");
+    return NULL;
+  }
+
+  // Extract parameters
+  std::vector<double> params;
+  PyObject *iter = PyObject_GetIter(params_obj);
+  PyObject *item;
+  while ((item = PyIter_Next(iter))) {
+    params.push_back(PyFloat_AsDouble(item));
+    Py_DECREF(item);
+  }
+  Py_DECREF(iter);
+
+  // Construct the output array
+  int T = PyArray_DIM(resids_in, 0);
+  npy_intp dims[1] = {T};
+  PyObject *sigma2_out = PyArray_SimpleNew(1, dims, NPY_DOUBLE);
+  if (sigma2_out == NULL) {
+    return NULL;
+  }
+
+  // Call the core compute_garch_variance function
+  compute_garch_variance((double *)PyArray_DATA((PyArrayObject *)sigma2_out),
+                         (double *)PyArray_DATA(resids_in),
+                         (double *)PyArray_DATA(var_bounds_in), T, params,
+                         backcast);
+
+  return sigma2_out;
+}
+
+// The wrapper function to be called from Python
+static PyObject *compute_gjrgarch_variance_wrapper(PyObject *self,
+                                                   PyObject *args) {
+  PyObject *params_obj;
+  PyArrayObject *resids_in;
+  double backcast;
+  PyArrayObject *var_bounds_in;
+
+  // Parse the input tuple
+  if (!PyArg_ParseTuple(args, "OO!dO!", &params_obj, &PyArray_Type, &resids_in,
+                        &backcast, &PyArray_Type, &var_bounds_in)) {
+    return NULL;
+  }
+
+  // Check dimensions
+  if (PyArray_NDIM(resids_in) != 1 || PyArray_NDIM(var_bounds_in) != 2) {
+    PyErr_SetString(PyExc_ValueError, "Invalid array dimensions");
+    return NULL;
+  }
+
+  // Extract parameters
+  std::vector<double> params;
+  PyObject *iter = PyObject_GetIter(params_obj);
+  PyObject *item;
+  while ((item = PyIter_Next(iter))) {
+    params.push_back(PyFloat_AsDouble(item));
+    Py_DECREF(item);
+  }
+  Py_DECREF(iter);
+
+  // Construct the output array
+  int T = PyArray_DIM(resids_in, 0);
+  npy_intp dims[1] = {T};
+  PyObject *sigma2_out = PyArray_SimpleNew(1, dims, NPY_DOUBLE);
+  if (sigma2_out == NULL) {
+    return NULL;
+  }
+
+  // Call the core compute_gjrgarch_variance function
+  compute_gjrgarch_variance((double *)PyArray_DATA((PyArrayObject *)sigma2_out),
+                            (double *)PyArray_DATA(resids_in),
+                            (double *)PyArray_DATA(var_bounds_in), T, params,
+                            backcast);
+
+  return sigma2_out;
+}
+
+// Method table
+static PyMethodDef methods[] = {
+    {"ewma", ewma_wrapper, METH_VARARGS, "Calculate EWMA"},
+
+    {"bounds_check", bounds_check_wrapper, METH_VARARGS,
+     "Adjust the conditional variance based on its bounds"},
+
+    {"compute_garch_variance", compute_garch_variance_wrapper, METH_VARARGS,
+     "Computes the variances conditional on given parameters"},
+
+    {"compute_gjrgarch_variance", compute_gjrgarch_variance_wrapper,
+     METH_VARARGS, "Computes the variances conditional on given parameters"},
+
+    {NULL, NULL, 0, NULL}};
+
+// module definition structure for python3
+static struct PyModuleDef module = {PyModuleDef_HEAD_INIT, "utils", "utils", -1,
+                                    methods};
+
+// module initializer for python3
+PyMODINIT_FUNC PyInit_utils_ext() {
+  import_array();
+  return PyModule_Create(&module);
+}
diff --git a/src/frds/measures/__init__.py b/src/frds/measures/__init__.py
index 232472b4..95f9c4b2 100644
--- a/src/frds/measures/__init__.py
+++ 
b/src/frds/measures/__init__.py @@ -6,6 +6,7 @@ from ._absorption_ratio import AbsorptionRatio from ._contingent_claim_analysis import ContingentClaimAnalysis from ._distress_insurance_premium import DistressInsurancePremium +from ._long_run_mes import LongRunMarginalExpectedShortfall from ._marginal_expected_shortfall import MarginalExpectedShortfall from ._srisk import SRISK from ._systemic_expected_shortfall import SystemicExpectedShortfall @@ -13,11 +14,15 @@ from ._option_price import blsprice from ._z_score import z_score +LRMES = LongRunMarginalExpectedShortfall + __all__ = [ # classes "AbsorptionRatio", "ContingentClaimAnalysis", "DistressInsurancePremium", + "LongRunMarginalExpectedShortfall", + "LRMES", "MarginalExpectedShortfall", "SRISK", "SystemicExpectedShortfall", diff --git a/src/frds/measures/_long_run_mes.py b/src/frds/measures/_long_run_mes.py index bf684283..ebf34e82 100644 --- a/src/frds/measures/_long_run_mes.py +++ b/src/frds/measures/_long_run_mes.py @@ -1,171 +1,170 @@ +from typing import Tuple import numpy as np -from arch import arch_model -from frds.algorithms.dcc import dcc, calc_Q_avg, calc_Q, calc_R - - -def estimate( - firm_returns: np.ndarray, - market_returns: np.ndarray, - h=22, - S=10_000, - C=-0.1, - random_seed=42, -) -> np.ndarray: - """h-step-ahead LRMES forecasts conditional on a systemic event of market decline C - - Args: - firm_returns (np.ndarray): (n_days,n_firms) array of firm log returns. - market_returns (np.ndarray): (n_days,) array of market log returns. - h (int, optional): h-period-ahead prediction horizon. Defaults to 22. - S (int, optional): sample size used in simulation to generate LRMES forecasts. Defaults to 10000. - C (float, optional): market decline used to define systemic event. Defaults to -0.1, i.e. -10%. - random_seed (int, optional): random seed. Defaults to 42. - - Returns: - np.ndarray: (n_firms,) array of LRMES forecasts - - Examples: - >>> import frds.measures.long_run_mes as lrmes - >>> import yfinance as yf - >>> import numpy as np - >>> df = yf.download(tickers="SPY JPM GS", start="2017-01-01", end="2022-12-31", progress=False) - >>> df = df[['Adj Close']] - >>> df.columns = df.columns.droplevel(0) - >>> df - GS JPM SPY - Date - 2017-01-03 214.042953 72.665413 202.085266 - 2017-01-04 215.425156 72.799454 203.287506 - 2017-01-05 213.821426 72.129333 203.126007 - 2017-01-06 216.993469 72.137695 203.852783 - 2017-01-09 215.212524 72.187935 203.179840 - ... ... ... ... - 2022-12-23 343.053650 129.302628 381.454193 - 2022-12-27 339.538818 129.755707 379.949921 - 2022-12-28 338.446625 130.464844 375.227936 - 2022-12-29 340.988434 131.213409 381.982178 - 2022-12-30 340.938812 132.080154 380.975983 - [1510 rows x 3 columns] - >>> mkt_returns = np.log(df.SPY.pct_change()+1).dropna().to_numpy() - >>> firm_returns = np.array([np.log(df.JPM.pct_change()+1).dropna().to_numpy(), - ... np.log(df.GS.pct_change()+1).dropna().to_numpy()]).T - >>> lrmes.estimate(firm_returns, mkt_returns) - array([ 0.12958814, -0.09460028]) - - !!! note - `yfinance.download` may lead to different data at each run, so the results can vary. - However, given a fixed input dataset, the function will yield the same estimate conditional on the same random seed. - - Hence, the 22-day LRMES for JPM is 12.96% and -9.5% for GS. - The latter would be ignored in computing SRISK, though. 
- """ - if len(firm_returns.shape) == 1: - firm_returns = firm_returns.reshape((firm_returns.shape[0], 1)) - n_days, n_firms = firm_returns.shape - (_n_days,) = market_returns.shape - assert n_days == _n_days - - # Fit GJR-GARCH for the market returns - mkt_am = arch_model(market_returns, p=1, o=1, q=1, rescale=True) - mkt_res = mkt_am.fit(update_freq=0, disp=False) - epsilon_mkt = market_returns / mkt_res.conditional_volatility - # Forecasted volatility - mkt_vol_hat = np.sqrt( - np.squeeze( - mkt_res.forecast( - mkt_res.params, horizon=h, reindex=False - ).variance.T.to_numpy() - ) - ) # (h,1) array of volatility forecasts - - lrmes = np.zeros((n_firms,)) - for i in range(n_firms): - # Fit GJR-GARCH for each firm's returns - firm_am = arch_model(firm_returns[:, i], p=1, o=1, q=1, rescale=True) - firm_res = firm_am.fit(update_freq=0, disp=False) - epsilon_firm = firm_returns[:, i] / firm_res.conditional_volatility - # Forecasted volatility - firm_vol_hat = np.sqrt( - np.squeeze( - firm_res.forecast( - firm_res.params, horizon=h, reindex=False - ).variance.T.to_numpy() - ) - ) # (h,1) array of volatility forecasts - - # Estimate DCC for each firm-market pair - epsilon = np.array([epsilon_firm, epsilon_mkt]) - a, b = dcc(epsilon) # params in DCC model - Q_avg = calc_Q_avg(epsilon) - Q = calc_Q(epsilon, a, b) # Qt for training data - R = calc_R(epsilon, a, b) # Rt for training data - - # DCC predictions for correlations - et = epsilon[:, -1] - Q_Tplus1 = (1 - a - b) * Q_avg + a * np.outer(et, et) + b * Q[-1] - - diag_q = 1.0 / np.sqrt(np.abs(Q_Tplus1)) - diag_q = diag_q * np.eye(2) - R_Tplus1 = np.dot(np.dot(diag_q, Q_Tplus1), diag_q) - - diag_q = 1.0 / np.sqrt(np.abs(Q_avg)) - diag_q = diag_q * np.eye(2) - R_avg = np.dot(np.dot(diag_q, Q_avg), diag_q) - - Rhat = [] - for _h in range(1, h + 1): - _ab = np.power(a + b, _h - 1) - # R_Tplus_h is correlation matrix for T+h, symmetric 2*2 matrix - R_Tplus_h = (1 - _ab) * R_avg + _ab * R_Tplus1 - # In case predicted correlation is larger than 1 - if abs(R_Tplus_h[0, 1]) >= 1: - R_Tplus_h[0, 1] = 0.9999 * (1 if R_Tplus_h[0, 1] >= 0 else -1) - R_Tplus_h[1, 0] = R_Tplus_h[0, 1] - Rhat.append(R_Tplus_h) - - # Sample innovations - innov = np.zeros((n_days,)) - for t in range(n_days): - rho = R[t][0, 1] - innov[t] = (epsilon_firm[t] - epsilon_mkt[t] * rho) / np.sqrt(1 - rho**2) - sample = np.array([epsilon_mkt, innov]) # shape=(2,n_days) - - # Sample S*h pairs of standardized innovations - rng = np.random.RandomState(random_seed) - # sample.shape=(S,h,2) +from frds.algorithms import GJRGARCHModel, GJRGARCHModel_DCC + + +class LongRunMarginalExpectedShortfall: + """:doc:`/measures/long_run_mes`""" + + def __init__(self, firm_returns: np.ndarray, market_returns: np.ndarray) -> None: + """__init__ + + Args: + firm_returns (np.ndarray): ``(n_days,)`` array of the firm raw returns. + market_returns (np.ndarray): ``(n_days,)`` array of the market raw returns. + + .. note:: + + Raw returns should be used! They are automatically converted to log returns. + Do NOT use percentage returns. 
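+
+        A minimal construction sketch (the return values below are made up):
+
+        >>> import numpy as np
+        >>> from frds.measures import LongRunMarginalExpectedShortfall
+        >>> firm = np.array([0.01, -0.02, 0.005, 0.012])  # raw returns
+        >>> market = np.array([0.008, -0.015, 0.003, 0.009])
+        >>> lrmes = LongRunMarginalExpectedShortfall(firm, market)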
+        """
+        # Convert raw returns to log returns
+        self.firm_returns = np.log(1 + np.asarray(firm_returns, dtype=np.float64))
+        self.market_returns = np.log(1 + np.asarray(market_returns, dtype=np.float64))
+        assert self.firm_returns.shape == self.market_returns.shape
+        # Scale to percentage (log) returns for better (GJR)GARCH estimation
+        self.firm_returns *= 100
+        self.market_returns *= 100
+        self.firm_model = GJRGARCHModel(self.firm_returns)
+        self.market_model = GJRGARCHModel(self.market_returns)
+        self.dcc_model = GJRGARCHModel_DCC(self.firm_model, self.market_model)
+
+    def estimate(self, h=22, S=10_000, C=-0.1, random_seed=42) -> float:
+        """h-step-ahead LRMES forecast conditional on a systemic event of market decline C
+
+        Args:
+            h (int, optional): h-period-ahead prediction horizon. Defaults to 22.
+            S (int, optional): sample size used in simulation to generate LRMES forecasts. Defaults to 10000.
+            C (float, optional): market decline used to define systemic event. Defaults to -0.1, i.e. -10%.
+            random_seed (int, optional): random seed. Defaults to 42.
+
+        Returns:
+            float: the firm's LRMES forecast
+        """
+        rng = np.random.default_rng(random_seed)
+
+        # Estimate the (GJR)GARCH-DCC model
+        self.dcc_model.fit()
+
+        # Construct GJR-GARCH-DCC standardized innovations for the sample
+        # See Step 1 of the Computing LRMES section
+        firm_variances = self.firm_model.sigma2
+        firm_resids = self.firm_model.resids
+        market_variances = self.market_model.sigma2
+        market_resids = self.market_model.resids
+        # Conditional correlations
+        a, b = self.dcc_model.parameters.a, self.dcc_model.parameters.b
+        rho = self.dcc_model.conditional_correlations(a, b)
+        # Standardized residuals (resids are already demeaned)
+        z_m = market_resids / np.sqrt(market_variances)
+        z_i = firm_resids / np.sqrt(firm_variances)
+        # Firm shock orthogonal to market
+        xi_i = (z_i - rho * z_m) / np.sqrt(1 - rho**2)
+        sample = np.array([xi_i, z_m])
+
+        # Sample with replacement S*h innovations
         sample = sample.T[rng.choice(sample.shape[1], (S, h), replace=True)]
+        assert sample.shape == (S, h, 2)
+
+        firm_var = self.firm_model.sigma2[-1]
+        mkt_var = self.market_model.sigma2[-1]
+        rho = rho[-1]  # conditional correlation at time T
+        Q_bar = np.cov(z_i, z_m)
+
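+        # LRMES is the average of the simulated h-period arithmetic firm
+        # returns over only the simulations where the market's h-period
+        # arithmetic return falls below C (the systemic event of Step 4).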
-        # list of simulated firm total returns when there're systemic events
-        firm_total_return = []
+        firm_avg_return = 0.0
+        n_systemic_event = 0
         for s in range(S):
             # Each simulation
-            inv = sample[s, :, :]  # (h,2)
-            # mkt log return = mkt innovation * predicted mkt volatility
-            mktrets = np.multiply(inv[:, 0], mkt_vol_hat)
-
-            # systemic event if over the prediction horizon,
-            # the market falls by more than C
-            systemic_event = np.exp(np.sum(mktrets)) - 1 < C
-            # no need to simulate firm returns if there is no systemic event
-            if not systemic_event:
-                continue
-            # when there is a systemic event
-            firmrets = np.zeros((h,))
-            for _h in range(h):
-                mktinv, firminv = inv[_h][0], inv[_h][1]
-                # Simulated firm return at T+h
-                rho = Rhat[_h][0, 1]
-                firmret = firminv * np.sqrt(1 - rho**2) + rho * mktinv
-                firmret = firm_vol_hat[_h] * firmret
-                firmrets[_h] = firmret
-            # firm return over the horizon
-            firmret = np.exp(np.sum(firmrets)) - 1
-            firm_total_return.append(firmret)
-
-        # Store result
-        if len(firm_total_return):
-            lrmes[i] = np.mean(firm_total_return)
-        else:
-            lrmes[i] = np.nan
-
-    return lrmes
+            inv = sample[s, :, :]  # shape=(h,2)
+            assert inv.shape == (h, 2)
+            firm_return, systemic_event = self.simulation(
+                inv, C, firm_var, mkt_var, a, b, rho, Q_bar
+            )
+            firm_avg_return += firm_return
+            n_systemic_event += systemic_event
+
+        if n_systemic_event == 0:
+            return 0.0
+        return firm_avg_return / n_systemic_event
+
+    def simulation(
+        self,
+        innovation: np.ndarray,
+        C: float,
+        firm_var: float,
+        mkt_var: float,
+        a: float,
+        b: float,
+        rho: float,
+        Q_bar: np.ndarray,
+    ) -> Tuple[float, bool]:
+        """One simulation of the firm's :math:`h`-period return given the parameters.
+        This method is intended for internal use.
+
+        Args:
+            innovation (np.ndarray): ``(h,2)`` array of firm and market innovations, in that column order
+            C (float): market decline used to define systemic event, e.g. -0.1 for a 10% decline
+            firm_var (float): the firm conditional variance at time :math:`T`, used as starting value in forecast
+            mkt_var (float): the market conditional variance at time :math:`T`, used as starting value in forecast
+            a (float): DCC parameter
+            b (float): DCC parameter
+            rho (float): the conditional correlation at time :math:`T`, used as starting value in forecast
+            Q_bar (np.ndarray): ``(2,2)`` sample covariance matrix of the standardized residuals
+
+        Returns:
+            Tuple[float, bool]: tuple of the firm return and whether a systemic event occurs
+        """
+        q_i_bar = Q_bar[0, 0]
+        q_m_bar = Q_bar[1, 1]
+        q_im_bar = Q_bar[1, 0]
+        q_i, q_m, q_im = 1.0, 1.0, rho
+
+        pi = self.firm_model.parameters
+        mu_i = pi.mu
+        omega_i, alpha_i, gamma_i, beta_i = pi.omega, pi.alpha, pi.gamma, pi.beta
+
+        pm = self.market_model.parameters
+        mu_m = pm.mu
+        omega_m, alpha_m, gamma_m, beta_m = pm.omega, pm.alpha, pm.gamma, pm.beta
+
+        firm_return = np.empty(shape=len(innovation))
+        mkt_return = np.empty(shape=len(innovation))
+
+        for h in range(len(innovation)):
+            # Each iteration is a one-step-ahead forecast
+            firm_innov = innovation[h, 0]
+            firm_var = omega_i + alpha_i * firm_innov**2 + beta_i * firm_var
+            firm_var += gamma_i * firm_innov**2 if firm_innov < 0 else 0
+
+            mkt_innov = innovation[h, 1]
+            mkt_var = omega_m + alpha_m * mkt_innov**2 + beta_m * mkt_var
+            mkt_var += gamma_m * mkt_innov**2 if mkt_innov < 0 else 0
+
+            q_i = (1 - a - b) * q_i_bar + a * firm_innov**2 + b * q_i
+            q_m = (1 - a - b) * q_m_bar + a * mkt_innov**2 + b * q_m
+            q_im = (1 - a - b) * q_im_bar + a * firm_innov * mkt_innov + b * q_im
+
+            rho = q_im / np.sqrt(q_i * q_m)
+
+            mkt_return[h] = mu_m + rho * mkt_innov
+            firm_return[h] = mu_i + rho * firm_innov
+
+        # Convert back to original scale
+        mkt_return /= 100
+        firm_return /= 100
+
+        # systemic event if over the prediction horizon,
+        # the market falls by more than C
+        systemic_event = np.exp(np.sum(mkt_return)) - 1 < C
+        # no need to simulate firm returns if there is no systemic event
+        if not systemic_event:
+            return 0.0, False
+
+        # firm return over the horizon
+        firmret = np.exp(np.sum(firm_return)) - 1
+
+        return firmret, True
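
As a final sanity check on this patch, the pure-Python and C++ code paths added in
``_garch.py`` and ``utils_module.cpp`` can be compared directly. The sketch below is
illustrative only: the residuals, variance bounds, and GARCH parameters are made up,
and it assumes the ``utils_ext`` extension built successfully and that
``compute_variance`` is a static method, as its call sites in the diff suggest.

    import numpy as np

    import frds.algorithms._garch as g

    rng = np.random.default_rng(0)
    resids = rng.standard_normal(500)            # made-up residuals
    backcast = float(np.mean(resids[:75] ** 2))  # simple backcast estimate
    var_bounds = np.column_stack(                # deliberately wide (T,2) bounds
        [np.full(resids.shape, 1e-8), np.full(resids.shape, 1e8)]
    )
    params = [0.05, 0.08, 0.90]                  # omega, alpha, beta

    # Force the pure-Python implementation
    g.USE_CPP_EXTENSION = False
    py_sigma2 = g.GARCHModel.compute_variance(params, resids, backcast, var_bounds)

    # Re-enable the C++ extension (requires utils_ext to have imported)
    g.USE_CPP_EXTENSION = True
    cpp_sigma2 = g.GARCHModel.compute_variance(params, resids, backcast, var_bounds)

    print(np.allclose(py_sigma2, cpp_sigma2))  # expect True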