Are you looking to expand your understanding of mathematics and statistics? If so, you'll want to check out the GitHub repository I'm introducing.
This repository is a collection of mathematical and statistical proofs related to Machine Learning, Deep Learning, and Statistical Learning. I hope it serves as a useful resource for anyone who wants to explore the underlying principles of math and statistics, as it contains proofs and examples covering a broad range of topics, such as Maximum Likelihood Estimation, Bayesian inference, regression and splines, Markov Chain Monte Carlo, Hidden Markov Models, and more. A complete list of all the items is given below.
Whether you're a student, researcher, or enthusiast, this repository has something for everyone, and I hope you'll find it to be a valuable resource in your pursuit of knowledge.
- Proof of M-step of GMM
- Marginal density for Jointly distributed Normal
- Conditional probability density for Jointly distributed Normal
- Moment Generating function
- Maximum Likelihood Estimate for Uniform distribution and its bias
- Weighted Linear Regression
- Neural Networks and Backpropagation
- Bayesian Posterior of $\mu|x$ when $\mu \sim \mathcal{N}(\theta, \tau^2)$ and $x \sim \mathcal{N}(\mu, \sigma^2)$ (stated after this list)
- Soft Thresholding for Lasso Regression (formula below)
- Maximum Likelihood Estimate for Normal distribution and its bias
- Rank of Product of matrices
- Updates for k-means algorithm
- Expressions for cubic spline
- Forward-Backward algorithm for HMM
- Viterbi algorithm for HMM
- PCA as minimising MSE between projections and original points
- Bootstrapping example
- Optimal $\alpha_t$ for AdaBoost (formula below)
- Importance Sampling proof and example
- Box-Muller method (sketch below)
- Bayesian inference and Monte Carlo
- Metropolis-Hastings code and example (sketch below)
- Bayesian inference of posterior with conditional independence
- Exact Sampling from Gaussian Random Field
- KL expansion for sampling from Gaussian Random Field
- MCMC examples: Metropolis-Hastings, Langevin Dynamics, Hamiltonian Monte Carlo
- KL divergence is non-negative (one-line proof below)
- Evidence Lower Bound and KL divergence
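A few of the results above can be stated compactly for quick reference. The Normal-Normal posterior is the standard conjugate result, written here for a single observation $x$ (the repository's derivation may treat the more general case):

$$\mu \mid x \sim \mathcal{N}\left(\frac{\tau^2 x + \sigma^2 \theta}{\tau^2 + \sigma^2},\ \frac{\sigma^2 \tau^2}{\sigma^2 + \tau^2}\right)$$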
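The soft-thresholding operator that appears in the Lasso derivation is the usual one, stated here only as a reminder:

$$S_\lambda(z) = \operatorname{sign}(z)\,\max\left(|z| - \lambda,\ 0\right)$$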
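The optimal AdaBoost weight has the familiar closed form in terms of the weighted error $\epsilon_t$ of the round-$t$ weak learner:

$$\alpha_t = \frac{1}{2}\ln\frac{1 - \epsilon_t}{\epsilon_t}$$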
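To give a flavour of the sampling material, here is a minimal Python sketch of the Box-Muller transform. It is illustrative only and assumes NumPy; it is not the code from the repository's notebook:

```python
import numpy as np

def box_muller(n, rng=None):
    """Draw n standard-normal samples from uniforms via Box-Muller."""
    rng = rng or np.random.default_rng()
    u1 = rng.uniform(size=n)
    u2 = rng.uniform(size=n)
    # 1 - u1 lies in (0, 1], so the log is finite.
    r = np.sqrt(-2.0 * np.log(1.0 - u1))  # radius
    return r * np.cos(2.0 * np.pi * u2)   # project onto a random angle
```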
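In the same spirit, a minimal random-walk Metropolis-Hastings sampler in one dimension looks like the following. The symmetric Gaussian proposal and the toy target are assumptions for this sketch, not necessarily what the repository's example uses:

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_steps, step=1.0, rng=None):
    """Random-walk Metropolis-Hastings for a 1-D unnormalised density."""
    rng = rng or np.random.default_rng()
    samples = np.empty(n_steps)
    x, logp = x0, log_target(x0)
    for i in range(n_steps):
        x_prop = x + step * rng.standard_normal()  # symmetric proposal
        logp_prop = log_target(x_prop)
        # Symmetric proposal, so the acceptance ratio reduces to the
        # ratio of target densities.
        if np.log(rng.uniform()) < logp_prop - logp:
            x, logp = x_prop, logp_prop
        samples[i] = x
    return samples

# Example: sample from a standard normal target (log density up to a constant).
draws = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0, n_steps=5000)
```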
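Finally, the non-negativity of KL divergence follows in one line from Jensen's inequality applied to the convex function $-\log$:

$$D_{\mathrm{KL}}(p \,\|\, q) = \mathbb{E}_p\left[-\log\frac{q(x)}{p(x)}\right] \ge -\log \mathbb{E}_p\left[\frac{q(x)}{p(x)}\right] = -\log \int q(x)\,dx = 0$$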
Note: Much of the above content comes from deliverables for various courses I took at Georgia Tech, for example:
- ISYE 6416: Computational Statistics
- CSE 8803: Introduction to Uncertainty Quantification
- CS 7641: Machine Learning
Please cite the work if you find it useful :)
```bibtex
@misc{ashish_dhiman_2023_7905371,
  author    = {Ashish Dhiman},
  title     = {ashish1610dhiman/math\_proofs: v1.0.0},
  month     = may,
  year      = 2023,
  publisher = {Zenodo},
  version   = {v1},
  doi       = {10.5281/zenodo.7905371},
  url       = {https://doi.org/10.5281/zenodo.7905371}
}
```