`Synthetica` is a versatile and robust tool for generating synthetic time series data. Whether you are engaged in financial modeling, IoT data simulation, or any project requiring realistic correlated or uncorrelated time series signals, `Synthetica` provides high-quality, customizable datasets. Leveraging advanced statistical techniques and machine learning algorithms, `Synthetica` produces synthetic data that closely replicates the characteristics and patterns of real-world data.
The project's latest version incorporates a wide array of models, offering an extensive toolkit for generating synthetic time series data. This version includes the following models (a brief instantiation sketch follows the list):
- `GeometricBrownianMotion`
- `AutoRegressive`
- `NARMA`
- `Heston`
- `CIR`
- `LevyStable`
- `MeanReverting`
- `Merton`
- `Poisson`
- `Seasonal`
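As a hedged sketch, each model class is assumed to accept the same `length`/`num_paths`/`seed` constructor arguments shown for `GeometricBrownianMotion` later in this README; model-specific parameters are assumed to have sensible defaults:

```python
import synthetica as sth

# Assumption: these constructors mirror the GeometricBrownianMotion
# signature shown below; model-specific parameters are left at defaults.
ar = sth.AutoRegressive(length=252, num_paths=5, seed=123)
heston = sth.Heston(length=252, num_paths=5, seed=123)
```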
However, the `SyntheticaAdvanced` version elevates these capabilities further, integrating more sophisticated, data-driven deep learning algorithms such as `TimeGAN`.
The core dependencies are:

```toml
numpy = "^1.26.4"
pandas = "^2.2.2"
scipy = "^1.13.1"
```
```sh
$ pip install python-synthetica
```
Once you have installed the package, you can start using `Synthetica` to generate synthetic time series data. Here are some initial steps to help you kickstart your exploration:
```python
>>> import synthetica as sth
```
In this example, we use the following parameters for illustration purposes:

- `length=252`: The length of the time series
- `num_paths=5`: The number of paths to generate
- `seed=123`: Reseed the `numpy` singleton `RandomState` instance for reproducibility
Initialize the model: Using the `GeometricBrownianMotion` (GBM) model, this approach initializes the model with a specified path length, number of paths, and a fixed random seed:

```python
>>> model = sth.GeometricBrownianMotion(length=252, num_paths=5, seed=123)
```
Generate random signals: The `transform` method then generates the random signals accordingly:

```python
>>> model.transform()  # Generate random signals
```
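As a hedged illustration (assuming `transform()` returns a pandas DataFrame with one column per path):

```python
>>> df = model.transform()
>>> df.shape  # assumed shape: (length, num_paths)
(252, 5)
```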
Generate correlated paths: Passing a target correlation matrix ensures that the resulting features are highly positively correlated, leveraging the Cholesky decomposition method to achieve the desired correlation structure from the supplied `matrix`:

```python
>>> model.transform(matrix)  # Produces highly positively correlated features
```
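The `matrix` argument is not defined in the snippet above; as an illustrative assumption, it can be built as a simple constant-correlation matrix with numpy:

```python
>>> import numpy as np
>>> rho = 0.9  # illustrative pairwise correlation
>>> matrix = np.full((5, 5), rho)
>>> np.fill_diagonal(matrix, 1.0)
>>> correlated = model.transform(matrix)
```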
The Cholesky transformation (or Cholesky decomposition) is a mathematical technique used to decompose a positive definite matrix into the product of a lower triangular matrix and its transpose. This is particularly useful in various fields such as numerical analysis, optimization, and financial modeling:
- Numerical Stability: The Cholesky decomposition is more numerically stable than other decomposition methods for positive definite matrices.
- Solving Linear Systems: It is used to solve linear systems of equations efficiently.
- Simulating Correlated Random Variables: In finance and statistics, it is used to generate correlated random variables from uncorrelated ones.
Given a positive definite matrix $A$, the Cholesky decomposition finds a lower triangular matrix $L$ such that:

$$A = LL^T$$

where:

- $A$ is a positive definite matrix.
- $L$ is a lower triangular matrix.
- $L^T$ is the transpose of $L$.
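A quick numerical check of this identity with plain numpy (a standalone illustration, independent of the `synthetica` API):

```python
>>> import numpy as np
>>> A = np.array([[4.0, 2.0],
...               [2.0, 3.0]])   # symmetric positive definite
>>> L = np.linalg.cholesky(A)    # lower triangular factor
>>> np.allclose(L @ L.T, A)      # verifies A = L L^T
True
```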
In the context of synthetic data generation, the Cholesky transformation can be used to apply a correlation structure to a set of uncorrelated random variables. `synthetica` uses `np.linalg.cholesky` in the background.
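The same idea in isolation, as a minimal numpy sketch (not the library's internal code): multiplying independent standard normal draws by $L^T$ imposes the target correlation structure on the columns:

```python
import numpy as np

rng = np.random.default_rng(123)
uncorrelated = rng.standard_normal((10_000, 5))   # independent N(0, 1) columns

target = np.full((5, 5), 0.9)                     # constant-correlation target
np.fill_diagonal(target, 1.0)

L = np.linalg.cholesky(target)                    # target = L @ L.T
correlated = uncorrelated @ L.T                   # columns now correlated per target

print(np.corrcoef(correlated, rowvar=False).round(2))  # close to 0.9 off-diagonal
```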
A covariance matrix is considered positive definite if it satisfies the following key properties:
- It is symmetric, meaning the matrix is equal to its transpose.
- For any non-zero vector $x$, $x^T C x > 0$, where $C$ is the covariance matrix and $x^T$ is the transpose of $x$.
- All of its eigenvalues are strictly positive.
Positive definiteness in a covariance matrix has important implications:
- It ensures the matrix is invertible, which is crucial for many statistical techniques.
- It guarantees that the matrix represents a valid probability distribution.
- It allows for unique solutions in optimization problems and ensures the stability of certain algorithms.
- It indicates that no linear combination of the variables has zero variance, meaning all variables contribute meaningful information.
A covariance matrix that is positive semi-definite (allowing for eigenvalues to be non-negative rather than strictly positive) is still valid, but may indicate linear dependencies among variables.
In practice, sample covariance matrices are often positive definite if the number of observations exceeds the number of variables and there are no perfect linear relationships among the variables.
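These properties can be verified directly with numpy; the following is a standalone sketch, not part of the `synthetica` API (a successful Cholesky factorization is itself a practical positive-definiteness test):

```python
import numpy as np

def is_positive_definite(C: np.ndarray) -> bool:
    """Return True if C is symmetric positive definite."""
    if not np.allclose(C, C.T):   # must be symmetric
        return False
    try:
        np.linalg.cholesky(C)     # succeeds only for positive definite C
        return True
    except np.linalg.LinAlgError:
        return False
```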
`synthetica` automatically finds the nearest positive-definite matrix to the input using the `nearest_positive_definite` Python function, which is directly sourced from *Computing a nearest symmetric positive semidefinite matrix* (Higham, 1988).
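A hedged sketch of the underlying idea (the library's actual `nearest_positive_definite` implementation may differ in detail): symmetrize the input, then clip its eigenvalues away from zero:

```python
import numpy as np

def nearest_pd_sketch(A: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Illustrative approximation in the spirit of Higham (1988):
    project onto the symmetric matrices, then clip eigenvalues so the
    result is (numerically) positive definite."""
    B = (A + A.T) / 2             # nearest symmetric matrix
    w, V = np.linalg.eigh(B)      # eigendecomposition of the symmetric part
    w = np.clip(w, eps, None)     # force strictly positive eigenvalues
    return V @ np.diag(w) @ V.T
```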
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Distributed under the BSD-3 License. See `LICENSE.txt` for more information.