
Improve documentation #110

Merged
merged 13 commits on May 8, 2024
2 changes: 1 addition & 1 deletion README.md
@@ -66,7 +66,7 @@

**Russell** (Rust Scientific Library) assists in developing high-performance computations involving linear algebra, sparse linear systems, differential equations, statistics, and continuum mechanics using the Rust programming language. The applications built with Russell revolve around the computational mechanics discipline; however, since Russell deals with fundamental mathematics and numerics, it is also helpful for other disciplines.


Russell aims to deliver efficient, reliable, and easy-to-maintain code. Thus, Russell implements several unit and integration tests and requires test coverage to be over 95%. For the sake of code maintenance, Russell avoids overcomplicated Rust constructions. Nonetheless, Russell considers a good range of Rust concepts, such as generics and traits, and convenient/powerful constructs, such as enums, options, and results. Another goal of Russell is to publish examples of all computations in the documentation to assist the user/developer.

Available libraries:

3,128 changes: 3,128 additions & 0 deletions russell_lab/data/figures/vector-matrix.svg
23 changes: 13 additions & 10 deletions russell_lab/examples/algo_lorene_1d_pde_spectral_collocation.rs
@@ -41,11 +41,11 @@ const PATH_KEY: &str = "/tmp/russell_lab/pde_1d_lorene_spectral_collocation";
 /// * Gourgoulhon E (2005), An introduction to polynomial interpolation,
 ///   School on spectral methods: Application to General Relativity and Field Theory
 ///   Meudon, 14-18 November 2005
+#[rustfmt::skip]
 fn main() -> Result<(), StrError> {
     // interpolant
     let nn = 16;
-    let params = InterpParams::new();
-    let mut interp = InterpLagrange::new(nn, Some(params))?;
+    let mut interp = InterpLagrange::new(nn, None)?;
 
     // D1 and D2 matrices
     interp.calc_dd1_matrix();
@@ -54,32 +54,35 @@ fn main() -> Result<(), StrError> {
     let dd2 = interp.get_dd2()?;
 
     // source term (right-hand side)
+    const C: f64 = -4.0 * NAPIER / (1.0 + NAPIER * NAPIER);
     let xx = interp.get_points();
     let npoint = xx.dim();
-    let mut b = Vector::initialized(npoint, |i| f64::exp(xx[i]) + CC);
-    b[0] = 0.0;
-    b[nn] = 0.0;
+    let mut b = Vector::initialized(
+        npoint, |i| f64::exp(xx[i]) + C,
+    );
+    b[0] = 0.0; // BCs
+    b[nn] = 0.0; // BCs
 
     // discrete differential operator
     let mut aa = Matrix::new(npoint, npoint);
     for i in 1..nn {
         for j in 1..nn {
             let o = if i == j { 1.0 } else { 0.0 };
-            aa.set(i, j, dd2.get(i, j) - 4.0 * dd1.get(i, j) + 4.0 * o);
+            aa.set(i, j, dd2.get(i, j)
+                - 4.0 * dd1.get(i, j) + 4.0 * o);
         }
     }
-    aa.set(0, 0, 1.0);
-    aa.set(nn, nn, 1.0);
+    aa.set(0, 0, 1.0); // BCs
+    aa.set(nn, nn, 1.0); // BCs
 
     // linear system
     solve_lin_sys(&mut b, &mut aa)?;
     let uu = &b;
 
     // analytical solution
-    const CC: f64 = -4.0 * NAPIER / (1.0 + NAPIER * NAPIER);
     let sh1 = f64::sinh(1.0);
     let sh2 = f64::sinh(2.0);
-    let analytical = |x| f64::exp(x) - f64::exp(2.0 * x) * sh1 / sh2 + CC / 4.0;
+    let analytical = |x| f64::exp(x) - f64::exp(2.0 * x) * sh1 / sh2 + C / 4.0;
 
     // error at nodes
     let uu_ana = xx.get_mapped(analytical);
182 changes: 182 additions & 0 deletions russell_lab/examples/algo_lorene_1d_pde_spectral_errors.rs
@@ -0,0 +1,182 @@
use plotpy::{Curve, Plot};
use russell_lab::algo::{InterpGrid, InterpLagrange, InterpParams};
use russell_lab::math::NAPIER;
use russell_lab::{solve_lin_sys, Matrix, NoArgs, StrError, Vector};

const PATH_KEY: &str = "/tmp/russell_lab/pde_1d_lorene_spectral_errors";

/// Runs the simulation
///
/// This example corresponds to LORENE's example on page 25 of the Reference.
///
/// PDE:
///
/// ```text
/// d²u     du          x
/// ——— - 4 —— + 4 u = e  + C
/// dx²     dx
///
///      -4 e
/// C = ——————
///     1 + e²
///
/// x ∈ [-1, 1]
/// ```
///
/// Boundary conditions:
///
/// ```text
/// u(-1) = 0 and u(1) = 0
/// ```
///
/// Solution:
///
/// ```text
///         x   sinh(1)  2x   C
/// u(x) = e  - ——————— e   + —
///             sinh(2)       4
/// ```
///
/// # Input
///
/// * `nn` -- polynomial degree `N`
/// * `grid_type` -- the type of grid
/// * `do_plot` -- generate plot
/// * `calc_error` -- calculate the interpolation error
///
/// # Output
///
/// * `err_f` -- the interpolation error
///
/// # Reference
///
/// * Gourgoulhon E (2005), An introduction to polynomial interpolation,
///   School on spectral methods: Application to General Relativity and Field Theory
///   Meudon, 14-18 November 2005
fn run(nn: usize, grid_type: InterpGrid, do_plot: bool, calc_error: bool) -> Result<f64, StrError> {
    // interpolant
    let mut params = InterpParams::new();
    params.grid_type = grid_type;
    let mut interp = InterpLagrange::new(nn, Some(params))?;

    // D1 and D2 matrices
    interp.calc_dd1_matrix();
    interp.calc_dd2_matrix();
    let dd1 = interp.get_dd1()?;
    let dd2 = interp.get_dd2()?;

    // source term (right-hand side)
    let xx = interp.get_points();
    let npoint = xx.dim();
    let mut b = Vector::initialized(npoint, |i| f64::exp(xx[i]) + CC);
    b[0] = 0.0;
    b[nn] = 0.0;

    // discrete differential operator
    let mut aa = Matrix::new(npoint, npoint);
    for i in 1..nn {
        for j in 1..nn {
            let o = if i == j { 1.0 } else { 0.0 };
            aa.set(i, j, dd2.get(i, j) - 4.0 * dd1.get(i, j) + 4.0 * o);
        }
    }
    aa.set(0, 0, 1.0);
    aa.set(nn, nn, 1.0);

    // linear system
    solve_lin_sys(&mut b, &mut aa)?;
    let uu = &b;

    // analytical solution
    const CC: f64 = -4.0 * NAPIER / (1.0 + NAPIER * NAPIER);
    let sh1 = f64::sinh(1.0);
    let sh2 = f64::sinh(2.0);
    let analytical = |x, _: &mut NoArgs| Ok(f64::exp(x) - f64::exp(2.0 * x) * sh1 / sh2 + CC / 4.0);

    // plot
    let aa = &mut 0;
    if do_plot {
        let mut curve_num1 = Curve::new();
        let mut curve_num2 = Curve::new();
        let mut curve_ana = Curve::new();
        let xx_plt = Vector::linspace(-1.0, 1.0, 201)?;
        let yy_num = xx_plt.get_mapped(|x| interp.eval(x, uu).unwrap());
        let yy_ana = xx_plt.get_mapped(|x| analytical(x, aa).unwrap());
        curve_ana
            .set_label("analytical")
            .draw(xx_plt.as_data(), yy_ana.as_data());
        curve_num1
            .set_label("numerical")
            .draw(xx_plt.as_data(), yy_num.as_data());
        curve_num2
            .set_line_style("None")
            .set_marker_style("o")
            .set_marker_void(true)
            .draw(xx.as_data(), uu.as_data());
        let mut plot = Plot::new();
        plot.add(&curve_ana)
            .add(&curve_num1)
            .add(&curve_num2)
            .legend()
            .grid_and_labels("$x$", "$u(x)$")
            .set_title(format!("N = {}", nn).as_str())
            .set_figure_size_points(500.0, 300.0)
            .save(format!("{}.svg", PATH_KEY).as_str())?;
    }

    // done
    let error = if calc_error {
        interp.estimate_max_error(aa, analytical)?
    } else {
        0.0
    };
    Ok(error)
}

fn main() -> Result<(), StrError> {
    // compare with LORENE
    run(4, InterpGrid::ChebyshevGaussLobatto, true, false)?;

    // grid types
    let grid_types = [
        InterpGrid::Uniform,
        InterpGrid::ChebyshevGauss,
        InterpGrid::ChebyshevGaussLobatto,
    ];

    // error analysis
    println!("\n... error analysis ...");
    let mut curve_f = Curve::new();
    for grid_type in &grid_types {
        let gt = format!("{:?}", grid_type);
        curve_f.set_label(&gt);
        match grid_type {
            InterpGrid::Uniform => {
                curve_f.set_line_style(":");
            }
            InterpGrid::ChebyshevGauss => {
                curve_f.set_line_style("--");
            }
            InterpGrid::ChebyshevGaussLobatto => {
                curve_f.set_line_style("-");
            }
        };
        let mut nn_values = Vec::new();
        let mut ef_values = Vec::new();
        for nn in (4..40).step_by(2) {
            let err_f = run(nn, *grid_type, false, true)?;
            nn_values.push(nn as f64);
            ef_values.push(err_f);
        }
        curve_f.draw(&nn_values, &ef_values);
        let mut plot = Plot::new();
        plot.set_log_y(true)
            .add(&curve_f)
            .legend()
            .grid_and_labels("$N$", "$error$")
            .set_figure_size_points(500.0, 300.0)
            .save(format!("{}_errors.svg", PATH_KEY).as_str())?;
    }
    println!("... done ...");
    Ok(())
}
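As a quick sanity check on the closed-form solution quoted in the doc comment above, the following standalone snippet (an editorial sketch in plain Rust, not part of the PR) verifies the boundary conditions and the ODE residual u'' - 4u' + 4u = e^x + C numerically:

```rust
fn main() {
    let e = std::f64::consts::E;
    let c = -4.0 * e / (1.0 + e * e); // the constant C from the doc comment
    let u = |x: f64| x.exp() - (2.0 * x).exp() * 1.0_f64.sinh() / 2.0_f64.sinh() + c / 4.0;

    // boundary conditions: u(-1) = u(1) = 0
    assert!(u(-1.0).abs() < 1e-12);
    assert!(u(1.0).abs() < 1e-12);

    // ODE residual u'' - 4 u' + 4 u - (e^x + C), using central differences
    let h = 1e-4;
    for &x in &[-0.5, 0.0, 0.7] {
        let d1 = (u(x + h) - u(x - h)) / (2.0 * h);
        let d2 = (u(x + h) - 2.0 * u(x) + u(x - h)) / (h * h);
        assert!((d2 - 4.0 * d1 + 4.0 * u(x) - (x.exp() + c)).abs() < 1e-6);
    }
    println!("analytical solution verified");
}
```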
53 changes: 50 additions & 3 deletions russell_lab/src/lib.rs
@@ -6,7 +6,7 @@
//!
//! # Introduction
//!
//! This crate implements specialized mathematical functions (e.g., Bessel, Chebyshev, Erf, Gamma, Heaviside, Logistic, ...) and functions to perform linear algebra computations (e.g., Matrix, Vector, Matrix-Vector, Eigen-decomposition, SVD, Inverse, ...). This crate also implements a set of helpful function for comparing floating-point numbers, measuring computer time, reading table-formatted data, and more.
//! This crate extends Rust with scientific computing capabilities, such as special mathematical functions (e.g., Bessel, beta, elliptic integral, erf, erfc, gamma), auxiliary mathematical functions (e.g., ramp, Heaviside, boxcar, logistic), numeric constants (e.g., SQRT_PI, EULER, NAPIER), numerical differentiation (e.g., to check analytical derivatives), functions to validate calculations, code performance measurement, least-square fitting, reading/writing table-formatted data, generation of grid coordinates for plotting, interpolation, and others. This crate also implements several functions to perform linear algebra computations (e.g., Matrix, Vector, Matrix-Vector, Eigen-decomposition, SVD, Inverse, and more).
//!
//! The code shall be implemented in *native Rust* code as much as possible. However, thin interfaces ("wrappers") are implemented for some of the best tools available in numerical mathematics, including [OpenBLAS](https://github.com/OpenMathLib/OpenBLAS) and [Intel MKL](https://www.intel.com/content/www/us/en/docs/onemkl/developer-reference-c/2023-2/overview.html).
//!
@@ -24,6 +24,45 @@
//!
//! For linear algebra, the main structures are [NumVector] and [NumMatrix], that are generic Vector and Matrix structures. The Matrix data is stored as **column-major**. The [Vector] and [Matrix] are `f64` and `Complex64` aliases of `NumVector` and `NumMatrix`, respectively.
//!
//! The figure below illustrates the base structures (struct) for linear algebra computations. The vector and matrix structures are generic but require the type parameter to implement a set of traits. The essential trait is the `Num` trait provided by the `num-traits` crate. This trait allows vectors and matrices in Russell to hold any numerical type. The `Copy` trait enables cloning vectors and matrices directly; however, for efficiency, the BLAS functions can be called instead. `DeserializeOwned` and `Serialize` allow vectors and matrices to be serialized/deserialized, a feature that propagates to all other structures depending on `NumVector` and `NumMatrix`. This feature allows, for instance, writing and reading JSON files with simulation results. `NumCast` is required by `NumVector` because the `linspace` member function needs to cast the number of grid points to the final type (e.g., `f64`). `NumMatrix` requires the `AddAssign` and `MulAssign` traits to implement the convenience member functions named `add` and `mul`.
//!
//! ![Vector-Matrix](https://raw.githubusercontent.com/cpmech/russell/main/russell_lab/data/figures/vector-matrix.svg)
//!
//! `NumVector` has a single data member named `data` (a `Vec<T>`) holding all elements. Thus, `NumVector` wraps the native Rust vector struct. `NumVector` implements various methods, including functions for constructing and transforming vectors. One helpful method is `linspace`, which generates a sequence of equally spaced numbers. `NumVector` also implements the `Index` trait, allowing access to elements using the `[i]` notation, where `i` is the index.
//!
//! `NumMatrix` stores all entries in the *column-major* representation. The main reason for using the column-major representation is to make the code work directly with Fortran routines, such as the BLAS/LAPACK functions. It is worth noting that BLAS/LAPACK have functions that accept row-major matrices. Nonetheless, these functions add an overhead due to temporary memory allocation and copies, including transposing matrices.
//!
//! Thus, column-major data storage is essential. However, this requirement has an undesirable side effect: it is no longer possible to use the `Index` trait to access matrix elements using a notation such as `a[i][j]`, as in other languages. Instead, `russell_lab` implements the member functions `get(i, j)` and `set(i, j, value)` to access and update matrix elements.
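To make the column-major layout concrete, here is a minimal standalone sketch (editorial illustration in plain Rust, independent of `russell_lab`; the `get` helper is hypothetical) of how entry `(i, j)` of an m-by-n matrix maps to the flat index `i + j * m`:

```rust
// Column-major storage: entry (i, j) of an m-by-n matrix lives at
// flat index i + j * m (illustrative helper, not russell_lab's code).
fn get(data: &[f64], m: usize, i: usize, j: usize) -> f64 {
    data[i + j * m]
}

fn main() {
    // the 2x3 matrix [[1, 2, 3], [4, 5, 6]] stored column by column
    let data = [1.0, 4.0, 2.0, 5.0, 3.0, 6.0];
    assert_eq!(get(&data, 2, 0, 2), 3.0); // row 0, col 2
    assert_eq!(get(&data, 2, 1, 1), 5.0); // row 1, col 1
    println!("column-major indexing ok");
}
```

This is the same ordering visible in the `as_data()` assertion of the doc example below, where a 3-by-3 matrix is stored as `[1, 4, 7, 2, 5, 8, 3, 6, 9]`.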
//!
//! `NumMatrix` also implements the `AsArray2D` trait, allowing matrices to be constructed from fixed-size nested lists of elements (stack-allocated), growable nested arrays of type `Vec<Vec<T>>` (heap-allocated), and slices that are views into an array. For example, matrices can be created as follows:
//!
//! ```
//! use russell_lab::{NumMatrix, StrError};
//!
//! fn main() -> Result<(), StrError> {
//!     let a = NumMatrix::<i32>::from(&[
//!         [1, 2, 3], //
//!         [4, 5, 6], //
//!         [7, 8, 9], //
//!     ]);
//!     println!("{}", a);
//!     assert_eq!(
//!         format!("{}", a),
//!         "┌       ┐\n\
//!          │ 1 2 3 │\n\
//!          │ 4 5 6 │\n\
//!          │ 7 8 9 │\n\
//!          └       ┘"
//!     );
//!     assert_eq!(a.as_data(), &[1, 4, 7, 2, 5, 8, 3, 6, 9]);
//!     Ok(())
//! }
//! ```
//!
//! Finally, `NumVector` and `NumMatrix` implement the `Display` trait, allowing them to be pretty printed.
//!
//! ### Functionality
//!
//! The linear algebra functions currently handle `(f64, i32)` pairs, i.e., they access the `(double, int)` C functions; `(Complex64, i32)` pairs are also considered.
//!
//! There are many functions for linear algebra, such as (for Real and Complex types):
@@ -32,8 +71,16 @@
//! * Matrix addition ([mat_add()]), multiplication ([mat_mat_mul()], [mat_t_mat_mul()]), copy ([mat_copy()]), singular-value decomposition ([mat_svd()]), eigenvalues ([mat_eigen()], [mat_eigen_sym()]), pseudo-inverse ([mat_pseudo_inverse()]), inverse ([mat_inverse()]), norms ([mat_norm()]), and more
//! * Matrix-vector multiplication ([mat_vec_mul()])
//! * Solution of dense linear systems with symmetric ([mat_cholesky()]) or non-symmetric ([solve_lin_sys()]) coefficient matrices
//! * Reading and writing files ([read_table()]), linspace ([NumVector::linspace()]), grid generators ([generate2d()], [generate3d()]), [Stopwatch], and more
//! * Checking results, comparing floating point numbers, and verifying the correctness of derivatives; see [crate::check]
//!
//! The `russell_lab` functions are higher-level than the BLAS/LAPACK counterparts, thus losing some of the generality of BLAS/LAPACK. Each BLAS/LAPACK function wrapped by `russell_lab` is carefully documented and thoroughly tested.
//!
//! `russell_lab` implements functions organized in the `vector`, `matvec`, and `matrix` directories, corresponding to BLAS Levels 1, 2, and 3, respectively.
//!
//! All vector functions are prefixed with `vec_` and `complex_vec_`, whereas all matrix functions are prefixed with `mat_` and `complex_mat_`. The `matvec` functions have varied names, albeit descriptive.
//!
//! `russell_lab` implements several other interfaces to BLAS/LAPACK. However, some functions are complemented with native Rust code to provide a higher-level interface or improve the functionality. For instance, after calling the DGEEV function, `mat_eigen` post-processes the results via the `dgeev_data` function to extract them from LAPACK's compact representation, providing a more convenient interface for the user.
//!
//! An example of added functionality is the `mat_pseudo_inverse` function, which computes the pseudo-inverse of a rectangular matrix using the singular-value decomposition. This function is based on the singular-value decomposition routine provided by LAPACK.
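The SVD-based construction mentioned above can be sketched as follows (a generic textbook illustration of the A⁺ = V Σ⁺ Uᵀ idea, not `russell_lab`'s actual implementation; the function name and tolerance are assumptions): Σ⁺ is obtained by inverting the singular values above a tolerance and zeroing the rest.

```rust
// Generic sketch of the pseudo-inverse idea: given the SVD A = U Σ Vᵀ,
// the pseudo-inverse is A⁺ = V Σ⁺ Uᵀ, where Σ⁺ inverts the singular
// values above a tolerance and zeroes the rest (illustrative only).
fn pseudo_invert_singular_values(sigma: &[f64], tol: f64) -> Vec<f64> {
    sigma
        .iter()
        .map(|&s| if s > tol { 1.0 / s } else { 0.0 })
        .collect()
}

fn main() {
    // the tiny singular value 1e-15 is treated as zero (rank deficiency)
    let sp = pseudo_invert_singular_values(&[2.0, 1.0, 1e-15], 1e-10);
    assert_eq!(sp, vec![0.5, 1.0, 0.0]);
    println!("{:?}", sp);
}
```

The tolerance guards against amplifying round-off noise: inverting a near-zero singular value would blow up the solution, so rank-deficient directions are dropped instead.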
//!
//! ## Complex numbers
//!
10 changes: 8 additions & 2 deletions russell_ode/README.md
@@ -39,9 +39,15 @@ _This crate is part of [Russell - Rust Scientific Library](https://github.com/cp
 
 ## Introduction
 
-This library implements (in pure Rust) solvers to ordinary differential equations (ODEs) and differential algebraic equations (DAEs). Specifically, this library implements several explicit Runge-Kutta methods (e.g., Dormand-Prince formulae) and two implicit Runge-Kutta methods, namely the Backward Euler and the Radau IIA of fifth-order (aka Radau5). The Radau5 solver is able to solver DAEs of Index-1, by accepting the so-called *mass matrix*.
+This library implements (in native Rust) solvers for ordinary differential equations (ODEs) and differential algebraic equations (DAEs). Specifically, it implements several explicit Runge-Kutta methods (e.g., Dormand-Prince formulae) and two implicit Runge-Kutta methods, namely the Backward Euler and the fifth-order Radau IIA (aka Radau5). The Radau5 solver can solve DAEs of Index-1 by accepting the so-called mass matrix.
 
-The code in this library is based on Hairer-Nørsett-Wanner books and respective Fortran codes (see references [1] and [2]). The code for Dormand-Prince 5(4) and Dormand-Prince 8(5,3) are fairly different from the Fortran counterparts. The code for Radau5 follows closely reference [2]; however some small differences are considered. Despite the coding differences, the numeric results match the Fortran results quite well.
+The code in this library is based on the Hairer-Nørsett-Wanner books and the respective Fortran codes (see references [1] and [2]). However, the implementations of Dormand-Prince 5(4) and Dormand-Prince 8(5,3) are different from the Fortran counterparts. The code for Radau5 follows reference [2] closely; however, some minor differences are considered. Despite the coding differences, the numeric results match the Fortran results quite well.
+
+The recommended methods are:
+
+* `DoPri5` for ODE systems and non-stiff problems using moderate tolerances
+* `DoPri8` for ODE systems and non-stiff problems using strict tolerances
+* `Radau5` for ODE and DAE systems, possibly stiff, with moderate to strict tolerances
 
 The ODE/DAE system can be easily defined using the System data structure; [see the examples below](#examples).
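To illustrate the explicit Runge-Kutta idea behind `DoPri5` and `DoPri8`, here is a single classical RK4 step (an editorial textbook sketch in plain Rust, not `russell_ode`'s API): the next value combines four slope evaluations of the right-hand side f(t, y).

```rust
// Classical RK4 step for y' = f(t, y) -- a textbook sketch of an
// explicit Runge-Kutta method, not russell_ode's implementation.
fn rk4_step(f: impl Fn(f64, f64) -> f64, t: f64, y: f64, h: f64) -> f64 {
    let k1 = f(t, y);
    let k2 = f(t + h / 2.0, y + h * k1 / 2.0);
    let k3 = f(t + h / 2.0, y + h * k2 / 2.0);
    let k4 = f(t + h, y + h * k3);
    y + h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
}

fn main() {
    // solve y' = y with y(0) = 1 on [0, 1]; the exact answer is e
    let (mut t, mut y, h) = (0.0, 1.0, 0.01);
    for _ in 0..100 {
        y = rk4_step(|_, y| y, t, y, h);
        t += h;
    }
    assert!((y - std::f64::consts::E).abs() < 1e-8);
    println!("y(1) ≈ {}", y);
}
```

The production solvers embed an error estimate in the stages to adapt the step size h automatically, which is what makes the Dormand-Prince pairs practical for the tolerance-driven use cases listed above.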
