Google Summer of Code 2021 with Chapel


Google Summer of Code 2021 Final Report

This article summarizes the work I did over the summer as a participant in the Google Summer of Code program with the Chapel organization. My association with Chapel began as early as January, and I started working on the Matrix Exponential project in April. Below is a short summary of the work, followed by a more detailed description of each PR.

Report Summary

Here is the list of commits I made that are relevant to this project. You can also find my project proposal here.

What's merged:

  1. Matrix exponentiation using Scaling and Squaring Algorithm
  2. Matrix Exponentials - Performance & LAPACK based solvePQ
  3. Sparse-Dense Matrix Multiplication

Yet to merge:

  1. one-norm estimate & Tests
  2. Action method & Tests

Issues created/discovered:

  1. oneNormEst method should support matrices with eltType=complex
  2. Efficient way to handle Sparse matrix inputs for Matrix Exponentials
  3. Functionality to Multiply a Sparse and a Dense Matrix
  4. Addition of NumPy like methods in Linear Algebra module
  5. Handling array views correctly in the LinearAlgebra module
  6. Add LAPACK implementation for solve method in LinearAlgebra module

Description

Background

Conceptually, the exponential of a matrix is just the Taylor series expansion of the scalar exponential applied to a matrix; in principle, we could get away with simply plugging the matrix into that series. Evaluating such an expansion requires computing powers of the input matrix, i.e. A^2, A^4, A^6, A^8, A^10.
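For reference, the underlying series is the matrix analogue of the scalar exponential series:

$$
e^{A} \;=\; \sum_{k=0}^{\infty} \frac{A^k}{k!} \;=\; I + A + \frac{A^2}{2!} + \frac{A^3}{3!} + \cdots
$$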

Optimization question: Do we really need all of the matrix powers listed above? Can we trade off some precision? Can we improve performance by caching the results?

Research: A paper by Awad H. Al-Mohy introduces exactDs and approxDs, a.k.a. the D values of a matrix. exactDs are computed using exact one-norms, while approxDs are computed using one-norm estimates. The paper relates the D values of a matrix to the achievable precision: roughly, if the maximum of D4 and D6 does not exceed 1.495585, we can get away with computing only pade3, which internally computes only A^2 and A^4. We also make use of lazy computation and caching of matrix powers; the ExpmPadeHelper class helps us achieve all of this.
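In the paper's notation, the D values are norms of matrix powers scaled back to the "size" of A; exactDs evaluate the one-norm exactly, while approxDs replace it with a one-norm estimate:

$$
d_k \;=\; \lVert A^k \rVert_1^{1/k}
$$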

Matrix exponentiation using Scaling and Squaring Algorithm

PR: #17523
This PR implements the expm method, which takes a square matrix A as input and returns its matrix exponential; the input matrix can have any element type. It also adds cosm and sinm, which return the cosine and sine of the input matrix respectively, and sincos, which simply calls sinm and cosm internally and returns both. The PR includes tests for expm, sinm, cosm, and sincos over a variety of input matrices (a gradient matrix, an edge-detection matrix, the identity matrix, and others) and element types. Most importantly, it introduces the ExpmPadeHelper class, which is the crux of expm.
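A minimal usage sketch, assuming the interface described above (expm, sinm, and cosm exported from the LinearAlgebra module; exact signatures may differ slightly):

```chapel
use LinearAlgebra;

// 2x2 rotation generator; its exponential, sine, and cosine are all well defined.
var A = Matrix([0.0, 1.0],
               [-1.0, 0.0]);

var E = expm(A);   // matrix exponential via scaling and squaring
var S = sinm(A);   // matrix sine
var C = cosm(A);   // matrix cosine
writeln(E);
```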

Matrix Exponentials - Performance & LAPACK based solvePQ

PR: #17966
Computing the exponential of a matrix eventually requires solving a system of linear equations. This PR optimizes the use of the native solve method and adds the ability to use LAPACK.gesv, a LAPACK routine for solving systems of linear equations. It also adds performance tests for expm, sinm, cosm, and sincos.
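For context, the "solve P/Q" step looks roughly like the sketch below, using the module's existing native solve with a vector right-hand side for simplicity; the matrices here are illustrative stand-ins rather than real Padé numerator/denominator terms, and the PR additionally routes this step through LAPACK.gesv when LAPACK is available:

```chapel
use LinearAlgebra;

// Solving Q*x = p is the final step of the Pade-based expm.
var Q = Matrix([4.0, 1.0],
               [2.0, 3.0]);
var p = Vector([1.0, 2.0]);

var x = solve(Q, p);   // native dense solve; PR #17966 adds a LAPACK.gesv path
writeln(x);
```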

Sparse-Dense Matrix Multiplication

PR: #18152
Chapel previously did not support multiplying a sparse matrix by a dense matrix. This PR adds a sparseDenseMatmul method to provide that functionality, so the dot method in Chapel now supports the product of a sparse and a dense matrix. The PR also includes tests for this method.
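A minimal sketch of what this enables, assuming the dot overload added by the PR; the CSR setup uses the existing LinearAlgebra.Sparse helpers:

```chapel
use LinearAlgebra, LinearAlgebra.Sparse;

// Build a small CSR matrix with two nonzeros.
const parentDom = {0..2, 0..2};
var csrDom = CSRDomain(parentDom);
csrDom += (0, 0);
csrDom += (1, 2);

var A = CSRMatrix(csrDom);   // element type defaults to real
A[0, 0] = 2.0;
A[1, 2] = -1.0;

// Dense 3x3 matrix of ones over the same index set.
var B: [parentDom] real = 1.0;

var C = dot(A, B);   // sparse-dense product enabled by this PR
writeln(C);
```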

one-norm estimate and Tests

PR: #18149
This PR adds functionality to estimate the one-norm of a matrix. The function oneNormEst does this and can be used as an alternative to the norm function: norm runs in O(n^2) time, while oneNormEst takes O(k·n) time. The PR also adds a number of private helper functions that support oneNormEst: absSum, maxAlongAxis, ones, zeros, elementaryVector, argsort, everyColOfXParallelToColOfY, signRoundUp, columnNeedsResampling, columnResample, sparseDenseMatmul, and getDense. In addition, it includes a major refactoring of the ExpmPadeHelper class to make it more modular and readable, documentation where required, and tests for sparse matrices.
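A hypothetical usage sketch, assuming the single-argument oneNormEst signature proposed in this (not yet merged) PR; norm with normType.norm1 is the existing exact alternative:

```chapel
use LinearAlgebra;

var A = Matrix([2.0, -1.0],
               [0.5,  3.0]);

var exact = norm(A, normType.norm1);  // exact 1-norm: max absolute column sum
var est   = oneNormEst(A);            // estimate from PR #18149 (unmerged, name assumed)
writeln((exact, est));
```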

Action method and Test

PR: #18293
This PR adds functionality to compute the action of a matrix's exponential on a vector or on another matrix of rank 2. The method expmv implements this functionality.
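A hypothetical sketch, assuming the expmv(A, B) signature from this (not yet merged) PR computes the action of exp(A) on B:

```chapel
use LinearAlgebra;

var A = Matrix([0.0, 1.0],
               [-1.0, 0.0]);
var v = Vector([1.0, 0.0]);

var w = expmv(A, v);   // action of exp(A) on v (PR #18293, unmerged, signature assumed)
writeln(w);
```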

Miscellaneous

PRs

  1. Adding checksumming to mason
  2. Changes to choice method of Random module to support sampling on N-Dimensions
  3. Chpldoc not properly generated for few usecases

Issues

  1. Make Random.choice() method support multi-dimensional array
