CUDA METRO

Description/Installation

A pyCUDA-based Metropolis Monte Carlo simulator for 2D magnetism systems. The code runs on the NVIDIA CUDA architecture through a simple Python wrapper.

To use, clone the repository using


git clone https://github.com/arkavo/CUDA-METRO.git

into a directory of choice. Creating a new python3 environment is recommended to run the code.

To set up the environment after creation, simply run pip install -r requirements.pip

Some template code has already been provided in the /src folder, along with sample input files in /inputs and run configurations in /configs

There are 4 separate files for designing your simulations. To run a simple simulation, use python main.py <input_file_name>.

Custom input files

To create your own input file from scratch, copy this JSON template:

{
    "Single_Mat_Flag" : 1 if single material 0 otherwise (int),
    "Animation_Flags" : DEPRECATED,
    "DMI_Flag"        : 1 if simulation in DMI mode 0 otherwise (int),
    "TC_Flag"         : 1 if simulation is in Critical Temperature mode 0 otherwise (int),
    "Static_T_Flag"   : 1 if simulation has a single temperature 0 otherwise (int),
    "FM_Flag"         : 1 if starting state is FM 0 otherwise,
    "Input_flag"      : 1 if starting from a specified <input.npy> state 0 otherwise,
    "Input_File"      : String value(use quotes please) for starting state file name (if applicable),
    "Temps"           : Array form of temperatures used for simulation (only in Critical Temperature mode),
    "Material"        : Material name (omit the ".csv"),
    "Multiple_Materials" : File name with multiple materials (omit the ".csv"),
    "SIZE"    :  Lattice size (int),
    "Box"       : DEPRECATED,
    "Blocks"    : How much to parallelize (int),
    "Threads"   : 2 (dont change this),
    "B"         : external magnetic field (use quotes, double),
    "stability_runs"    : MC Phase 1 batch runs,
    "calculation_runs"  : MC Phase 2 batch runs,
    "stability_wrap"    : MC Phase 1 batch size,
    "calculation_wrap"  : MC Phase 2 batch size,
    "Cmpl_Flag"         : DEPRECATED,
    "Prefix"            : String value to appear in FRONT of output folder
}

NOTE 1: The simulator can work in either Critical Temperature mode or DMI mode, since the two use different Hamiltonians. The modes are mutually exclusive, so please do not set both flags to 1.

NOTE 2: The total number of raw MC steps will be Blocks x (MC Phase 1 runs x MC Phase 1 size + MC Phase 2 runs x MC Phase 2 size). We typically divide the run into two phases when studying the Critical Temperature: Phase 1 gives the simulation time to settle down, and Phase 2 is the data-collection phase in which the statistical properties are measured. For a raw simulation, where the evolution of states is required from start to finish, one may keep either phase and omit the other.
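
For illustration, the snippet below writes out one possible single-material, DMI-mode configuration. Every value is a placeholder chosen for this example (the material name and numbers are not validated inputs), the deprecated fields are simply set to 0, and the field meanings follow the template above:

import json

config = {
    "Single_Mat_Flag": 1,
    "Animation_Flags": 0,       # deprecated
    "DMI_Flag": 1,
    "TC_Flag": 0,
    "Static_T_Flag": 1,
    "FM_Flag": 1,
    "Input_flag": 0,
    "Input_File": "",
    "Temps": [0.1],             # only used in Critical Temperature mode
    "Material": "CrCl3",        # placeholder; expects a matching CrCl3.csv
    "Multiple_Materials": "",
    "SIZE": 200,
    "Box": 0,                   # deprecated
    "Blocks": 128,
    "Threads": 2,               # do not change
    "B": "0.0",
    "stability_runs": 100,      # MC Phase 1 batch runs
    "calculation_runs": 10,     # MC Phase 2 batch runs
    "stability_wrap": 10,       # MC Phase 1 batch size
    "calculation_wrap": 10,     # MC Phase 2 batch size
    "Cmpl_Flag": 0,             # deprecated
    "Prefix": "demo"
}

with open("configs/demo_config.json", "w") as f:
    json.dump(config, f, indent=4)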

Functions

A template file is given as main.py; import the 2 required modules, construct and montecarlo, to work with it.

MonteCarlo is a class defined in construct.py, which holds the main code, while montecarlo.py contains all the Hamiltonian constructs as GPU kernels (written in CUDA C++).

Construct MonteCarlo

construct.MonteCarlo is the MonteCarlo class construct.

construct.MonteCarlo.mc_init() initializes the simulation (but does not run it yet).

construct.MonteCarlo.display_material() prints out the current material properties.

construct.MonteCarlo.grid_reset() resets the grid to ALL $(0,0,1)$ if FM_Flag=1 else randomizes all spins.

construct.MonteCarlo.generate_random_numbers(int size) creates 4 GPU-generated arrays of size size using the pyCUDA XORWOW random number generator.

construct.MonteCarlo.generate_ising_numbers(int size) creates 4 GPU-generated arrays of size size using the pyCUDA XORWOW random number generator, but with the spin vectors restricted to either $(0,0,1)$ or $(0,0,-1)$.

construct.MonteCarlo.run_mc_dmi_66612(double T) runs a single Phase 1 batch and returns the individual spin directions as a raw np.array of shape (N, N, 3). This can be saved using the np.save command.

Other variations of run_mc_dmi_66612(T) are run_mc_<tc/dmi>_<mode>(T)

Both tc and dmi modes have variants for 66612, 4448, 3636, 2424 and 2242, which are the primary lattice types explored in this code. dmi can only be invoked with the 66612, 4448 and 3636 configurations; for the remaining lattice types, if you wish to run such a simulation, use tc mode with a single temperature.
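
For orientation, a minimal driver in the spirit of main.py might look like the sketch below. The constructor argument (here assumed to be the config file path) and the exact call order are assumptions based on the descriptions above rather than a verbatim excerpt of the repository code:

import numpy as np
import construct as cst

mc = cst.MonteCarlo("../configs/test_config.json")  # assumed: config path passed to the constructor
mc.mc_init()                 # initialize the simulation (does not run it yet)
mc.display_material()        # print the material parameters in use
mc.grid_reset()              # all spins to (0,0,1) if FM_Flag=1, otherwise randomized

T = 0.1                      # hypothetical single temperature
for step in range(10):       # a few Phase 1 batches
    grid = mc.run_mc_dmi_66612(T)        # returns an N x N x 3 array of spin directions
    np.save(f"spin_state_{step}", grid)  # raw state, viewable later with visualize.py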

Construct Analyze

The template file for running the visual analyzer is visualize.py; it compiles images for all the state *.npy files in a given folder.

construct.Analyze(<Folder name>, reverse=False) creates the Analyzer instance, with the option to start from the opposite end (in case you only want the end results).

construct.Analyze.spin_view() creates a subfolder in the main folder called "spins" containing the spin vector images, split into components with $z = s(x,y)$ as the functional form.

construct.Analyze.quiver_view() creates a subfolder in the main folder called "quiver" containing the spin vector images. These show only the planar (xy) part of the spins on a flat surface, which is useful for finding patterns in results.
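
A short usage sketch of the analyzer follows; the folder name is a placeholder for any output folder produced by a run:

import construct as cst

an = cst.Analyze("demo_CrCl3_2024_01_01", reverse=False)  # placeholder folder name
an.spin_view()    # writes component images into <folder>/spins
an.quiver_view()  # writes planar (xy) vector maps into <folder>/quiver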

Working Principle

We consider a lattice system with a periodic arrangement of atoms, where each atom is represented by a 3D spin vector. This atomistic spin model is founded on the spin Hamiltonian, which delineates the essential spin-dependent interactions at the atomic scale, excluding the influences of potential and kinetic energy and electron correlations. The spin Hamiltonian is conventionally articulated as

$$ H_i = -\sum_j J_1\, s_i\cdot s_j - \sum_j K^x_1 s^x_i s^x_j - \sum_j K^y_1 s^y_i s^y_j - \sum_j K^z_1 s^z_i s^z_j - \sum_k J_2\, s_i\cdot s_k - \sum_k K^x_2 s^x_i s^x_k $$

$$ -\sum_k K^y_2 s^y_i s^y_k - \sum_k K^z_2 s^z_i s^z_k - \sum_l J_3\, s_i\cdot s_l - \sum_l K^x_3 s^x_i s^x_l - \sum_l K^y_3 s^y_i s^y_l - \sum_l K^z_3 s^z_i s^z_l $$

$$ -\sum_m J_4\, s_i\cdot s_m - \sum_m K^x_4 s^x_i s^x_m - \sum_m K^y_4 s^y_i s^y_m - \sum_m K^z_4 s^z_i s^z_m - A\, s_i \cdot s_i - \sum_j \lambda (s_i\cdot s_j)^2 $$

$$ -\sum_j D_{ij}\cdot (s_i \times s_j) - \mu B \cdot s_i $$

Where $J$ is the isotropic exchange parameter, the $K$s are the anisotropic exchange parameters (with the superscript denoting the spin direction), $A$ is the single-ion anisotropy parameter, $\lambda$ is the biquadratic exchange parameter, and $D$ is the Dzyaloshinskii-Moriya interaction (DMI) parameter. $\mu$ is the dipole moment of a single atom and $B$ is the external magnetic field. $s_i, s_j$ are individual atomic spin vectors. ${s_j}$ are the first set of neighbours, ${s_k}$ the second set, and so on. The subscripts on the $J$s and $K$s denote the neighbour set: $J_1$ denotes the first neighbours, $J_2$ the second, and so on. From this equation, we see that the energy of an atom depends on its interactions with its neighbours. In our code, we have limited the number of neighbour sets to 4, since for 2D materials the interaction energy is expected to die down beyond that. All of the above parameters except $B$ are material-specific parameters that serve as inputs to our MC code.
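
To make the bookkeeping concrete, the following minimal sketch evaluates the first-neighbour part of this energy for a single site (isotropic and anisotropic exchange, DMI, single-ion and Zeeman terms; the further neighbour shells and the biquadratic term are omitted for brevity). It is only an illustration of the formula above, not the CUDA kernel used in montecarlo.py:

import numpy as np

def site_energy(s_i, neighbours, J1, K1, A, D, mu, B):
    # s_i        : (3,) spin vector of site i
    # neighbours : (n, 3) array of first-neighbour spins s_j
    # J1         : isotropic exchange for the first neighbour set
    # K1         : (3,) anisotropic exchange (K^x_1, K^y_1, K^z_1)
    # A          : single-ion anisotropy
    # D          : (n, 3) DMI vectors D_ij, one per neighbour
    # mu, B      : atomic dipole moment and (3,) external magnetic field
    E = 0.0
    for j, s_j in enumerate(neighbours):
        E -= J1 * np.dot(s_i, s_j)              # isotropic exchange
        E -= np.sum(K1 * s_i * s_j)             # anisotropic exchange, component-wise
        E -= np.dot(D[j], np.cross(s_i, s_j))   # Dzyaloshinskii-Moriya term
    E -= A * np.dot(s_i, s_i)                   # single-ion term
    E -= mu * np.dot(B, s_i)                    # Zeeman term
    return E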

Results

First, we report the "meron" and "anti-meron", which are structures with topological numbers $-1/2$ and $+1/2$ respectively, observed in $CrCl_3$ by LLG spin-dynamics simulations. The results of our simulation are shown in Fig 2. $CrCl_3$ has a honeycomb lattice structure, and for this simulation we have considered up to the third nearest neighbour with biquadratic exchange. We can see all 4 types of meron and anti-meron structures found in the previous report[@augustin_properties_2021]. This simulation was conducted in a $500 \times 500$ ($143 \times 143\ nm^2$) supercell and took 300s to stabilize these topological spin textures at a parallelization of $3\%$, conducted on an A100-SXM4 GPU.

Secondly, we simulate skyrmions in $MnBr_2$[@acs_nanolett], as shown in Fig 3. $MnBr_2$ has a square lattice, and for this simulation we have considered up to the second nearest neighbour. This material exhibits anisotropic DMI with an anti-ferromagnetic ground state. An anti-ferromagnetic skyrmion spin texture is accurately reproduced in our simulation. Anti-ferromagnetic skyrmions are technologically important since they do not exhibit the skyrmion Hall effect. We further study the material $CrInSe_3$[@du_spontaneous_2022], which has a hexagonal lattice. This simulation was conducted considering only the nearest neighbours, and the formation of skyrmions is shown in Fig 3. Once again our results are in agreement with the original report. All these simulations were conducted in a $200 \times 200$ ($49 \times 49\ nm^2$) supercell and took 30s to stabilize these topological spin textures at a parallelization of $20\%$, conducted on a V100-SXM2 processor.

In Fig 4 we demonstrate the skyrmion nucleation process for the material $MnSTe$ [@liang_very_2020], which has a hexagonal lattice. While we first observe several skyrmions, they disappear with evolving MCS, and the whole lattice eventually becomes uniformly ferromagnetic, which happens to be the direction of the applied magnetic field. This has not been reported in the original literature[@liang_very_2020], possibly because of the high computational time required for a traditional SSU scheme.

In Fig 5, we further show a similar life-cycle evolution for a giant skyrmion of diameter $21nm$ hosted in the material $VZr_3C_3II$ [@kabiraj_realizing_2023]. To host such a large skyrmion, the simulation was conducted in a supercell of size $750\times 750$ with a parallelization ratio of $1\%$, utilizing $70\%$ of the VRAM of an A100-SXM4 GPU. As mentioned before, our parallelization is limited by the number of CUDA cores, so we cannot go above $1\%$ parallelization for this simulation. However, even with this low parallelization ratio, we can still access 8000 lattice points simultaneously, and by careful tuning of our parameter $\Gamma$, we can observe the ground state of a $750\times750$ supercell in $9$ hours using an A100-SXM4 GPU. The formation of the skyrmion takes roughly $100$ mins.

Fig 1: Discrepancy between simulation and reference results at differing levels of parallelization. At $10\%$, the simulation results are almost indistinguishable from the reference data.

Fig 2: Presence of anti-merons and merons in $CrCl_3$. The color bar represents normalized spin vectors in the z direction.

Fig 3: Presence of skyrmions in $MnBr_2$ and $CrInSe_3$. The color bar represents normalized spin vectors in the z direction. Note that the spins of $MnBr_2$ appear purple because the vast majority form "red-blue" spin pairs.

Fig 4: Lifetime of a skyrmion in $MnSTe$, from its creation to annihilation. The graph denotes the average energy per atom. As we approach the global minimum, the entire field becomes aligned with the magnetic field, as expected. Total time: $30s$.

Fig 5: Lifetime of a skyrmion in $VZr_3C_3II$, from its creation to annihilation. The graph denotes the average energy per atom. Note how the entire field is now blue (as opposed to red in Fig 4): unlike the simulation in Fig 4, no external magnetic field is applied here, so the ground state is either all spins up (red) or all spins down (blue) with a $50\%$ probability for either. Total time: $9$ hrs.

All these results and more are explored further in the attached JOSS Paper.md file, which serves as the cover for the project.

Tools

If one does not wish to use this purely for observing microstructures in materials, the following other modes are also available:

1. Vector analysis/visualization

To run a vector analysis, simply execute python test_main.py <config file name> (the main config file is /configs/test_config.json).

After execution, an output folder named <prefix>_<material>_<date>_<time> will be created, containing .npy files with the spin data in 1D array form as [s1x s1y s1z s2x s2y.........sny snz], along with a metadata.json file containing the details of the run (a loading sketch is given after this list).

For casual viewing, it is advised to use the built-in viewer, python test_view.py <folder name>, which provides visuals of the spin magnitudes in the 3 directions as well as a top-down vector map of the spins.

2. Critical temperature analysis

To run a Critical temperature analysis, execute python tc_sims.py after configuring the appropriate config file in /configs/tc_config.json.

After the run, a graph of temperature vs magnetization and temperature vs susceptibility will be auto-generated. If the graph is not desired, please comment out everything from line 21 onwards.
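
Since the vector-analysis spins are saved as a flat 1D array, a small sketch for loading one state back into an N x N x 3 grid is given below; the folder and file names are placeholders, and the lattice size is recovered from the array length rather than from metadata.json:

import numpy as np

flat = np.load("demo_CrCl3_2024_01_01/spin_state_0.npy")  # placeholder folder/file names
N = int(np.sqrt(flat.size // 3))       # recover the lattice size from the array length
spins = flat.reshape(N, N, 3)          # [s1x s1y s1z s2x s2y ... snz] -> (N, N, 3)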

Contributions

Contributions to this project are always welcome and greatly appreciated. To find out how you can contribute to this project, please read our contribution guidelines.

Code of Conduct

To read our code of conduct, please visit CUDA-METRO Code of Conduct.
