Execute

Command Line Help

$ llg3d -h
usage: llg3d [-h] [-element ELEMENT] [-N N] [-dt DT] [-Jx JX] [-Jy JY]
             [-Jz JZ] [-dx DX] [-T [T ...]] [-H_ext H_EXT]
             [-n_average N_AVERAGE] [-n_integral N_INTEGRAL]
             [-n_profile N_PROFILE] [-b]

Solver for the stochastic Landau-Lifshitz-Gilbert equation in 3D

options:
  -h, --help            show this help message and exit
  -element ELEMENT      Chemical element of the sample (default: Cobalt)
  -N N                  Number of time iterations (default: 5000)
  -dt DT                Time step (default: 1e-14)
  -Jx JX                Number of points in x (default: 300)
  -Jy JY                Number of points in y (default: 21)
  -Jz JZ                Number of points in z (default: 21)
  -dx DX                Step in x (default: 1e-09)
  -T [T ...]            Temperature (default: 0.0)
  -H_ext H_EXT          External field (default: 0.0)
  -n_average N_AVERAGE  Start index of time average (default: 4000)
  -n_integral N_INTEGRAL
                        Spatial average frequency (number of iterations)
                        (default: 1)
  -n_profile N_PROFILE  x-profile save frequency (number of iterations)
                        (default: 0)
  -b, --blocking        Use blocking communications (default: False)
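
The -T option accepts several values, so a single invocation can sweep a set of temperatures, each producing its own entry in the results section of run.json shown below. For instance, a hypothetical run saving x-profiles every 50 iterations:

$ llg3d -N 2000 -T 1000 1200 1400 -n_profile 50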

Examples

Sequential Execution

$ llg3d -N 100
element    : Cobalt
N          = 100
dt         = 1e-14
Jx         = 300
Jy         = 21
Jz         = 21
dx         = 1e-09
T          = 0.0
H_ext      = 0.0
n_average  = 4000
n_integral = 1
n_profile  = 0
blocking   = False

	x		y		z
J =	300		21		21
L =	2.99e-07		2e-08		2e-08
d =	1.00000000e-09	1.00000000e-09	1.00000000e-09

dV   = 1.00000000e-27
V    = 1.19600000e-22
ntot = 132300

CFL = 0.07542857142857143
Iteration: [........................................] 0/100
Iteration: [........................................] 1/100
Iteration: [██......................................] 6/100
...
Iteration: [██████████████████████████████████████..] 96/100
Iteration: [████████████████████████████████████████] 100/100

Integral of m1 in m1_integral_space_T0_300x21x21_np1.txt
N iterations      = 100
total_time [s]    = 3.212
time/ite [s/iter] = 3.212e-02
Summary in run.json
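
The geometry summary follows directly from the grid parameters: each domain length is L = (J - 1) * d, the cell volume is dV = dx³ (cubic cells with the defaults), the total volume is V = Lx·Ly·Lz, and ntot = Jx·Jy·Jz. A minimal Python check of the values printed above (the CFL number also involves material constants of the element, which are not printed):

Jx, Jy, Jz = 300, 21, 21
dx = 1e-9

# Domain lengths: (J - 1) grid spacings per direction
Lx, Ly, Lz = ((J - 1) * dx for J in (Jx, Jy, Jz))

dV = dx**3            # cell volume  -> 1e-27
V = Lx * Ly * Lz      # total volume -> 1.196e-22
ntot = Jx * Jy * Jz   # grid points  -> 132300

print(Lx, Ly, Lz, dV, V, ntot)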

The execution produces the file run.json, which contains the run parameters and the path to the result file:

$ cat run.json
{
    "params": {
        "element": "Cobalt",
        "N": 100,
        "dt": 1e-14,
        "Jx": 300,
        "Jy": 21,
        "Jz": 21,
        "dx": 1e-09,
        "T": 0.0,
        "H_ext": 0.0,
        "n_average": 4000,
        "n_integral": 1,
        "n_profile": 0,
        "blocking": false,
        "np": 1
    },
    "results": {
        "0.0": {
            "total_time": 3.2124082054942846,
            "integral_file": "m1_integral_space_T0_300x21x21_np1.txt"
        }
    }
}
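
Because run.json records both the parameters and the result files, it is convenient for scripting. A minimal sketch that reads it back with the standard library and numpy, assuming each integral file is a plain whitespace-separated table (the exact column layout is not detailed here):

import json

import numpy as np

with open("run.json") as f:
    run = json.load(f)

print(run["params"]["element"], run["params"]["N"])

# One entry per temperature in the "results" section
for T, res in run["results"].items():
    data = np.loadtxt(res["integral_file"])  # assumed whitespace-separated
    print(f"T = {T} K: {res['integral_file']} has shape {data.shape}")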

Parallel Execution

$ mpirun -np 6 llg3d -N 100

Note

If the number of MPI processes np is not a divisor of Jx, the execution is interrupted.
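
This constraint suggests that the grid is split into equal slabs along x across the processes. For the default Jx = 300, the acceptable process counts are therefore the divisors of 300 (which is why -np 6 above is valid); a quick check in Python:

# Valid MPI process counts for the default grid: divisors of Jx
Jx = 300
valid_np = [n for n in range(1, Jx + 1) if Jx % n == 0]
print(valid_np)
# [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 25, 30, 50, 60, 75, 100, 150, 300]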

Parallel execution with SLURM on a computing cluster

Install llg3d on the Cluster

# Create a working directory:
mkdir work
cd work
# Clone the llg3d sources:
git clone git@gitlab.math.unistra.fr:llg3d/llg3d.git
# Create and activate a Python virtual environment:
virtualenv .venv
source .venv/bin/activate
# Install llg3d (in editable mode):
pip install -e llg3d
# Create a run directory:
mkdir run
cd run
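
From the activated environment, a quick sanity check of the installation is to display the command-line help shown at the top of this page:

(.venv) $ llg3d -h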

Create an sbatch File

Copy the sbatch_jobarrays.slurm file into the run/ directory:

cp ../llg3d/utils/sbatch_jobarrays.slurm .  # from the run/ directory

Its content is as follows:

#!/bin/bash

#SBATCH -p public            # targeting the public partition
#SBATCH --ntasks-per-core=1  # disabling multithreading
#SBATCH -n 40                # reserving 40 compute cores
#SBATCH -J g40               # naming the job
#SBATCH --array=0-12         # creating a SLURM job array of 13 sub-jobs

# Array of temperatures
TEMPERATURES=(1000 1100 1200 1300 1350 1400 1425 1450 1500 1550 1650 1750 1900)

# If the number of SLURM tasks exceeds the size of TEMPERATURES, we exit
if [ $SLURM_ARRAY_TASK_COUNT -gt ${#TEMPERATURES[@]} ]
then
    echo "number of tasks > number of temperatures"
    echo "($SLURM_ARRAY_TASK_COUNT > ${#TEMPERATURES[@]})"
    exit 1
fi

# Job array task ID, zero-padded to 3 digits
id=$(printf %03d $SLURM_ARRAY_TASK_ID)
MAIN_JOB_PATH="job_${SLURM_ARRAY_JOB_ID}"  # main job path
JOB_PATH="${MAIN_JOB_PATH}/${id}"  # sub-job path

# Temperature for this sub-job
let "temperature = ${TEMPERATURES[$SLURM_ARRAY_TASK_ID]}"

# Activating the Python virtual environment
source ../../.venv/bin/activate

# Creating a directory dedicated to the run and moving into it
mkdir -p $JOB_PATH
cd $JOB_PATH

# Launching the MPI computation (standard output is captured in the slurm-*.out files)
mpiexec -n 40 llg3d -N 4000 -Jx 6000 -dx 1e-9 -T $temperature

wait

cd ..
llg3d.post --job_dir .

Submit the Job Array

(.venv) $ sbatch sbatch_jobarrays.slurm
Submitted batch job 3672

The execution will create a SLURM job array where each sub-job corresponds to a temperature.
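
Each array index selects one entry of TEMPERATURES and one zero-padded sub-directory. The mapping can be previewed as follows (job ID 3672 taken from the submission above):

TEMPERATURES = [1000, 1100, 1200, 1300, 1350, 1400, 1425,
                1450, 1500, 1550, 1650, 1750, 1900]

for task_id, T in enumerate(TEMPERATURES):
    print(f"sub-job 3672_{task_id} -> T = {T} K in job_3672/{task_id:03d}")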

Monitor Job Execution

(.venv) $ squeue
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
            3672_0    public      g40  boileau  R       0:02      1 gaya1
            3672_1    public      g40  boileau  R       0:02      1 gaya1
            3672_2    public      g40  boileau  R       0:02      1 gaya1
            3672_3    public      g40  boileau  R       0:02      1 gaya2
            3672_4    public      g40  boileau  R       0:02      1 gaya2
            3672_5    public      g40  boileau  R       0:02      1 gaya2
            3672_6    public      g40  boileau  R       0:02      1 gaya3
            3672_7    public      g40  boileau  R       0:02      1 gaya3
            3672_8    public      g40  boileau  R       0:02      1 gaya3
            3672_9    public      g40  boileau  R       0:02      1 gaya4
           3672_10    public      g40  boileau  R       0:02      1 gaya4
           3672_11    public      g40  boileau  R       0:02      1 gaya4
           3672_12    public      g40  boileau  R       0:02      1 gaya5

Sub-jobs 0 through 12 have all started (state R for running).

When the jobs are finished, they leave the queue:

(.venv) $ squeue
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)

The execution produces the following directory structure:

(.venv) $ tree
.
├── job_3672
│   ├── 000
│   │   ├── m1_integral_space_T1000_6000x21x21_np40.txt
│   │   └── run.json
│   ├── 001
│   │   ├── m1_integral_space_T1100_6000x21x21_np40.txt
│   │   └── run.json
│   ├── 002
│   │   ├── m1_integral_space_T1200_6000x21x21_np40.txt
│   │   └── run.json
│   ├── 003
│   │   ├── m1_integral_space_T1300_6000x21x21_np40.txt
│   │   └── run.json
│   ├── 004
│   │   ├── m1_integral_space_T1350_6000x21x21_np40.txt
│   │   └── run.json
│   ├── 005
│   │   ├── m1_integral_space_T1400_6000x21x21_np40.txt
│   │   └── run.json
│   ├── 006
│   │   ├── m1_integral_space_T1425_6000x21x21_np40.txt
│   │   └── run.json
│   ├── 007
│   │   ├── m1_integral_space_T1450_6000x21x21_np40.txt
│   │   └── run.json
│   ├── 008
│   │   ├── m1_integral_space_T1500_6000x21x21_np40.txt
│   │   └── run.json
│   ├── 009
│   │   ├── m1_integral_space_T1550_6000x21x21_np40.txt
│   │   └── run.json
│   ├── 010
│   │   ├── m1_integral_space_T1650_6000x21x21_np40.txt
│   │   └── run.json
│   ├── 011
│   │   ├── m1_integral_space_T1750_6000x21x21_np40.txt
│   │   └── run.json
│   └── 012
│       ├── m1_integral_space_T1900_6000x21x21_np40.txt
│       └── run.json
├── sbatch_jobarrays.slurm
├── slurm-3672_0.out
├── slurm-3672_10.out
├── slurm-3672_11.out
├── slurm-3672_12.out
├── slurm-3672_1.out
├── slurm-3672_2.out
├── slurm-3672_3.out
├── slurm-3672_4.out
├── slurm-3672_5.out
├── slurm-3672_6.out
├── slurm-3672_7.out
├── slurm-3672_8.out
└── slurm-3672_9.out

14 directories, 54 files
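
Each sub-directory's run.json records its temperature and result file, so the whole sweep can also be collected by hand before the post-processing step below; a minimal sketch, assuming the layout shown above:

import json
from pathlib import Path

results = []
for run_file in sorted(Path("job_3672").glob("*/run.json")):
    run = json.loads(run_file.read_text())
    for T, res in run["results"].items():
        results.append((float(T), run_file.parent / res["integral_file"]))

for T, path in results:
    print(f"{T:6.0f} K -> {path}")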

Process the Results

Use the llg3d.post command, which runs the src/llg3d/process_temperature.py script:

(.venv) $ llg3d.post job_3672/
T_Curie = 1631 K
Image saved in job_3672/m1_mean.png

process_temperature.py gathers the results, interpolates the values, and estimates the Curie temperature as the temperature at which the slope of the average magnetization is most negative: \(T_{\text{Curie}} = \arg\min_T \left(\partial m_1 / \partial T\right)\).
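
This is not the exact implementation of process_temperature.py, but the idea can be sketched with synthetic data (the tanh curve below is purely illustrative; in practice the m1 values come from the integral files):

import numpy as np
from scipy.interpolate import interp1d

# Temperatures from the job array above; m1 values are synthetic here,
# standing in for the time-averaged magnetization of each run.
T = np.array([1000, 1100, 1200, 1300, 1350, 1400, 1425,
              1450, 1500, 1550, 1650, 1750, 1900], dtype=float)
m1 = 0.5 * (1.0 - np.tanh((T - 1630.0) / 80.0))  # illustrative curve

# Interpolate, then take T_Curie = argmin_T of d m1/dT
# (the steepest, i.e. most negative, slope).
T_fine = np.linspace(T.min(), T.max(), 2000)
m1_fine = interp1d(T, m1, kind="cubic")(T_fine)
dm1_dT = np.gradient(m1_fine, T_fine)
print(f"T_Curie ~ {T_fine[np.argmin(dm1_dT)]:.0f} K")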

The plotted graph looks like this:

[Figure: m1 = f(T), average magnetization as a function of temperature]