llg3d.benchmarks.efficiency
Compare the efficiency of different solvers for different DOMAIN_SIZES.
Four modes are available:
- run: run the benchmark and save results to a CSV file
- report: print a results table from a CSV file
- plot: plot the results from a CSV file
- compare: plot efficiency comparison from various CSV files
$ llg3d.bench.efficiency --help
usage: llg3d.bench.efficiency [-h] {run,report,plot,compare} ...
positional arguments:
{run,report,plot,compare}
run Run the benchmark and save the CSV file
report Print a results table from a CSV file
plot Plot the graph from the CSV file
compare Plot efficiency comparison from various CSV files
options:
-h, --help show this help message and exit
Module Attributes

- DOMAIN_SIZES: Jx values to benchmark

Functions

- compare_csv_files: Compare efficiency from multiple CSV files by plotting them together.
- Get a legend string with CPU and GPU names.
- Get the CPU name of the current machine.
- get_gpu_name: Get the name of the OpenCL device to use for the benchmark.
- get_num_iterations: Get the number of iterations for the benchmark based on domain size.
- load_csv: Load benchmark results from a CSV file.
- Main function to run the benchmark, plot, or report results.
- plot: Plot the benchmark results.
- results_as_table: Format the benchmark results as a table.
- run_benchmark: Run the benchmark for different solvers and domain sizes.
- save_as_csv: Save the benchmark results as a CSV file.
- DOMAIN_SIZES: tuple[int, ...] = (128, 256, 512, 1024, 2048, 4096)
Jx values to benchmark
- get_num_iterations(Jx)[source]
Get the number of iterations for the benchmark based on domain size.
- Parameters:
Jx (int)
- Return type:
int
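The docstring does not spell out the scaling policy. A minimal sketch of one plausible policy (an assumption, not the actual llg3d code) keeps the total work per benchmark point roughly constant by shrinking the iteration count as the domain size Jx grows:

```python
# Assumed policy, not the actual llg3d implementation: keep total work
# per benchmark point roughly constant across domain sizes.
DOMAIN_SIZES = (128, 256, 512, 1024, 2048, 4096)
BASE_ITERATIONS = 1000  # assumed iteration count for the smallest domain


def get_num_iterations(Jx: int) -> int:
    # Larger domains get proportionally fewer iterations, never below 10.
    return max(10, BASE_ITERATIONS * DOMAIN_SIZES[0] // Jx)
```

Under this assumed policy the smallest domain runs 1000 iterations while the 4096-point domain runs 31.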
- get_gpu_name()[source]
Get the name of the OpenCL device to use for the benchmark.
- Return type:
str
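A minimal sketch of how such a lookup might be done with pyopencl; the real function may select the device differently, and the fallback string here is an assumption:

```python
def get_gpu_name() -> str:
    """Return the name of the first OpenCL GPU device, if any (sketch)."""
    try:
        import pyopencl as cl

        for platform in cl.get_platforms():
            devices = platform.get_devices(device_type=cl.device_type.GPU)
            if devices:
                return devices[0].name
    except Exception:
        # pyopencl missing or no usable OpenCL runtime on this machine
        pass
    return "no OpenCL GPU found"
```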
- run_benchmark(nproc, precision='single', repeats=1)[source]
Run the benchmark for different solvers and domain sizes.
- Parameters:
nproc (int) – Number of MPI processes to use for the MPI solver.
precision (str) – Precision of the simulation (“single” or “double”).
repeats (int) – Number of times to repeat each measurement. If >1, function returns both means and std-devs per solver/domain.
- Returns:
Two mappings from display name (e.g. ‘NumPy (1 CPU core)’) to lists of mean and standard-deviation values per domain size. If repeats == 1, stds is None.
- Return type:
(means, stds)
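The shape of the returned (means, stds) pair can be illustrated with a small helper; summarize and its raw timing data below are hypothetical stand-ins for the benchmark's internal bookkeeping, not llg3d functions:

```python
import statistics


def summarize(timings: dict[str, list[list[float]]], repeats: int):
    """timings maps display name -> per-domain list of `repeats` measurements."""
    means = {name: [statistics.mean(runs) for runs in per_domain]
             for name, per_domain in timings.items()}
    if repeats == 1:
        return means, None
    stds = {name: [statistics.stdev(runs) for runs in per_domain]
            for name, per_domain in timings.items()}
    return means, stds


# Two domain sizes, two repeats each: means per domain, stds per domain.
means, stds = summarize({"NumPy (1 CPU core)": [[1.0, 3.0], [2.0, 4.0]]},
                        repeats=2)
```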
- save_as_csv(results, filepath, stds=None, legend=None)[source]
Save the benchmark results as a CSV file.
- Parameters:
results (dict[str, list[float]]) – The benchmark results.
filepath (Path) – The output CSV filepath.
stds (dict[str, list[float]] | None) – Optional standard deviations for the results.
legend (str | None) – An optional legend string to include as metadata.
- Return type:
None
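The on-disk layout is not documented here. One plausible layout (an assumption: the legend as a '#' comment line, a header of domain sizes, one row per solver, and '(std)' rows for deviations) could be written like this:

```python
import csv
from pathlib import Path

DOMAIN_SIZES = (128, 256, 512, 1024, 2048, 4096)


def save_as_csv(results, filepath: Path, stds=None, legend=None) -> None:
    """Write results to a CSV file (hypothetical layout, not llg3d's)."""
    with filepath.open("w", newline="") as f:
        if legend:
            f.write(f"# {legend}\n")  # metadata as a comment line
        writer = csv.writer(f)
        writer.writerow(["solver", *DOMAIN_SIZES])
        for name, values in results.items():
            writer.writerow([name, *values])
            if stds is not None:
                writer.writerow([f"{name} (std)", *stds[name]])
```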
- results_as_table(results, stds=None, legend='')[source]
Format the benchmark results as a table.
Add an acceleration column showing speedup relative to the NumPy solver.
- Parameters:
results (dict[str, list[float]]) – The benchmark results.
stds (dict[str, list[float]] | None) – Optional standard deviations for the results.
legend (str | None) – An optional legend string to include above the table.
- Returns:
A string representing the results in table format.
- Return type:
str
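A simplified sketch of the acceleration column, assuming speedup is measured against the solver whose display name contains "NumPy". Times are averaged over domain sizes here for brevity; the real table likely reports per-domain values:

```python
def results_as_table(results, stds=None, legend=""):
    """Render results as plain text with a speedup column (sketch only)."""
    baseline_key = next(k for k in results if "NumPy" in k)
    baseline_mean = sum(results[baseline_key]) / len(results[baseline_key])
    lines = [legend] if legend else []
    lines.append(f"{'solver':<24} {'mean time (s)':>14} {'speedup':>8}")
    for name, values in results.items():
        mean_time = sum(values) / len(values)
        # Acceleration relative to the NumPy baseline (assumed definition).
        speedup = baseline_mean / mean_time
        lines.append(f"{name:<24} {mean_time:>14.3f} {speedup:>7.1f}x")
    return "\n".join(lines)
```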
- plot(results, show=False, legend=None, errorbars=False, stds=None)[source]
Plot the benchmark results.
- Parameters:
results (dict[str, list[float]]) – The benchmark results.
show (bool) – Whether to display the plot interactively.
legend (str | None) – An optional legend string to include in the plot title.
errorbars (bool) – Whether to plot error bars using stds if provided.
stds (dict[str, list[float]] | None) – Optional standard deviations for the results.
- Return type:
None
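A minimal matplotlib sketch of such a plot; the log scales, axis labels, and output filename are assumptions, not llg3d's actual styling:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt

DOMAIN_SIZES = (128, 256, 512, 1024, 2048, 4096)


def plot(results, show=False, legend=None, errorbars=False, stds=None):
    """Plot one timing curve per solver over DOMAIN_SIZES (sketch only)."""
    fig, ax = plt.subplots()
    for name, values in results.items():
        yerr = stds[name] if (errorbars and stds) else None
        ax.errorbar(DOMAIN_SIZES, values, yerr=yerr, marker="o", label=name)
    ax.set_xscale("log")
    ax.set_yscale("log")
    ax.set_xlabel("domain size Jx")
    ax.set_ylabel("time per run (s)")
    ax.set_title(legend or "Solver efficiency")
    ax.legend()
    fig.savefig("efficiency.png")  # assumed output name
    if show:
        plt.show()
```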
- load_csv(filepath)[source]
Load benchmark results from a CSV file.
Returns results keyed by display-name (same format as run_benchmark output).
- Parameters:
filepath (Path) – The CSV filepath to load.
- Returns:
A tuple (results, stds, metadata) where results is the benchmark results, stds are the standard deviations if present (else None), and metadata is any metadata extracted from the CSV file.
- Return type:
tuple[dict[str, list[float]], dict[str, list[float]] | None, dict]
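Assuming a layout with optional '#' metadata lines, a header row of domain sizes, and '<name> (std)' rows for deviations (a guess, not the documented format), a parser might look like the following. It operates on the file's text rather than a Path for brevity:

```python
import csv
import io


def load_csv(text: str):
    """Parse benchmark CSV text into (results, stds, metadata) (sketch)."""
    results, stds, metadata = {}, {}, {}
    lines = text.splitlines()
    while lines and lines[0].startswith("#"):
        metadata["legend"] = lines.pop(0).lstrip("# ").strip()
    reader = csv.reader(io.StringIO("\n".join(lines)))
    next(reader)  # skip the domain-size header row
    for row in reader:
        name, values = row[0], [float(v) for v in row[1:]]
        if name.endswith(" (std)"):
            stds[name[: -len(" (std)")]] = values
        else:
            results[name] = values
    return results, stds or None, metadata
```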
- compare_csv_files(filepaths, solver, show=False)[source]
Compare efficiency from multiple CSV files by plotting them together.
- Parameters:
filepaths (list[Path]) – List of CSV filepaths to compare.
solver (str) – The solver to compare (“numpy”, “mpi”, “opencl”).
show (bool) – Whether to display the plot interactively.
- Return type:
None
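The core of such a comparison is picking one solver's timings out of each result set. A sketch of that selection step, taking already-loaded results dicts instead of CSV paths and leaving out the plotting; compare_results and its matching rule are assumptions:

```python
def compare_results(datasets: dict[str, dict[str, list[float]]], solver: str):
    """datasets maps a label (e.g. machine name) -> benchmark results dict."""
    comparison = {}
    for label, results in datasets.items():
        # Match the solver keyword ("numpy", "mpi", "opencl") against
        # display names like "OpenCL (GPU X)" case-insensitively (assumed).
        key = next(k for k in results if solver.lower() in k.lower())
        comparison[label] = results[key]
    return comparison
```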