Description of the problem
The benchmark for the test_grdlandmask_no_outgrid function was added in #2911, but its measured performance has been deviating wildly (>10%) across many PRs since (e.g. #2937 (comment), #2938 (comment)), even though there has been no evident change to the code of grdlandmask or any of the underlying clib functions.
See tracked performance at https://codspeed.io/GenericMappingTools/pygmt/benchmarks/pygmt/tests/test_grdlandmask.py::test_grdlandmask_no_outgrid
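For context, the benchmarked test is small; a minimal sketch of it is below, assuming the @pytest.mark.benchmark marker from pytest-codspeed (the actual body in pygmt/tests/test_grdlandmask.py may differ):

```python
import pytest

from pygmt import grdlandmask


@pytest.mark.benchmark
def test_grdlandmask_no_outgrid():
    # The entire function body is what CodSpeed instruments and measures.
    result = grdlandmask(spacing=1, region=[125, 130, 30, 35])
    # 6x6 nodes for a 5x5 degree region at 1 degree spacing
    # (gridline registration).
    assert result.shape == (6, 6)
```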
Sample flame graph from https://codspeed.io/GenericMappingTools/pygmt/branches/windows-multiprocessing:

[flame graph image]
Opening this issue to discuss why the variance might be so high, and if there are ways to mitigate this.
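One thing that might be worth trying (a sketch only, not validated on CodSpeed): pytest-codspeed also supports the pytest-benchmark-style benchmark fixture, which instruments only the wrapped callable rather than the whole test body, so fixture and setup overhead would be excluded from the measurement. Whether that actually shrinks the reported variance here is an open question; the variant below is hypothetical, not the current test.

```python
from pygmt import grdlandmask


def test_grdlandmask_no_outgrid(benchmark):
    # Only the grdlandmask call itself is instrumented; test setup and
    # the assertion run outside the measured region.
    result = benchmark(grdlandmask, spacing=1, region=[125, 130, 30, 35])
    assert result is not None
```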
Minimal Complete Verifiable Example
make test PYTEST_EXTRA="-r P --pyargs pygmt -k test_grdlandmask_no_outgrid --codspeed"
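For reference, this should be roughly equivalent to invoking pytest directly (assuming pytest-codspeed is installed; the make target adds a few flags of its own):
python -m pytest -r P --pyargs pygmt -k test_grdlandmask_no_outgrid --codspeed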
Full error message
No response
System information
PyGMT information:
version: v0.10.1.dev187+g322f8889
System information:
python: 3.12.1 | packaged by conda-forge | (main, Dec 23 2023, 08:03:24) [GCC 12.3.0]
executable: /usr/share/miniconda/bin/python
machine: Linux-6.2.0-1018-azure-x86_64-with-glibc2.35
Dependency information:
numpy: 1.26.2
pandas: 2.1.4
xarray: 2023.12.0
netCDF4: 1.6.5
packaging: 23.2
contextily: None
geopandas: 0.14.1
ipython: None
rioxarray: None
ghostscript: 10.02.1
GMT library information:
binary version: 6.4.0
cores: 4
grid layout: rows
image layout:
library path: /usr/share/miniconda/lib/libgmt.so
padding: 2
plugin dir: /usr/share/miniconda/lib/gmt/plugins
share dir: /usr/share/miniconda/share/gmt
version: 6.4.0