author | Harmen Stoppels <harmenstoppels@gmail.com> | 2021-03-30 21:03:50 +0200
---|---|---
committer | GitHub <noreply@github.com> | 2021-03-30 12:03:50 -0700
commit | 1db6cd5d16a084bffbc319467d70942d0307cc9f (patch) |
tree | 01b65aca7df8c6644e291aad80ea82d4f560ceac /share |
parent | d3a9824ea2f7675e9e0008b5d914f02e63e19d85 (diff) |
Make -j flag less exceptional (#22360)
The -j flag in Spack behaves differently from make, ctest, ninja, etc.,
because it caps the number of jobs at an arbitrary value of 16.
Spack will behave like those tools if `spack install` uses a reasonable
default and `spack install -j <num>` *overrides* that default.
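For illustration, a minimal sketch of that resolution logic; the helper name
and signature here are hypothetical, not Spack's actual API:

```python
import os

def determine_jobs(command_line_jobs=None, max_cpus=16):
    # Hypothetical helper, not Spack's actual API: an explicit
    # `spack install -j <num>` value wins outright; otherwise fall
    # back to a capped default based on the detected core count.
    if command_line_jobs is not None:
        return command_line_jobs
    return min(os.cpu_count() or 1, max_cpus)
```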
This will be particularly useful for Spack usage outside of a traditional
HPC context and for HPC centers that encourage users to compile on
login nodes with many cores instead of on compute nodes, a practice that
has become increasingly common as individual nodes gain more cores.
This maintains the existing default value of min(num_cpus, 16). However,
as it stands, Spack does a poor job of determining the number of
CPUs on Linux, since it doesn't take cgroups into account. This is
particularly problematic when using distributed builds with Slurm. This PR
also introduces `spack.util.cpus.cpus_available()` to consolidate
knowledge on determining the number of available cores, and it improves
core detection on Linux. This should also improve core detection under
Docker and Kubernetes, which likewise use cgroups.
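A rough sketch of what cgroup-aware core detection can look like on Linux;
the actual `spack.util.cpus.cpus_available()` may differ in detail:

```python
import os

def cpus_available():
    # Sketch only; Spack's real implementation may differ. On Linux,
    # os.sched_getaffinity(0) reflects the cpuset this process may run
    # on (e.g. as restricted by Slurm, Docker, or Kubernetes via
    # cgroups), whereas os.cpu_count() reports every core in the
    # machine regardless of such limits.
    try:
        return len(os.sched_getaffinity(0))
    except AttributeError:
        # Platforms without sched_getaffinity (e.g. macOS, Windows).
        return os.cpu_count() or 1
```

With a helper like this, the default job count becomes min(cpus_available(), 16),
which respects the cores a scheduler or container runtime actually grants.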
Diffstat (limited to 'share')
0 files changed, 0 insertions, 0 deletions