Fixes #33785

---

This reverts commit 4d28a6466188ea9fe3b55a4c3d7690dd66e0dc8f.

---

Currently, external `PythonPackage`s cause install failures because the logic in `PythonPackage` assumes that it can ask for `spec["python"]`. Because we chop off externals' dependencies, an external Python extension may not have a `python` dependency.
This PR resolves the issue by guaranteeing that a `python` node is present in one of two ways:
1. If there is already a `python` node in the DAG, we wire the external up to it.
2. If there is no existing `python` node, we wire up a synthetic external `python` node, and we assume that it has the same prefix as the external.
The assumption in (2) isn't always valid, but it's better than leaving the user with a non-working `PythonPackage`.
The logic here is specific to `python`, but other types of extensions could take advantage of it. Packages need only define `update_external_dependencies(self)`, and this method will be called on externals after concretization. This likely needs to be fleshed out in the future so that any added nodes are included in concretization, but for now we only bolt on dependencies post-concretization.
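A minimal sketch of the hook, assuming the standard Spack package API; the class and body are illustrative, not the actual logic used for `python` externals:
```python
# Hypothetical extension package; only the hook's name comes from this change.
from spack.package import *


class PyExampleExt(PythonPackage):
    """Sketch of an external Python extension (not a real package)."""

    def update_external_dependencies(self):
        # Called on external packages after concretization. A package could
        # use this to wire a missing dependency node into self.spec, e.g. an
        # existing or synthetic external `python` node.
        pass
```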
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>

---

This was overlooked when we added binary patchelf buildcaches

---

Spack currently creates a temporary sbang that is moved "atomically" into place,
but this temporary file causes races when multiple processes start installing sbang.
Let's just stick to an idempotent approach. Notice that we only re-install sbang
if Spack updates it (since we compare file contents), and sbang was only touched
18 times in the past 6 years, whereas we hit the sbang tempfile issue
frequently with parallel installs on a fresh Spack instance in CI.
Also fixes a bug where permissions weren't updated if the config changed but
the latest version of the sbang file was already installed.
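A minimal sketch of the idempotent approach, with hypothetical paths and mode (not Spack's actual code):
```python
# Compare-then-copy is idempotent: concurrent installers converge on the
# same file instead of racing over a shared temporary.
import filecmp
import os
import shutil


def install_sbang(new_sbang: str, installed_sbang: str, mode: int = 0o755) -> None:
    # Rewrite only when the installed copy is missing or its contents differ.
    if not os.path.exists(installed_sbang) or not filecmp.cmp(
        new_sbang, installed_sbang, shallow=False
    ):
        shutil.copyfile(new_sbang, installed_sbang)
    # Re-assert permissions even when the file is already current, so a
    # config-only change still takes effect (the bug fixed above).
    os.chmod(installed_sbang, mode)
```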

---

* Fix https://github.com/spack/spack/issues/31640
Some packages in the binary cache fail checksum validation. Instead of having to
go back and manually install each failed package with the `--no-cache` option,
requeue those failed packages for installation from source (a sketch follows the transcript below):
```shell
$ spack install py-pip
==> Installing py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7
==> Fetching https://binaries.spack.io/releases/v0.18/build_cache/linux-amzn2-graviton2-gcc-7.3.1-py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7.spec.json.sig
gpg: Signature made Wed 20 Jul 2022 12:13:43 PM UTC using RSA key ID 3DB0C723
gpg: Good signature from "Spack Project Official Binaries <maintainers@spack.io>"
==> Fetching https://binaries.spack.io/releases/v0.18/build_cache/linux-amzn2-graviton2/gcc-7.3.1/py-pip-21.3.1/linux-amzn2-graviton2-gcc-7.3.1-py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7.spack
==> Extracting py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7 from binary cache
==> Error: Failed to install py-pip due to NoChecksumException: Requeue for manual installation.
==> Installing py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7
==> Using cached archive: /shared/spack/var/spack/cache/_source-cache/archive/de/deaf32dcd9ab821e359cd8330786bcd077604b5c5730c0b096eda46f95c24a2d
==> No patches needed for py-pip
==> py-pip: Executing phase: 'install'
==> py-pip: Successfully installed py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7
Fetch: 0.01s. Build: 2.81s. Total: 2.82s.
[+] /shared/spack/opt/spack/linux-amzn2-graviton2/gcc-7.3.1/py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7
```
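A self-contained sketch of the requeue flow shown in the transcript; the helper functions are hypothetical stand-ins, not the real API in `lib/spack/spack/installer.py`:
```python
# Sketch only: on a checksum failure, fall back to a source build instead
# of aborting the whole install.
class NoChecksumException(Exception):
    """Raised when a binary package fails checksum validation."""


def install_from_binary_cache(spec: str) -> None:
    raise NoChecksumException(spec)  # stand-in: pretend validation failed


def install_from_source(spec: str) -> None:
    print(f"==> {spec}: building from source")


def install(spec: str, use_cache: bool = True) -> None:
    if use_cache:
        try:
            install_from_binary_cache(spec)
            return
        except NoChecksumException:
            print(f"==> Error: {spec} failed checksum validation; requeueing")
    install_from_source(spec)


install("py-pip")
```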
* Cleanup style
* better wording
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
* Update lib/spack/spack/installer.py
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
* changes quotes for style checks
* Update lib/spack/spack/installer.py
Co-authored-by: kwryankrattiger <80296582+kwryankrattiger@users.noreply.github.com>
* Address @kwryankrattiger's comment to use a local `use_cache`
Co-authored-by: Stephen Sachs <stesachs@amazon.com>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: kwryankrattiger <80296582+kwryankrattiger@users.noreply.github.com>

---

The `intel` compiler at versions > 20 is provided by the `intel-oneapi-compilers-classic`
package (a thin wrapper around the `intel-oneapi-compilers` package), and the `oneapi`
compiler is provided by the `intel-oneapi-compilers` package.
Prior to this work, neither of these compilers could be bootstrapped by Spack as part of
an install with `install_missing_compilers: True`.
Changes made to make these two packages bootstrappable:
1. The `intel-oneapi-compilers-classic` package includes a bin directory and symlinks
   to the compiler executables, not just logical pointers in Spack.
2. Spack can look for bootstrapped compilers in directories other than `$prefix/bin`,
   defined on a per-package basis.
3. `intel-oneapi-compilers` specifies a non-default search directory for the
   compiler executables (see the sketch after this list).
4. The `spack.compilers` module can now make more advanced associations between
   packages and compilers, not just simple name translations.
5. Spack support for lmod hierarchies accounts for differences between package
   names and the associated compiler names for `intel-oneapi-compilers/oneapi`,
   `intel-oneapi-compilers-classic/intel@20:`, `llvm+clang/clang`, and
   `llvm-amdgpu/rocmcc`.
- [x] full end-to-end testing
- [x] add unit tests
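A hedged sketch of the per-package search-directory override from items 2-3; the property name and directory layout are assumptions for illustration, not the exact interface:
```python
from spack.package import *


class IntelOneapiCompilers(Package):
    """Illustrative stub, not the real spack package."""

    @property
    def compiler_search_dirs(self):  # hypothetical property name
        # Compiler binaries live under a versioned subtree rather than the
        # conventional $prefix/bin.
        return [self.prefix.compiler.latest.linux.bin]
```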

---

"spack install foo" no longer adds package "foo" to the environment
(i.e. to the list of root specs) by default: you must specify "--add".
Likewise "spack uninstall foo" no longer removes package "foo" from
the environment: you must specify --remove. Generally this means
that install/uninstall commands will no longer modify the users list
of root specs (which many users found problematic: they had to
deactivate an environment if they wanted to uninstall a spec without
changing their spack.yaml description).
In more detail: if you have environments e1 and e2, and specs [P, Q, R]
such that P depends on R, Q depends on R, [P, R] are in e1, and [Q, R]
are in e2:
* `spack uninstall --dependents --remove r` in e1: removes R from e1
(but does not uninstall it) and uninstalls (and removes) P
* `spack uninstall -f --dependents r` in e1: will uninstall P, Q, and
R (i.e. e2 will have dependent specs uninstalled as a side effect)
* `spack uninstall -f --dependents --remove r` in e1: this uninstalls
P, Q, and R, and removes [P, R] from e1
* `spack uninstall -f --remove r` in e1: uninstalls R (so it is
"missing" in both environments) and removes R from e1 (note that e1
would still install R as a dependency of P, but it would no longer
be listed as a root spec)
* `spack uninstall --dependents r` in e1: will fail because e2 needs R
Individual unit tests were created for each of these scenarios.

---

All files/dirs containing ".spack" anywhere in their name were ignored when
generating a spack view.
For example, this happened with the 'r' package.

---

* reorder packages.yaml: requirements first, then preferences
* expand preferences vs reuse vs requirements

---

Somehow a network error when cloning the repo for CI gets
categorized by GitLab as a script failure. To make sure we retry
jobs that failed for that reason or a similar one, include
"script_failure" as one of the reasons for retrying service jobs
(which include "no specs to rebuild" jobs, update-buildcache-index
jobs, and temp storage cleanup jobs).

---

Add a `project` block to the toml config along with development and CI
dependencies and a minimal `build-system` block, doing basically
nothing, so that spack can be bootstrapped to a full development
environment with:
```shell
$ hatch -e dev shell
```
or for a minimal environment without hatch:
```shell
$ python3 -m venv venv
$ source venv/bin/activate
$ python3 -m pip install --upgrade pip
$ python3 -m pip install -e '.[dev]'
```
This means we can re-use the requirements list throughout the workflow
yaml files and otherwise maintain this list in *one place* rather than
several disparate ones. We may be stuck with a couple more temporarily
to continue supporting python2.7, but aside from that it's fewer places
to get out of sync and a couple of new bootstrap options.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

---

This change uses the aws cli, if available, to retrieve spec files
from the mirror to a local temp directory, then parallelizes the
reading of those files from disk using `multiprocessing.pool.ThreadPool`.
If the aws cli is not available, then a ThreadPool is used to fetch
and read the spec files from the mirror.
Using the aws cli results in a ~16x speedup when recreating the binary
mirror index, while just parallelizing the fetching and reading
results in a ~3x speedup.
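An illustrative sketch under the assumption of an s3 mirror and `*.spec.json` filenames; the helper shapes are not the exact Spack code:
```python
import json
import shutil
import subprocess
import tempfile
from multiprocessing.pool import ThreadPool
from pathlib import Path


def read_spec_files(mirror_url: str) -> list:
    tmpdir = tempfile.mkdtemp()
    if shutil.which("aws"):
        # Bulk-copy all spec files locally in a single aws cli invocation.
        subprocess.check_call(
            ["aws", "s3", "cp", "--recursive",
             "--exclude", "*", "--include", "*.spec.json",
             mirror_url, tmpdir]
        )
    files = list(Path(tmpdir).rglob("*.spec.json"))
    # Parse the local copies in parallel; I/O-bound work suits threads.
    with ThreadPool() as pool:
        return pool.map(lambda p: json.loads(p.read_text()), files)
```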

---

in env (#32228)
The compiler bootstrapping logic currently does not add a task when the compiler package is already in the install task queue. This causes failures when the compiler package is added without the additional metadata telling the task to update the compilers list.
Solution: requeue compilers for bootstrapping when needed, to update `task.compiler` metadata.

---

Currently, develop specs that are not roots and are not explicitly listed dependencies
of the roots are not applied.
- [x] ensure dev specs are applied.
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>

---

Without the `lsb-release` tool installed, Spack cannot identify the
Ubuntu/Debian version.

---

`spack env create` enables a view by default (in a weird hidden
directory, but well...). This is asking for trouble with the other
default of `concretizer:unify:false`, since having different flavors of
the same spec in an environment leads to collision errors when
generating the view.
A change of defaults would improve the user experience, and `unify:true`
makes the most sense: any time the issue is brought up in Slack, the
user changes the concretization config, since having different flavors
of the same spec was never the intention, and install times decrease
with unification.
Further, we improve the docs and drop the duplicate root spec limitation.

---

* Update archspec version
* Add a translation table from old names

---

Dependencies specified by hash are unique in Spack in that the abstract
specs are created with internal structure. In this case, the constraint
generation for spec matrices fails due to flattening the structure.
It turns out that the dep_difference method for Spec.constrain does not
need to operate on transitive deps to ensure correctness. Removing transitive
deps from this method resolves the bug.
- [x] Includes regression test

---

* extract virtual dependencies from reusable specs
* bugfix to avoid establishing new node for virtual

---

virtuals are not (#18269)
* respect spec buildable that overrides virtual buildable

---

Without this, Meson will use its Wraps to automatically download and
install dependencies. We want to manage dependencies explicitly, so we
disable this functionality.

---

Currently, Spack can fail for a valid spec if the spec is constructed from overlapping, but not conflicting, concrete specs via the hash.
For example, if `abcdef` and `ghijkl` are the hashes of specs that both depend on `zlib/mnopqr`, then `foo ^/abcdef ^/ghijkl` will fail to construct a spec, with the error message "Cannot depend on zlib... twice".
This PR changes this behavior to check whether the specs are compatible before failing.
With this PR, `foo ^/abcdef ^/ghijkl` will concretize.
As a side-effect, so will `foo ^zlib ^zlib` and other specs that are redundant on their dependencies.

---

Co-authored-by: becker33 <becker33@users.noreply.github.com>

---

Argparse started raising `ArgumentError` exceptions when the same parser
is added twice. Therefore, we now perform the addition only if the
parser is not already there.
Port match syntax to our unparser
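A small sketch of the duplicate-parser guard, assuming an argparse subparsers object; `choices` maps already-registered names to their parsers:
```python
import argparse

parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers(dest="command")


def get_subparser(name: str) -> argparse.ArgumentParser:
    # Re-adding an existing name raises ArgumentError on newer Pythons,
    # so reuse the parser that is already registered.
    if name in subparsers.choices:
        return subparsers.choices[name]
    return subparsers.add_parser(name)
```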

---

Compilers and linkers optimize string constants for space by aliasing
them when one is a suffix of another. For gcc / binutils this happens
already at -O1, due to -fmerge-constants. This means that we have
to take care during relocation to always preserve a certain length
of the suffix of those prefixes that are C-strings.
In this commit we pick length 7 as a safe suffix length, assuming the
suffix is typically the 7 characters from the hash (i.e. random), so
it's unlikely to alias with any string constant used in the sources.
In general we now pad shortened strings from the left with leading
directory separators, but in the case of C-strings that are much shorter
and don't share a common suffix (due to projections), we do allow
shrinking the C-string, appending a null, and retaining the old part
of the prefix.
Also when rewiring, we ensure that the new hash preserves the last
7 bytes of the old hash.
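An illustrative sketch of the padding scheme (assumed helper, not Spack's relocation code): replacements keep their original length by left-padding with directory separators, so the trailing bytes -- including the 7-character hash suffix -- are preserved:
```python
def replace_prefix(data: bytes, old: bytes, new: bytes) -> bytes:
    # Left-pad the shorter replacement so every occurrence keeps its
    # original length and its suffix bytes stay untouched.
    assert len(new) <= len(old), "in-place replacement cannot grow"
    return data.replace(old, b"/" * (len(old) - len(new)) + new)
```
Leading separators are safe padding because repeated `/` characters are equivalent to a single one in POSIX paths.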
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>

---

A user may want to set some attributes on a package without actually modifying the package (e.g. if they want to git pull updates to the package without conflicts). This PR adds a per-package configuration section, `package_attributes`, which is a dictionary of attribute names to desired values. For example:

    packages:
      openblas:
        package_attributes:
          submodules: true
          git: "https://github.com/myfork/openblas"

In this case, the package will always retrieve git submodules and will use an alternate location for the git repo.
While `git`, `url`, and `submodules` are the attributes for which we envision the most usage, this allows any attribute to be overridden, and the acceptable values are anything parseable from yaml.

---

* demote warning for no source id to debug message

---

Newer versions of the CrayPE for EX systems have standalone compiler executables for CCE and compiler wrappers for Cray MPICH. With those, we can treat Cray systems as part of the linux platform rather than having a separate cray platform.
This PR:
- [x] Changes cray platform detection to ignore EX systems with Craype version 21.10 or later
- [x] Changes the cce compiler to be detectable via paths
- [x] Changes the spack compiler wrapper to understand the executable names for the standalone cce compiler (`craycc`, `crayCC`, `crayftn`).

---

Whenever the rpath string actually _grows_, we fall back to patchelf;
when it stays the same length or gets shorter, we update it in place,
padding with null bytes.
This PR only deals with absolute -> absolute rpath replacement. We don't
use `_build_tarball(relative=True)` in our CI. If `relative`, then it falls
back to the old replacement code.
With this PR, relocation time goes down significantly, likely because patchelf
does some odd things with mmap, causing lots of overhead. Examples:
- `binutils`: 700MB installed, goes from `1.91s` to `0.57s`, or `3.4x` faster.
  Relocation time: 27% -> 10% of total install time.
- `llvm`: 6.8GB installed, goes from `28.56s` to `5.38s`, or `5.3x` faster.
  Relocation time: 44% -> 13% of total install time.
The bottleneck is now decompression.
Note: I'm somewhat confused about the "relative rpath" code paths. Right
now this PR only deals with absolute -> absolute replacement. As far as
I understand, if you embrace relative rpaths when uploading to the
buildcache, the whole point is you _don't_ want to patch rpaths on
install? So it seems fine to not expand `$ORIGIN` again imho.
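A minimal sketch of the in-place case (offsets into the binary are taken as given; real code must first locate the rpath string):
```python
def rewrite_cstring(buf: bytearray, offset: int, old: bytes, new: bytes) -> bool:
    # Overwrite the old C-string and pad the rest with NUL bytes; returning
    # False signals the patchelf fallback for rpaths that would grow.
    if len(new) > len(old):
        return False
    end = offset + len(old) + 1  # include the terminating NUL
    buf[offset:end] = new + b"\0" * (len(old) - len(new) + 1)
    return True
```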

---

When a package asks for a non-parallel make, we need to force `make -j1` because just doing `make` will run in parallel under a jobserver (e.g. `spack env depfile`).
We now always add `-j1` when asked for a non-parallel execution (even if there is no jobserver).
Each `MakeExecutable` can now also declare whether it supports the jobserver. For example, the default `ninja` does not support the jobserver, so Spack applies the default `-j`; but `ninja@kitware` and `ninja-fortran` do, so Spack doesn't add `-j`.
Tip: you can run `SPACK_INSTALL_FLAGS=-j1 make -f spack-env-depfile.make -j8` to avoid massive job-spawning because of build tools that don't support the jobserver (ninja).
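A sketch of the flag decision during a jobserver-managed build (assumed function shape, not the exact `MakeExecutable` interface):
```python
def jobs_flags(parallel: bool, jobs: int, supports_jobserver: bool) -> list:
    # A non-parallel build must pin -j1, or an inherited jobserver would
    # still run a bare `make` in parallel.
    if not parallel:
        return ["-j1"]
    if supports_jobserver:
        return []  # inherit the job count from the jobserver
    return [f"-j{jobs}"]  # tools like plain ninja need an explicit -j
```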

---

We try to avoid non-default variant values in the concretizer, but this doesn't make
sense for variants forced to take some non-default value by variant propagation.
Counting this as a penalty effectively biases the concretizer toward specs with small
dependency graphs -- something we try very hard to avoid elsewhere because it can lead
to very strange decisions.
Example: with the penalty, `spack spec hdf5` will choose the default `openmpi` as its
`mpi` provider, but `spack spec hdf5 ~~shared` will choose `mpich` because it has to set
fewer non-default variant values because `mpich`'s DAG is smaller. That's not a good
reason to prefer a non-default virtual provider.
To fix this, if the user explicitly requests a non-default value to be propagated, there
shouldn't be a penalty. Variant values set on the CLI already don't count as default; we
just need to extend that to propagated values.

---

Adds another post-install hook that loops over the install prefix, looking for ELF files that are shared libraries, and sets each library's soname to its own absolute path.
The idea being: whenever somebody links against those libraries, the linker copies the soname (which is the absolute path to the library) as a "needed" library, so that at runtime the dynamic loader realizes the needed library is a path, which should be loaded directly without searching.
As a result:
1. rpaths are not used for the fixed/static list of needed libraries in the dynamic section (only for _actually_ dynamically loaded libraries through `dlopen`), which largely solves the issue that Spack's rpaths are a heuristic (`<prefix>/lib` and `<prefix>/lib64` might not be where libraries really are...)
2. improved startup times (no library search required)
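A hedged sketch of the hook's effect via the patchelf CLI (the real hook may edit the ELF directly; traversal and helpers are omitted):
```python
import subprocess


def set_soname_to_own_path(lib_path: str) -> None:
    # With an absolute soname, consumers record the full path as DT_NEEDED,
    # so the dynamic loader loads it directly instead of searching rpaths.
    subprocess.check_call(["patchelf", "--set-soname", lib_path, lib_path])
```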

---

Untouched spec pruning was added to reduce the number of specs
developers see getting rebuilt in their PR pipelines that they
don't understand. Because the state of the develop mirror lags
quite far behind the tip of the develop branch, PRs often find
they need to rebuild things untouched by their PR.
Untouched spec pruning was previously implemented by finding all
specs in the environment with names of packages touched by the PR,
traversing the DAGs of those specs in both directions, and adding
all dependencies as well as dependents to a list of concrete specs
that should not be considered for pruning.
We found that this heuristic results in too many pruned specs, and
that dependents of touched specs must have all their dependencies
added to the list of specs that should not be considered for pruning.
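A sketch of the corrected keep-set computation, with `dependencies` and `dependents` as hypothetical callables returning transitive closures:
```python
def specs_to_keep(touched, dependencies, dependents):
    # Keep what a touched spec needs, every spec that needs it, and -- the
    # fix described above -- everything those dependents need as well.
    keep = set()
    for spec in touched:
        keep.add(spec)
        keep |= dependencies(spec)
        for dep in dependents(spec):
            keep.add(dep)
            keep |= dependencies(dep)
    return keep
```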