This change adds all documented AMDGPU processors from GFX9 through GFX11 and sorts the list.
* Remove CI jobs related to Python 2.7
* Remove Python 2.7 specific code from Spack core
* Remove externals for Python 2 only
* Remove llnl.util.compat
We added a hotfix to releases/v0.19 with a feature flag, but the flag
is incompatible with the config schema on `develop`.
- [x] Ensure schema is compatible on develop even though config option is unused.
* Speed-up bootstrap mirror unit test
The unit test doesn't need to concretize, since it checks
only metadata for the mirror.
* architecture.py: use "default_mock_concretization" for slow test
(#33860)
* Use spack.spack_version_info as source of truth
Co-authored-by: Todd Gamblin <gamblin2@llnl.gov>
Environments and environment views have taken over the role of `spack activate/deactivate`, and we should deprecate these commands for several reasons:
- Global activation is a really poor idea:
  - Install prefixes should be immutable, since they can have multiple, unrelated dependents; see below.
  - It adds complexity elsewhere: verifying installations, creating tarballs for build caches, and generating environment views of packages with unrelated, "globally activated" extensions. Removing the feature makes it easier for people to contribute, and we end up with fewer bugs due to edge cases.
- Environments accomplish the same thing as non-global "activation" (i.e. `spack view`), but better.
Also we write in the docs:
```
However, Spack global activations have two potential drawbacks:
#. Activated packages that involve compiled C extensions may still
need their dependencies to be loaded manually. For example,
``spack load openblas`` might be required to make ``py-numpy``
work.
#. Global activations "break" a core feature of Spack, which is that
multiple versions of a package can co-exist side-by-side. For example,
suppose you wish to run a Python package in two different
environments but the same basic Python --- one with
``py-numpy@1.7`` and one with ``py-numpy@1.8``. Spack extensions
will not support this potential debugging use case.
```
Now that environments are established and views can take over the role of activation
non-destructively, we can remove global activation/deactivation.
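For reference, a sketch of the environment-plus-view workflow that replaces
global activation (environment and package names are illustrative):
```shell
# Create an environment; a view is enabled for it by default
$ spack env create myenv
$ spack env activate myenv
# Extensions installed into the environment are merged into its view,
# so there is no need to globally activate them into the python prefix
$ spack install python py-numpy
```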
* remove deprecated top-level module config per deprecation in 0.18
* Use BFS order in `spack find --deps` tree
* Fix printing tests
fixes #33785
This reverts commit 4d28a6466188ea9fe3b55a4c3d7690dd66e0dc8f.
Currently, external `PythonPackage`s cause install failures because the logic in `PythonPackage` assumes that it can ask for `spec["python"]`. Because we chop off externals' dependencies, an external Python extension may not have a `python` dependency.
This PR resolves the issue by guaranteeing that a `python` node is present in one of two ways:
1. If there is already a `python` node in the DAG, we wire the external up to it.
2. If there is no existing `python` node, we wire up a synthetic external `python` node, and we assume that it has the same prefix as the external.
The assumption in (2) isn't always valid, but it's better than leaving the user with a non-working `PythonPackage`.
The logic here is specific to `python`, but other types of extensions could take advantage of it. Packages need only define `update_external_dependencies(self)`, and this method will be called on externals after concretization. This likely needs to be fleshed out in the future so that any added nodes are included in concretization, but for now we only bolt on dependencies post-concretization.
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
This was overlooked when we added binary patchelf buildcaches
Spack currently creates a temporary sbang script that is moved "atomically" into
place, but this temporary file causes races when multiple processes install sbang
at the same time. Let's just stick to an idempotent approach. Note that we only
re-install sbang if Spack updates it (we compare file contents), and sbang was
only touched 18 times in the past 6 years, whereas we hit the sbang tempfile
issue frequently with parallel installs on a fresh Spack instance in CI.
Also fixes a bug where permissions weren't updated if config changed but
the latest version of the sbang file was already installed.
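A minimal sketch of the idempotent pattern described above; the variable names
are illustrative, not Spack's actual implementation:
```shell
# Re-install only when contents differ; always (re)apply permissions,
# so that a config change alone still fixes the mode.
if ! cmp -s "$new_sbang" "$installed_sbang"; then
    cp "$new_sbang" "$installed_sbang"
fi
chmod 755 "$installed_sbang"
```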
* Fix https://github.com/spack/spack/issues/31640
Some packages in the binary cache fail checksum validation. Instead of having to
go back and manually install all failed packages with the `--no-cache` option,
requeue those failed packages for installation from source:
```shell
$ spack install py-pip
==> Installing py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7
==> Fetching https://binaries.spack.io/releases/v0.18/build_cache/linux-amzn2-graviton2-gcc-7.3.1-py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7.spec.json.sig
gpg: Signature made Wed 20 Jul 2022 12:13:43 PM UTC using RSA key ID 3DB0C723
gpg: Good signature from "Spack Project Official Binaries <maintainers@spack.io>"
==> Fetching https://binaries.spack.io/releases/v0.18/build_cache/linux-amzn2-graviton2/gcc-7.3.1/py-pip-21.3.1/linux-amzn2-graviton2-gcc-7.3.1-py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7.spack
==> Extracting py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7 from binary cache
==> Error: Failed to install py-pip due to NoChecksumException: Requeue for manual installation.
==> Installing py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7
==> Using cached archive: /shared/spack/var/spack/cache/_source-cache/archive/de/deaf32dcd9ab821e359cd8330786bcd077604b5c5730c0b096eda46f95c24a2d
==> No patches needed for py-pip
==> py-pip: Executing phase: 'install'
==> py-pip: Successfully installed py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7
Fetch: 0.01s. Build: 2.81s. Total: 2.82s.
[+] /shared/spack/opt/spack/linux-amzn2-graviton2/gcc-7.3.1/py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7
```
* Cleanup style
* better wording
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
* Update lib/spack/spack/installer.py
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
* changes quotes for style checks
* Update lib/spack/spack/installer.py
Co-authored-by: kwryankrattiger <80296582+kwryankrattiger@users.noreply.github.com>
* Address @kwryankrattiger's comment to use a local `use_cache`
Co-authored-by: Stephen Sachs <stesachs@amazon.com>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: kwryankrattiger <80296582+kwryankrattiger@users.noreply.github.com>
The `intel` compiler at versions > 20 is provided by the `intel-oneapi-compilers-classic`
package (a thin wrapper around the `intel-oneapi-compilers` package), and the `oneapi`
compiler is provided by the `intel-oneapi-compilers` package.
Prior to this work, neither of these compilers could be bootstrapped by Spack as part of
an install with `install_missing_compilers: True`.
Changes made to make these two packages bootstrappable:
1. The `intel-oneapi-compilers-classic` package includes a bin directory and symlinks
to the compiler executables, not just logical pointers in Spack.
2. Spack can look for bootstrapped compilers in directories other than `$prefix/bin`,
defined on a per-package basis.
3. `intel-oneapi-compilers` specifies a non-default search directory for the
compiler executables.
4. The `spack.compilers` module can now make more advanced associations between
packages and compilers, not just simple name translations.
5. Spack support for lmod hierarchies accounts for differences between package
names and the associated compiler names for `intel-oneapi-compilers/oneapi`,
`intel-oneapi-compilers-classic/intel@20:`, `llvm+clang/clang`, and
`llvm-amdgpu/rocmcc`.
- [x] full end-to-end testing
- [x] add unit tests
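For illustration, one way to exercise the new bootstrapping path (the compiler
version is hypothetical):
```shell
# Let Spack install compilers that a spec needs but that are missing
$ spack config add "config:install_missing_compilers:true"
# Requesting a oneapi compiler now bootstraps intel-oneapi-compilers first
$ spack install zlib %oneapi@2022.1.0
```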
"spack install foo" no longer adds package "foo" to the environment
(i.e. to the list of root specs) by default: you must specify "--add".
Likewise "spack uninstall foo" no longer removes package "foo" from
the environment: you must specify --remove. Generally this means
that install/uninstall commands will no longer modify the users list
of root specs (which many users found problematic: they had to
deactivate an environment if they wanted to uninstall a spec without
changing their spack.yaml description).
In more detail: if you have environments e1 and e2, and specs [P, Q, R]
such that P depends on R, Q depends on R, [P, R] are in e1, and [Q, R]
are in e2:
* `spack uninstall --dependents --remove r` in e1: removes R from e1
(but does not uninstall it) and uninstalls (and removes) P
* `spack uninstall -f --dependents r` in e1: will uninstall P, Q, and
R (i.e. e2 will have dependent specs uninstalled as a side effect)
* `spack uninstall -f --dependents --remove r` in e1: this uninstalls
P, Q, and R, and removes [P, R] from e1
* `spack uninstall -f --remove r` in e1: uninstalls R (so it is
"missing" in both environments) and removes R from e1 (note that e1
would still install R as a dependency of P, but it would no longer
be listed as a root spec)
* `spack uninstall --dependents r` in e1: will fail because e2 needs R
Individual unit tests were created for each of these scenarios.
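A short sketch of the new defaults inside an active environment (spec names
illustrative):
```shell
$ spack install --add foo       # install foo and add it to the root specs
$ spack install foo             # no longer modifies the environment's roots
$ spack uninstall --remove foo  # uninstall foo and drop it from spack.yaml
$ spack uninstall foo           # uninstall foo; spack.yaml is left untouched
```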
All files/dirs with ".spack" anywhere in their name were ignored when
generating a spack view. For example, this happened with the 'r' package.
* reorder packages.yaml: requirements first, then preferences
* expand preferences vs reuse vs requirements
Somehow a network error when cloning the repo for CI gets
categorized by GitLab as a script failure. To make sure we retry
jobs that failed for that reason or a similar one, include
"script_failure" as one of the reasons for retrying service jobs
(which include "no specs to rebuild" jobs, buildcache index update
jobs, and temp storage cleanup jobs).
Add a `project` block to the toml config along with development and CI
dependencies and a minimal `build-system` block, doing basically
nothing, so that spack can be bootstrapped to a full development
environment with:
```shell
$ hatch -e dev shell
```
or for a minimal environment without hatch:
```shell
$ python3 -m venv venv
$ source venv/bin/activate
$ python3 -m pip install --upgrade pip
$ python3 -m pip install -e '.[dev]'
```
This means we can re-use the requirements list throughout the workflow
YAML files and otherwise maintain this list in *one place* rather than
several disparate ones. We may be stuck with a couple of extra copies
temporarily to continue supporting Python 2.7, but aside from that there
are fewer places to get out of sync, plus a couple of new bootstrap options.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This change uses the aws cli, if available, to retrieve spec files
from the mirror to a local temp directory, then parallelizes the
reading of those files from disk using multiprocessing.ThreadPool.
If the aws cli is not available, then a ThreadPool is used to fetch
and read the spec files from the mirror.
Using the aws CLI results in a ~16x speed-up when recreating the binary
mirror index, while just parallelizing the fetching and reading
results in a ~3x speed-up.
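Roughly the kind of bulk transfer the aws CLI path performs (bucket name and
paths are illustrative):
```shell
# Download all spec files from the mirror in one parallel operation,
# then read them from local disk instead of fetching URLs one by one.
$ aws s3 cp --recursive \
    --exclude '*' --include '*.spec.json*' \
    s3://my-mirror/build_cache/ /tmp/specs/
```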
in env (#32228)
The compiler bootstrapping logic currently does not add a task when the compiler package is already in the install task queue. This causes failures when the compiler package is added without the additional metadata telling the task to update the compilers list.
Solution: requeue compilers for bootstrapping when needed, to update `task.compiler` metadata.
Currently, develop specs that are not roots and are not explicitly listed dependencies
of the roots are not applied.
- [x] ensure dev specs are applied.
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Without the `lsb-release` tool installed, Spack cannot identify the
Ubuntu/Debian version.
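For example, on a Debian/Ubuntu image (package manager invocation assumed):
```shell
# Install the tool that distro detection relies on here
$ sudo apt-get install -y lsb-release
$ lsb_release -ds   # prints e.g. "Ubuntu 20.04.5 LTS"
```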
`spack env create` enables a view by default (in a weird hidden
directory, but well...). This is asking for trouble with the other
default of `concretizer:unify:false`, since having different flavors of
the same spec in an environment leads to collision errors when
generating the view.
A change of defaults would improve the user experience: `unify:true`
makes the most sense, since any time the issue is brought up in Slack,
the user changes the concretization config anyway (having different
flavors of the same spec was never the intention), and install times
are decreased.
Further, we improve the docs and drop the duplicate root spec limitation.
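The new default corresponds to the setting users previously had to apply by
hand:
```shell
# Equivalent manual configuration before the change of defaults
$ spack config add concretizer:unify:true
$ spack config get concretizer   # verify the active setting
```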
* Update archspec version
* Add a translation table from old names
Dependencies specified by hash are unique in Spack in that the abstract
specs are created with internal structure. In this case, the constraint
generation for spec matrices fails due to flattening the structure.
It turns out that the dep_difference method for Spec.constrain does not
need to operate on transitive deps to ensure correctness. Removing transitive
deps from this method resolves the bug.
- [x] Includes regression test
* extract virtual dependencies from reusable specs
* bugfix to avoid establishing new node for virtual
virtuals are not (#18269)
* respect spec buildable that overrides virtual buildable
Without this, Meson will use its Wraps to automatically download and
install dependencies. We want to manage dependencies explicitly,
therefore disable this functionality.
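The flag in question, shown here on a plain Meson invocation (build directory
name illustrative):
```shell
# Forbid Meson from downloading subproject dependencies via wrap files
$ meson setup build --wrap-mode=nodownload
```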
Currently, Spack can fail for a valid spec if the spec is constructed from overlapping, but not conflicting, concrete specs via the hash.
For example, if abcdef and ghijkl are the hashes of specs that both depend on zlib/mnopqr, then foo ^/abcdef ^/ghijkl will fail to construct a spec, with the error message "Cannot depend on zlib... twice".
This PR changes this behavior to check whether the specs are compatible before failing.
With this PR, foo ^/abcdef ^/ghijkl will concretize.
As a side-effect, so will foo ^zlib ^zlib and other specs that are redundant on their dependencies.
Co-authored-by: becker33 <becker33@users.noreply.github.com>
Argparse started raising ArgumentError exceptions
when the same parser is added twice. Therefore, we
perform the addition only if the parser is not there
already.
Port match syntax to our unparser.