Age | Commit message | Author | Files | Lines |
|
py-protobuf 2.x (#32977)
* Add checksum for py-protobuf 4.21.7, protobuf 21.7
* Update var/spack/repos/builtin/packages/protobuf/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
* Update var/spack/repos/builtin/packages/protobuf/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/protobuf/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
* Update package.py
* Delete protoc2.5.0_aarch64.patch
* Update package.py
* Restore but deprecate py-protobuf 3.0.0a/b; deprecate py-tensorflow 0.x
* Fix audit
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
|
|
* Patch fmt for hipcc/dpcpp
* Add msimberg as fmt maintainer
|
|
|
|
Changelog at https://gitlab.cern.ch/hepmc/HepMC3/-/tags/3.2.5
Maintainer: @vvolkl
|
|
Spack currently creates a temporary sbang script that is moved "atomically" into place,
but this temporary file causes races when multiple processes start installing sbang.
Let's just stick to an idempotent approach. Note that we only re-install sbang
if Spack updates it (we do a file comparison), and sbang was only touched
18 times in the past 6 years, whereas we hit the sbang tempfile issue
frequently with parallel installs on a fresh Spack instance in CI.
Also fixes a bug where permissions weren't updated if the config changed but
the latest version of the sbang file was already installed.
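A minimal sketch of that idempotent approach (illustrative only; the function name and helpers are assumptions, not Spack's actual sbang code):
```python
# Rewrite the installed script only when its contents differ, and reapply
# permissions unconditionally so a permissions/config-only change still
# takes effect even if the file itself is already current.
import filecmp
import os
import shutil

def install_sbang(new_sbang, installed_sbang, mode=0o755):
    up_to_date = os.path.exists(installed_sbang) and filecmp.cmp(
        new_sbang, installed_sbang, shallow=False
    )
    if not up_to_date:
        shutil.copyfile(new_sbang, installed_sbang)
    os.chmod(installed_sbang, mode)  # always applied, fixing the permissions bug
```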
|
|
|
|
|
|
|
|
* Fix https://github.com/spack/spack/issues/31640
Some packages in the binary cache fail checksum validation. Instead of having to
go back and manually install all failed packages with the `--no-cache` option,
requeue those failed packages for installation from source (a sketch of the
requeue logic follows this commit message).
```shell
$ spack install py-pip
==> Installing py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7
==> Fetching https://binaries.spack.io/releases/v0.18/build_cache/linux-amzn2-graviton2-gcc-7.3.1-py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7.spec.json.sig
gpg: Signature made Wed 20 Jul 2022 12:13:43 PM UTC using RSA key ID 3DB0C723
gpg: Good signature from "Spack Project Official Binaries <maintainers@spack.io>"
==> Fetching https://binaries.spack.io/releases/v0.18/build_cache/linux-amzn2-graviton2/gcc-7.3.1/py-pip-21.3.1/linux-amzn2-graviton2-gcc-7.3.1-py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7.spack
==> Extracting py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7 from binary cache
==> Error: Failed to install py-pip due to NoChecksumException: Requeue for manual installation.
==> Installing py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7
==> Using cached archive: /shared/spack/var/spack/cache/_source-cache/archive/de/deaf32dcd9ab821e359cd8330786bcd077604b5c5730c0b096eda46f95c24a2d
==> No patches needed for py-pip
==> py-pip: Executing phase: 'install'
==> py-pip: Successfully installed py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7
Fetch: 0.01s. Build: 2.81s. Total: 2.82s.
[+] /shared/spack/opt/spack/linux-amzn2-graviton2/gcc-7.3.1/py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7
```
* Cleanup style
* better wording
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
* Update lib/spack/spack/installer.py
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
* changes quotes for style checks
* Update lib/spack/spack/installer.py
Co-authored-by: kwryankrattiger <80296582+kwryankrattiger@users.noreply.github.com>
* Addressing @kwryankrattiger's comment to use a local `use_cache`
Co-authored-by: Stephen Sachs <stesachs@amazon.com>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: kwryankrattiger <80296582+kwryankrattiger@users.noreply.github.com>
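A minimal sketch of the requeue idea referenced above (illustrative only; `NoChecksumException` mirrors the log output, but the other names are assumptions rather than Spack's actual `installer.py` code):
```python
# A binary-cache install that fails checksum validation is put back on the
# work queue with use_cache=False so the package is rebuilt from source
# instead of aborting the whole installation.
import queue

class NoChecksumException(Exception):
    """Stand-in for the checksum failure shown in the log above."""

def drain(tasks, install_from_cache, install_from_source):
    while True:
        try:
            spec, use_cache = tasks.get_nowait()
        except queue.Empty:
            break
        try:
            if use_cache:
                install_from_cache(spec)
            else:
                install_from_source(spec)
        except NoChecksumException:
            tasks.put((spec, False))  # requeue the same spec for a source build
```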
|
|
|
|
|
|
|
|
fixes #33747
|
|
Use at most 32 jobs when available.
|
|
|
|
The `intel` compiler at versions > 20 is provided by the `intel-oneapi-compilers-classic`
package (a thin wrapper around the `intel-oneapi-compilers` package), and the `oneapi`
compiler is provided by the `intel-oneapi-compilers` package.
Prior to this work, neither of these compilers could be bootstrapped by Spack as part of
an install with `install_missing_compilers: True`.
Changes made to make these two packages bootstrappable:
1. The `intel-oneapi-compilers-classic` package includes a bin directory and symlinks
to the compiler executables, not just logical pointers in Spack (see the sketch after this list).
2. Spack can look for bootstrapped compilers in directories other than `$prefix/bin`,
defined on a per-package basis.
3. `intel-oneapi-compilers` specifies a non-default search directory for the
compiler executables.
4. The `spack.compilers` module can now make more advanced associations between
packages and compilers, not just simple name translations.
5. Spack support for lmod hierarchies accounts for differences between package
names and the associated compiler names for `intel-oneapi-compilers/oneapi`,
`intel-oneapi-compilers-classic/intel@20:`, `llvm+clang/clang`, and
`llvm-amdgpu/rocmcc`.
- [x] full end-to-end testing
- [x] add unit tests
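A purely illustrative sketch of point 1 (the function and compiler names are assumptions; this is not the real `intel-oneapi-compilers-classic` package code):
```python
# Expose the classic compiler executables through a real bin/ directory of
# symlinks, so they exist on disk rather than only as logical pointers in Spack.
import os

def link_classic_compilers(oneapi_bin, classic_bin, names=("icc", "icpc", "ifort")):
    os.makedirs(classic_bin, exist_ok=True)
    for name in names:
        target = os.path.join(oneapi_bin, name)
        link = os.path.join(classic_bin, name)
        if os.path.lexists(link):
            os.remove(link)
        os.symlink(target, link)
```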
|
|
Co-authored-by: Thomas Jahns <jahns@dkrz.de>
|
|
* PyTorch: add v1.13.0
* py-torchaudio: add v0.13.0
* py-torchaudio: add all versions
* py-torchvision: jpeg required for all backends
* py-torchtext: add v0.14.0
* py-torchtext: fix build
* py-torchaudio: fix build
* py-torchtext: update version tag
* Use Spack-built sox
* Explicitly disable sox build
* https -> http
|
|
* Add zstd support for elfutils
Not defining `+zstd` implies passing the `--without-zstd` flag to configure.
This avoids automatic library detection and thus makes the build depend only
on Spack-installed dependencies.
* Use autotools helper "with_or_without"
* Revert use of with_or_without
Using `with_or_without()` with the `variant` keyword does not seem to work,
so the flags are passed explicitly (a sketch of that mapping follows).
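A sketch of that explicit variant-to-flag mapping (illustrative; in a real package this logic would live in `configure_args()` and check `self.spec`):
```python
def zstd_configure_args(spec):
    """Translate the +zstd/~zstd variant into an explicit configure flag."""
    return ["--with-zstd"] if "+zstd" in spec else ["--without-zstd"]

print(zstd_configure_args("elfutils+zstd"))  # ['--with-zstd']
print(zstd_configure_args("elfutils~zstd"))  # ['--without-zstd']
```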
|
|
"spack install foo" no longer adds package "foo" to the environment
(i.e. to the list of root specs) by default: you must specify "--add".
Likewise "spack uninstall foo" no longer removes package "foo" from
the environment: you must specify --remove. Generally this means
that install/uninstall commands will no longer modify the users list
of root specs (which many users found problematic: they had to
deactivate an environment if they wanted to uninstall a spec without
changing their spack.yaml description).
In more detail: if you have environments e1 and e2, and specs [P, Q, R]
such that P depends on R, Q depends on R, [P, R] are in e1, and [Q, R]
are in e2:
* `spack uninstall --dependents --remove r` in e1: removes R from e1
(but does not uninstall it) and uninstalls (and removes) P
* `spack uninstall -f --dependents r` in e1: will uninstall P, Q, and
R (i.e. e2 will have dependent specs uninstalled as a side effect)
* `spack uninstall -f --dependents --remove r` in e1: this uninstalls
P, Q, and R, and removes [P, R] from e1
* `spack uninstall -f --remove r` in e1: uninstalls R (so it is
"missing" in both environments) and removes R from e1 (note that e1
would still install R as a dependency of P, but it would no longer
be listed as a root spec)
* `spack uninstall --dependents r` in e1: will fail because e2 needs R
Individual unit tests were created for each of these scenarios.
|
|
All files/dirs containing ".spack" anywhere in their name were ignored when
generating a Spack view.
For example, this happened with the 'r' package.
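An illustrative sketch of the bug (the predicate names and the example file name are hypothetical, not the actual view-generation code):
```python
# The buggy predicate matches ".spack" anywhere in the path; a tighter check
# excludes only paths that have ".spack" as an exact path component.
import os

def excluded_buggy(path):
    return ".spack" in path

def excluded_fixed(path):
    return ".spack" in path.split(os.sep)

print(excluded_buggy("lib/R/library/utils.spack.rds"))  # True: wrongly hidden
print(excluded_fixed("lib/R/library/utils.spack.rds"))  # False: kept in the view
print(excluded_fixed(".spack/spec.json"))               # True: metadata stays excluded
```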
|
|
|
|
* reorder packages.yaml: requirements first, then preferences
* expand preferences vs reuse vs requirements
|
|
Somehow a network error when cloning the repo for CI gets
categorized by GitLab as a script failure. To make sure we retry
jobs that failed for that reason or a similar one, include
"script_failure" as one of the reasons for retrying service jobs
(which include "no specs to rebuild" jobs, update buildcache
index jobs, and temp storage cleanup jobs).
|
|
Add a `project` block to the toml config along with development and CI
dependencies and a minimal `build-system` block, doing basically
nothing, so that spack can be bootstrapped to a full development
environment with:
```shell
$ hatch -e dev shell
```
or for a minimal environment without hatch:
```shell
$ python3 -m venv venv
$ source venv/bin/activate
$ python3 -m pip install --upgrade pip
$ python3 -m pip install -e '.[dev]'
```
This means we can re-use the requirements list throughout the workflow
YAML files and otherwise maintain this list in *one place* rather than
several disparate ones. We may be stuck with a couple more temporarily
to continue supporting Python 2.7, but aside from that it's fewer places
to get out of sync, plus a couple of new bootstrap options.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
|
|
Bumps [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) from 2.2.0 to 2.2.1.
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](https://github.com/docker/setup-buildx-action/compare/c74574e6c82eeedc46366be1b0d287eff9085eb6...8c0edbc76e98fa90f69d9a2c020dcb50019dc325)
---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
dependency-type: direct:production
update-type: version-update:semver-patch
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
|
|
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 3.1.0 to 3.1.1.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/3cea5372237819ed00197afe530f5a7ea3e805c8...83fd05a356d7e2593de66fc9913b3002723633cb)
---
updated-dependencies:
- dependency-name: actions/upload-artifact
dependency-type: direct:production
update-type: version-update:semver-patch
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
|
|
This change uses the aws CLI, if available, to retrieve spec files
from the mirror to a local temp directory, then parallelizes the
reading of those files from disk using multiprocessing.ThreadPool.
If the aws CLI is not available, then a ThreadPool is used to fetch
and read the spec files from the mirror.
Using the aws CLI results in a ~16x speedup when recreating the binary
mirror index, while just parallelizing the fetching and reading
results in a ~3x speedup.
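A rough sketch of the fallback path (illustrative only, not the actual Spack code; the helper names are assumptions):
```python
# Fetch and parse spec files concurrently with a thread pool; the HTTP
# fetches are I/O-bound, so threads give a useful speedup even without
# the aws CLI fast path.
import json
from multiprocessing.pool import ThreadPool
from urllib.request import urlopen

def fetch_spec(url):
    with urlopen(url) as response:
        return json.load(response)

def fetch_all_specs(urls, workers=16):
    with ThreadPool(processes=workers) as pool:
        return pool.map(fetch_spec, urls)
```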
|
|
|
|
|
|
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 3.1.1 to 3.2.0.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/c84f38281176d4c9cdb1626ffafcd6b3911b5d94...c56af957549030174b10d6867f20e78cfd7debc5)
---
updated-dependencies:
- dependency-name: docker/build-push-action
dependency-type: direct:production
update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
|
|
Bumps [dorny/paths-filter](https://github.com/dorny/paths-filter) from 2.10.2 to 2.11.1.
- [Release notes](https://github.com/dorny/paths-filter/releases)
- [Changelog](https://github.com/dorny/paths-filter/blob/master/CHANGELOG.md)
- [Commits](https://github.com/dorny/paths-filter/compare/b2feaf19c27470162a626bd6fa8438ae5b263721...4512585405083f25c027a35db413c2b3b9006d50)
---
updated-dependencies:
- dependency-name: dorny/paths-filter
dependency-type: direct:production
update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
|
|
Bumps [docker/setup-qemu-action](https://github.com/docker/setup-qemu-action) from 2.0.0 to 2.1.0.
- [Release notes](https://github.com/docker/setup-qemu-action/releases)
- [Commits](https://github.com/docker/setup-qemu-action/compare/8b122486cedac8393e77aa9734c3528886e4a1a8...e81a89b1732b9c48d79cd809d8d81d79c4647a18)
---
updated-dependencies:
- dependency-name: docker/setup-qemu-action
dependency-type: direct:production
update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
|
|
in env (#32228)
The compiler bootstrapping logic currently does not add a task when the compiler package is already in the install task queue. This causes failures when the compiler package is added without the additional metadata telling the task to update the compilers list.
Solution: requeue compilers for bootstrapping when needed, to update `task.compiler` metadata.
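A minimal sketch of that requeue-with-metadata idea (illustrative only; the `tasks` mapping and the `update_compilers` flag are assumptions, not the actual bootstrapping code):
```python
# If the compiler's package is already in the task queue but lacks the
# bootstrap metadata, flag (requeue) it so the compilers list is updated
# once the package finishes installing.
def ensure_compiler_bootstrap(tasks, compiler_pkg):
    task = tasks.get(compiler_pkg)
    if task is None:
        tasks[compiler_pkg] = {"update_compilers": True}
    elif not task.get("update_compilers"):
        task["update_compilers"] = True
```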
|
|
Currently, develop specs that are not roots and are not explicitly listed dependencies
of the roots are not applied.
- [x] ensure dev specs are applied.
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
|
|
|
|
|
|
|
|
|
|
|
|
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Co-authored-by: eugeneswalker <38933153+eugeneswalker@users.noreply.github.com>
|
|
Without the `lsb-release` tool installed, Spack cannot identify the
Ubuntu/Debian version.
|
|
|
|
|
|
Co-authored-by: adamjstewart <adamjstewart@users.noreply.github.com>
|
|
`spack env create` enables a view by default (in a weird hidden
directory, but well...). This is asking for trouble with the other
default of `concretizer:unify:false`, since having different flavors of
the same spec in an environment leads to collision errors when
generating the view.
A change of defaults would improve the user experience. `unify:true`
makes the most sense, since any time the issue is brought up in Slack,
the user changes the concretization config anyway (having different
flavors of the same spec was never the intention), and install times
are decreased.
Further, we improve the docs and drop the duplicate root spec limitation.
|
|
* Update archspec version
* Add a translation table from old names
|
|
|
|
Dependencies specified by hash are unique in Spack in that the abstract
specs are created with internal structure. In this case, the constraint
generation for spec matrices fails due to flattening the structure.
It turns out that the dep_difference method for Spec.constrain does not
need to operate on transitive deps to ensure correctness. Removing transitive
deps from this method resolves the bug.
- [x] Includes regression test
|
|
* extract virtual dependencies from reusable specs
* bugfix to avoid establishing new node for virtual
|