* ci: version bump for ghcr.io/spack/e4s-amazonlinux-2
This new image comes with GnuPG v2.4.0
* py-cython: upper bounds for Python versions
* fix py-gevent nonsense
---------
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
* CI configuration boilerplate reduction and refactor
Configuration:
- New notation for list concatenation (prepend/append)
- New notation for string concatenation (prepend/append)
- Break out configuration files for: ci.yaml, cdash.yaml, view.yaml
- Spack CI section refactored to improve self-consistency and
composability
- Scripts are now lists of lists and/or lists of strings
- Job attributes are now listed under a precedence-ordered list that is
  composed/merged using Spack config merge rules.
- "service-jobs" are identified explicitly rather than as a batch
CI:
- Consolidate common, platform, and architecture configurations for all CI stacks into composable configuration files
- Make padding consistent across all stacks (256)
- Merge all package -> runner mappings to be consistent across all
stacks
Unit Test:
- Refactor CI module unit tests for the refactored configuration
Docs:
- Add docs for new notations in configuration.rst
- Rewrite docs on CI pipelines to be consistent with the refactored CI
  workflow
* Script verbose environ, dev bootstrap
* Port #35409
By setting the traversal depth to 1, only specs matching the changed
package and direct dependents of those (and of course all dependencies
of that set) are removed from pruning candidacy.
ref. #35400
* Update target names for Gitlab pipelines
* Remove handling of deprecated names for graviton
* Style: black 23, skip magic trailing commas
* isort should use same line length as black
* Fix unused import
* Update version of black used in CI
* Update new packages
* Update new packages
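A hypothetical invocation illustrating the settings above (the paths and the 99-character line length are assumptions, not read from Spack's configuration):
```console
$ black --skip-magic-trailing-comma --line-length 99 lib/spack
$ isort --line-length 99 lib/spack
```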
* gpu test stack: test cuda@12 builds on A100 w/ newer driver
* get GPU info via nvidia-smi (see the query sketch below)
* kokkos+cuda^cuda@12 has a genuine failure
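A sketch of how device and driver information can be queried with nvidia-smi; the exact fields the stack collects are an assumption:
```console
$ nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv,noheader
```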
* ci: add minimal gpu testing stack
* kokkos +cuda requires +wrapper...
* require pass
* add raja+cuda
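As a hedged illustration of the constraint above, a spec for this stack might be requested roughly like this (the CUDA architecture and version are assumptions):
```console
$ spack spec kokkos +cuda +wrapper cuda_arch=80 ^cuda@12
```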
* e4s: restore builds
* gitlab ci: allow UO to build protected binaries for signing
* use newer image; comment out failing builds
* gitlab-ci: Some tweaks for e4s power builds
- fix tags (no longer require generate jobs to run on aws)
- fix resource requests for generation jobs
- remove SPACK_SIGNING_KEY from protected power build jobs
- update UO signing key path
- change the CDash build group to reflect stack name
- retry pipeline generation jobs *always*
* correct duplicated `packages:` section
* gitlab-ci:script: modernize
* remove new gnu make, not for ppc64le
---------
Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
* Add --exclude option to 'spack external find' to ignore user-specified external packages
* Update bash completion arg order for external find
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
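A hypothetical invocation of the new option (package names are illustrative; `--exclude` is assumed here to be repeatable):
```console
$ spack external find --all --exclude cmake --exclude openssl
```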
* Allocate more memory for generate jobs in all stacks
* Add a 60 minute timeout on generate jobs
(#35022)
Since SPACK_PACKAGE_IDS is now also "namespaced" with <prefix>, it makes
more sense to call the flag `--make-prefix` and alias the old flag
`--make-target-prefix` to it.
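A minimal sketch of the renamed flag, assuming an active environment in the current directory (the prefix value is illustrative), so the variable becomes `build/SPACK_PACKAGE_IDS`:
```console
$ spack -e . env depfile -o Makefile --make-prefix build
```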
* license bump year
* fix black issues of modified files
* mypy
* fix 2021 -> 2023
With the new variable [prefix/]SPACK_PACKAGE_IDS you can conveniently execute
things after each successful install. For example, push just-built packages to a
build cache:
```
SPACK ?= spack
export SPACK_COLOR = always
MAKEFLAGS += -Orecurse
MY_BUILDCACHE := $(CURDIR)/cache

.PHONY: all clean

all: push

ifeq (,$(filter clean,$(MAKECMDGOALS)))
include env.mk
endif

# the relevant part: push has *all* example/push/<pkg identifier> as prereqs
push: $(addprefix example/push/,$(example/SPACK_PACKAGE_IDS))
	$(SPACK) -e . buildcache update-index --directory $(MY_BUILDCACHE)
	$(info Pushed everything, yay!)

# and each example/push/<pkg identifier> has the install target as prereq,
# and the body can use target local $(HASH) and $(SPEC) variables to do
# things, such as pushing to a build cache
example/push/%: example/install/%
	@mkdir -p $(dir $@)
	$(SPACK) -e . buildcache create --allow-root --only=package --unsigned --directory $(MY_BUILDCACHE) /$(HASH) # push $(SPEC)
	@touch $@

spack.lock: spack.yaml
	$(SPACK) -e . concretize -f

env.mk: spack.lock
	$(SPACK) -e . env depfile -o $@ --make-target-prefix example

clean:
	rm -rf spack.lock env.mk example/
```
With this change we get the invariant that `mirror.fetch_url` and
`mirror.push_url` return valid URLs, even when the backing config
file is actually using (relative) paths with potentially `$spack` and
`$env` like variables.
Secondly, it avoids expanding mirror paths / URLs too early: if I say
`spack mirror add name ./path`, it stays `./path` in my config. When it is
retrieved through MirrorCollection(), we expand it to, say,
`file://<env dir>/path` if `./path` was set in an environment scope.
Thirdly, the interface is simplified for the relevant buildcache
commands, so it's more like `git push`:
```
spack buildcache create [mirror] [specs...]
```
`mirror` is either a mirror name, a path, or a URL.
Resolving the relevant mirror goes as follows:
- If it contains either / or \ it is used as an anonymous mirror with
path or url.
- Otherwise, it's interpreted as a named mirror, which must exist.
This helps guard against typos: typing `my-mirror` when there
is no such named mirror now errors with:
```
$ spack -e . buildcache create my-mirror
==> Error: no mirror named "my-mirror". Did you mean ./my-mirror?
```
instead of creating a directory in the current working directory. I
think this is reasonable, as the alternative (requiring that a local dir
exists) feels a bit pedantic in the general case -- Spack is happy to
create the build cache dir when needed, saving a `mkdir`.
The old (now deprecated) format will still be available in Spack 0.20,
but is scheduled to be removed in 0.21:
```
spack buildcache create (--directory | --mirror-url | --mirror-name) [specs...]
```
This PR also touches `tmp_scope` in tests, because it didn't really
work for me, since spack fixes the possible --scope values once and
for all across tests, so tests failed when run out of order.
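To make the resolution rules concrete, a hedged example (mirror name, path, and spec are illustrative):
```console
$ spack mirror add my-cache ./my-cache          # stored as ./my-cache in config
$ spack -e . buildcache create my-cache hdf5    # named mirror (must exist)
$ spack -e . buildcache create ./my-cache hdf5  # contains "/": anonymous path mirror
```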
Sometimes I just want to know how many packages of a certain type there are.
- [x] add `--count` option to `spack list` that outputs the number of packages that
*would* be listed.
```console
> spack list --count
6864
> spack list --count py-
2040
> spack list --count r-
1162
```
* paraview: add `rocm` variant
This conflicts with CUDA and requires at least ParaView 5.11.0. More
dependencies are also needed.
* E4S: Add ParaView for ROCm and CUDA stacks
* DAV SDK: Update ParaView version and GPU variants
* Verify using hipcc vs amdclang++ for newer hip
Co-authored-by: Ben Boeckel <ben.boeckel@kitware.com>
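A hypothetical spec exercising the new variant (the GPU target shown is an assumption):
```console
$ spack spec paraview +rocm amdgpu_target=gfx90a
```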
Gitlab does not merge lists when a job extends two other definitions
that include the same list (e.g. tags). Also, it merges dictionaries
as long as the keys are distinct, but just takes the last mentioned
value when there are key collisions.
This change makes sure that when different tags are needed by a
pipeline, the ones we want are actually provided. It also changes
the example stack to better follow this pattern so we do not lead
developers astray in the future.
`spack graph` has been reworked to use:
- Jinja templates
- builder objects to construct the template context when DOT graphs are requested.
This made it possible to add a new colored output for DOT graphs that highlights both
the dependency types and the nodes that are needed at runtime for a given spec.
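For example, the DOT output can be piped to Graphviz to render an image (the spec name is illustrative):
```console
$ spack graph --dot hdf5 | dot -Tsvg -o hdf5.svg
```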
* ML CI: Linux x86_64
* Update comments
* Rename again
* Rename comments
* Update to match other arches
* No compiler
* Compiler was wrong anyway
* Faster TF
The main issue that's fixed is that Spack passes paths (as strings) to
functions that require URLs. That wasn't an issue on Unix, since there
you can simply concatenate `file://` and `path` and all is good, but on
Windows that gives invalid file URLs. Also, on Unix, Spack would not deal with URI encoding like x%20y for file paths.
It also removes Spack's custom url.parse function, which had its own incorrect interpretation of file URLs, taking file://x/y to mean the relative path x/y instead of hostname=x and path=/y. It also automatically interpolated variables, which is surprising for a function that parses URLs.
Instead of all sorts of ad-hoc `if windows: fix_broken_file_url` this PR
adds two helper functions around Python's own path2url and reverse.
Also fixes a bug where some `spack buildcache` commands
used `-d` as a flag to mean `--mirror-url` requiring a URL, and others
`--directory`, requiring a path. It is now the latter consistently.
It's very common for us to tell users to grep through the existing Spack packages to
find examples of what they want, and it's also very common for package developers to do
it. Now, searching packages is even easier.
`spack pkg grep` runs grep on all `package.py` files in repos known to Spack. It has no
special options other than the search string; all options passed to it are forwarded
along to `grep`.
```console
> spack pkg grep --help
usage: spack pkg grep [--help] ...
positional arguments:
grep_args arguments for grep
options:
--help show this help message and exit
```
```console
> spack pkg grep CMakePackage | head -3
/Users/gamblin2/src/spack/var/spack/repos/builtin/packages/3dtk/package.py:class _3dtk(CMakePackage):
/Users/gamblin2/src/spack/var/spack/repos/builtin/packages/abseil-cpp/package.py:class AbseilCpp(CMakePackage):
/Users/gamblin2/src/spack/var/spack/repos/builtin/packages/accfft/package.py:class Accfft(CMakePackage, CudaPackage):
```
```console
> spack pkg grep -Eho '(\S*)\(PythonPackage\)' | head -3
AwsParallelcluster(PythonPackage)
Awscli(PythonPackage)
Bueno(PythonPackage)
```
## Return Value
This retains the return value semantics of `grep`:
* 0 for found
* 1 for not found
* >1 for error
## Choosing a `grep`
You can set the ``SPACK_GREP`` environment variable to choose the ``grep``
executable this command should use.
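For example, to point the command at a different grep binary (the executable name and pattern are illustrative):
```console
$ SPACK_GREP=ggrep spack pkg grep 'depends_on("mpi")'
```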
Unit tests on Windows are supposed to pass for any PR to pass CI.
However, the return code for the unit test command was not being
checked, which meant this check was always passing (effectively
disabled). This PR
* Properly checks the result of the unit tests and fails if the
unit tests fail
* Fixes (or disables on Windows) a number of tests which have
"drifted" out of support on Windows since this check was
effectively disabled
At some point the `a` mock package became an `AutotoolsPackage`, and that means it
depends on `gnuconfig` on macOS. This was causing one of our shell tests to fail on
macOS because it was testing for `{a.prefix.bin}:{b.prefix.bin}` in `PATH`, but
`gnuconfig` shows up between them.
- [x] simplify the test to check `spack load --sh a` and `spack load --sh b` separately
This commit reworks the bootstrapping procedure to use Spack environments
as much as possible.
The `spack.bootstrap` module has also been reorganized into a Python package.
A distinction is made between "core" Spack dependencies (clingo, GnuPG, patchelf)
and other dependencies. For a number of reasons, explained in the `spack.bootstrap.core`
module docstring, "core" dependencies are bootstrapped with the current ad-hoc
method.
All the other dependencies are instead bootstrapped using a Spack environment
that lives in a directory specific to the interpreter and the architecture being used.
* py-alphafold: update to 2.2.4, update dependencies
* style
* patch command: add concretizer args
* tab completion
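A hedged example of passing a concretizer argument through the patch command (the flag and spec shown are assumptions):
```console
$ spack patch --fresh py-numpy
```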