path: root/share
Age | Commit message | Author | Files changed (lines -/+)
2021-11-02 | Add tag filters to `spack test list` (#26842) | Tamara Dahlgren | 1 file (-1/+6)
2021-11-01 | feature: add "spack tags" command (#26136) | Tamara Dahlgren | 1 file (-2/+11)
This PR adds a "spack tags" command to output package tags or (available) packages with those tags. It also ensures each package is listed in the tag cache ONLY ONCE per tag.
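For illustration, a possible use of the new command; the exact invocation is not shown in this log entry, and "radiuss" is only an example tag, so treat this as a sketch:

```console
# List all tags currently defined by packages
$ spack tags

# List the (available) packages carrying a given tag
# ("radiuss" is an illustrative tag name, not taken from this entry)
$ spack tags radiuss
```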
2021-11-01 | ci: Enable more packages in the DVSDK CI pipeline (#27025) | Chuck Atkins | 1 file (-1/+10)
* ci: Enable more packages in the DVSDK CI pipeline
* doxygen: Add conflicts for gcc bugs
* dray: Add version constraints for api breakage with newer deps
2021-10-29 | pipelines: llvm kills the xlarge, use huge (#27079) | Scott Wittenburg | 2 files (-3/+10)
2021-10-29 | Fix exit codes in the POSIX shell wrapper (#27012) | Harmen Stoppels | 2 files (-6/+21)
* Correct exit code in sh wrapper
* Fix tests
* SC2069
2021-10-29 | Fix exit codes in fish (#27028) | Harmen Stoppels | 2 files (-13/+29)
2021-10-28 | bugfix: config edit should work with a malformed `spack.yaml` | Todd Gamblin | 2 files (-0/+75)
If your `spack.yaml` is malformed, `spack config edit` previously failed as well, so you had to locate and fix `spack.yaml` by hand.
- [x] Add some code to `_main()` to defer `ConfigFormatError` when loading the environment, until we know what command is being run.
- [x] Make `spack config edit` use `SPACK_ENV` instead of the config scope object to find `spack.yaml`, so it can work even if the environment is bad.
Co-authored-by: scheibelp <scheibel1@llnl.gov>
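A rough sketch of the workflow this fixes; the environment name and the deliberate YAML breakage below are illustrative assumptions, not taken from this entry:

```console
# With an environment active (so SPACK_ENV is set)...
$ spack env activate myenv                          # "myenv" is an example name

# ...whose spack.yaml has since been edited into invalid YAML
$ echo "bad: [unclosed" >> $SPACK_ENV/spack.yaml    # illustrative breakage only

# Before this change, loading the environment raised ConfigFormatError here;
# now the error is deferred and spack.yaml can still be opened for repair
$ spack config edit
```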
2021-10-28 | Deactivate previous env before activating new one (#25409) | Harmen Stoppels | 3 files (-7/+85)
* Deactivate previous env before activating new one

Currently on develop you can run `spack env activate` multiple times to switch between environments, but they leave traces, even though Spack only supports one active environment at a time.

Currently:

```console
$ spack env create a
$ spack env create b
$ spack env activate -p a
[a] $ spack env activate -p b
[b] [a] $ spack env activate -p b
[a] [b] [a] $ spack env activate -p a
[a] [b] [c] $ echo $MANPATH | tr ":" "\n"
/path/to/environments/a/.spack-env/view/share/man
/path/to/environments/a/.spack-env/view/man
/path/to/environments/b/.spack-env/view/share/man
/path/to/environments/b/.spack-env/view/man
```

This PR fixes that:

```console
$ spack env activate -p a
[a] $ spack env activate -p b
[b] $ spack env activate -p a
[a] $ echo $MANPATH | tr ":" "\n"
/path/to/environments/a/.spack-env/view/share/man
/path/to/environments/a/.spack-env/view/man
```
2021-10-28 | spack setup-env.sh: make zsh loading async compatible, and ~10x faster (in some cases) (#26120) | Tom Scogland | 3 files (-10/+23)
Currently spack is a bit of a bad actor as a zsh plugin, and it was my fault. The autoload and compinit should really be handled by the user, as was made abundantly clear when I found spack was doing completion initialization for *all* of my plugins due to a deferred setup that was getting messed up by it.

Making this conditional took spack load time from 1.5 seconds (with module loading disabled) to 0.029 seconds. I can actually afford to load spack by default with this change in.

Hopefully someday we'll do proper zsh completion support, but for now this helps a lot.

* use zsh hist expansion in place of dirname
* only run (bash)compinit if compdef/complete missing
* add zsh compiled files to .gitignore
* move changes to .in file, because spack
2021-10-27 | Remove documentation tests from GitHub Actions (#26981) | Massimiliano Culpo | 1 file (-26/+0)
The documentation tests moved to Read the Docs a while ago, so remove the remaining job on GitHub Actions.
2021-10-25 | containerize: pin the Spack version used in a container (#21910) | Massimiliano Culpo | 11 files (-4/+181)
This PR makes it possible to specify the `url` and `ref` of the Spack instance used in a container recipe, simply by expanding the YAML schema as outlined in #20442:

```yaml
container:
  images:
    os: amazonlinux:2
    spack:
      ref: develop
      resolve_sha: true
```

The `resolve_sha` option, if true, verifies the `ref` by cloning the Spack repository in a temporary directory and transforming any tag or branch name into a commit sha. When this new ability is leveraged, an additional "bootstrap" stage is added, which builds an image with Spack set up and ready to install software. The Spack repository to be used can be customized with the `url` keyword under `spack`.

Modifications:
- [x] Permit to pin the version of Spack, either by branch or tag or sha
- [x] Added a few new OSes (centos:8, amazonlinux:2, ubuntu:20.04, alpine:3, cuda:11.2.1)
- [x] Permit to print the bootstrap image as a standalone
- [x] Add documentation on the new part of the schema
- [x] Add unit tests for different use cases
2021-10-22 | E4S amd64 CI: add parsec (#26906) | eugeneswalker | 1 file (-0/+1)
2021-10-21 | update E4S CI environments in preparation for 21.11 release (#26826) | eugeneswalker | 2 files (-283/+34)
* update E4S CI environments in preparation for 21.11 release
* e4s ci env: use clingo
2021-10-20 | Add --preferred and --latest to `spack checksum` (#25830) | Tamara Dahlgren | 1 file (-1/+1)
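For illustration, how the new flags might be invoked; `zlib` is just an example package, and the behavior described in the comments is inferred from the flag names rather than stated in this log entry:

```console
# Checksum only the package's preferred version
$ spack checksum --preferred zlib

# Checksum only the most recent version
$ spack checksum --latest zlib
```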
2021-10-19 | Relax os constraints in e4s pipelines (#26547) | Massimiliano Culpo | 3 files (-3/+3)
2021-10-19 | Gitlab pipelines: use images from the Spack organization (#26796) | Massimiliano Culpo | 6 files (-16/+13)
2021-10-11 | Add `spack env activate --temp` (#25388) | Harmen Stoppels | 3 files (-3/+15)
Creates an environment in a temporary directory and activates it, which is useful for a quick ephemeral environment:

```console
$ spack env activate -p --temp
[spack-1a203lyg] $ spack add zlib
==> Adding zlib to environment /tmp/spack-1a203lyg
==> Updating view at /tmp/spack-1a203lyg/.spack-env/view
```
2021-10-11 | Remove unused --dependencies flag (#25731) | Harmen Stoppels | 1 file (-1/+1)
2021-10-01 | Retry pipeline generation jobs in certain cases | Scott Wittenburg | 1 file (-0/+5)
2021-10-01 | Add DAG scheduling to child pipelines | Scott Wittenburg | 1 file (-0/+30)
2021-10-01 | Service jobs do not need an active environment | Scott Wittenburg | 5 files (-10/+0)
2021-10-01 | Activate concrete env dir in all service jobs | Scott Wittenburg | 5 files (-10/+6)
2021-10-01 | Use same image to build e4s as to generate it | Scott Wittenburg | 1 file (-3/+3)
2021-10-01 | Use same image to build dvsdk as to generate it | Scott Wittenburg | 1 file (-2/+2)
2021-10-01 | Use default runner image for radiuss | Scott Wittenburg | 1 file (-6/+2)
2021-09-30 | Build container images on Github Actions and push to multiple registries (#26247) | Massimiliano Culpo | 5 files (-28/+23)
Modifications:
- Modify the workflow to build container images without pushing when the workflow file itself is modified
- Strip the leading ghcr.io/spack/ from env.container and env.versioned to prepare pushing to multiple registries
- Fixed CentOS 7 and Amazon Linux builds
- Login and push to Docker Hub as well as GitHub Actions
- Add a badge to README.md with the status of docker images
2021-09-24 | Pipelines: Disable ppc builds until we have resources or make it smaller (#26238) | Scott Wittenburg | 1 file (-24/+24)
2021-09-24 | Remove centos:6 image references (#26095) | Harmen Stoppels | 1 file (-74/+0)
CentOS 6 reached EOL on November 30th, 2020. I believe the "builds" are failing on develop because of it.
2021-09-14 | Make clingo the default solver (#25502) | Massimiliano Culpo | 1 file (-1/+1)
Modifications:
- [x] Change `defaults/config.yaml`
- [x] Add a fix for bootstrapping patchelf from sources if `compilers.yaml` is empty
- [x] Make `SPACK_TEST_SOLVER=clingo` the default for unit-tests
- [x] Fix package failures in the e4s pipeline

Caveats:
1. CentOS 6 still uses the original concretizer as it can't connect to the buildcache due to issues with `ssl` (bootstrapping from sources requires a C++14 capable compiler)
2. I had to update the image tag for GitlabCI in e699f14.
3. libtool v2.4.2 has been deprecated and other packages received some update
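As a rough sketch of what this change means in practice; the expected output in the first comment and the unit-test file path in the second command are assumptions for illustration, not taken from this entry:

```console
# The default solver is now set in defaults/config.yaml
# (expected to report something like "concretizer: clingo")
$ spack config get config | grep concretizer

# Unit tests also default to clingo; the old solver can still be selected
# explicitly via the environment variable mentioned in the checklist above
$ SPACK_TEST_SOLVER=original spack unit-test lib/spack/spack/test/concretize.py
```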
2021-09-14 | Pipelines: (Re)enable E4S on Power stack (#25921) | Scott Wittenburg | 2 files (-21/+21)
2021-09-13 | ci: Add ecp-data-vis-sdk CI pipeline (#22179) | Chuck Atkins | 2 files (-0/+92)
* ci: Add a minimal subset of the ECP Data & Vis SDK CI pipeline
* ci: Expand the ECP Data & Vis SDK pipeline with more variants
2021-09-10 | Pipelines: build kokkos-kernels on bigger instance (#25845) | Scott Wittenburg | 1 file (-0/+1)
2021-09-09 | specs: move to new spec.json format with build provenance (#22845) | Nathan Hanford | 1 file (-8/+8)
This is a major rework of Spack's core `spec.yaml` metadata format. It moves from `spec.yaml` to `spec.json` for speed, and it changes the format in several ways. Specifically:

1. The spec format now has a `_meta` section with a version (now set to version `2`). This will simplify major changes like this one in the future.
2. The node list in spec dictionaries is no longer keyed by name. Instead, it is a list of records with no required key. The name, hash, etc. are fields in the dictionary records like any other.
3. Dependencies can be keyed by any hash (`hash`, `full_hash`, `build_hash`).
4. `build_spec` provenance from #20262 is included in the spec format. This means that, for spliced specs, we preserve the *full* provenance of how to build, and we can reproduce a spliced spec from the original builds that produced it.

**NOTE**: Because we have switched the spec format, this PR changes Spack's hashing algorithm. This means that after this commit, Spack will think a lot of things need rebuilds.

There are two major benefits this PR provides:
* The switch to JSON format speeds up Spack significantly, as Python's builtin JSON implementation is orders of magnitude faster than YAML.
* The new Spec format will soon allow us to represent DAGs with potentially multiple versions of the same dependency -- e.g., for build dependencies or for compilers-as-dependencies. This PR lays the necessary groundwork for those features.

The old `spec.yaml` format continues to be supported, but is now considered a legacy format, and Spack will opportunistically convert these to the new `spec.json` format.
2021-09-08 | url stats: add `--show-issues` option (#25792) | Todd Gamblin | 1 file (-1/+1)
* tests: make `spack url [stats|summary]` work on mock packages

  Mock packages have historically had mock hashes, but this means they're also invalid as far as Spack's hash detection is concerned.
  - [x] convert all hashes in mock packages to md5 or sha256
  - [x] ensure that all mock packages have a URL
  - [x] ignore some special cases with multiple VCS fetchers

* url stats: add `--show-issues` option

  `spack url stats` tells us how many URLs are using what protocol, type of checksum, etc., but it previously did not tell us which packages and URLs had the issues. This adds a `--show-issues` option to show URLs with insecure (`http`) URLs or `md5` hashes (which are now deprecated by NIST).
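A quick sketch of how the option might be used; no command output is reproduced here since none is shown in this entry:

```console
# Aggregate URL statistics across all packages
$ spack url stats

# Additionally list the specific packages and URLs that use
# insecure http or deprecated md5 checksums
$ spack url stats --show-issues
```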
2021-09-06 | Update pinned OpenSSL version to 1.1.1l (#25787) | Scott Wittenburg | 2 files (-2/+2)
Update to the latest version of openssl: the previous one (1.1.1k) has been deprecated, so spack can no longer rebuild it from source.
2021-09-02 | start of work to add spack audit packages-https checker (#25670) | Vanessasaurus | 1 file (-1/+10)
This PR will add a new audit, specifically for spack package homepage urls (and eventually other kinds I suspect), to see if there is an http address that can be changed to https. Usage is as follows:

```bash
$ spack audit packages-https <package>
```

And in list view:

```bash
$ spack audit list
generic:
  Generic checks relying on global variables

configs:
  Sanity checks on compilers.yaml
  Sanity checks on packages.yaml

packages:
  Sanity checks on specs used in directives

packages-https:
  Sanity checks on https checks of package urls, etc.
```

I think it would be unwise to include this with the packages audit: when run for all packages it takes a long time, since we make requests. I also like the idea of more well-scoped checks - likely there will be other addresses for http/https within a package that we eventually check.

For now, there are two error cases. One is when an https url is tried but there is some SSL error (or other error that means we cannot update to https):

```bash
$ spack audit packages-https zoltan
PKG-HTTPS-DIRECTIVES: 1 issue found
1. Error with attempting https for "zoltan":
   <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: Hostname mismatch, certificate is not valid for 'www.cs.sandia.gov'. (_ssl.c:1125)>
```

This is either not fixable, or could be fixed with a change to the url or (better) contacting the site owners to ask about some certificate or similar.

The second case is when there is an http that needs to be https, which is a huge issue now, but hopefully not after this spack PR.

```bash
$ spack audit packages-https xman
Package "xman" uses http but has a valid https endpoint.
```

And then when a package is fixed:

```bash
$ spack audit packages-https zlib
PKG-HTTPS-DIRECTIVES: 0 issues found.
```

And that's mostly it. :)

Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
2021-09-01 | Update versions for RAJA, CHAI, Umpire and camp (#25528) | David Beckingsale | 2 files (-4/+4)
2021-08-30 | Pipelines: disable power builds (#25704) | Scott Wittenburg | 1 file (-18/+18)
2021-08-27 | Fix fish test "framework" (#25242) | Harmen Stoppels | 1 file (-34/+16)
Remove broken test, see #21699
2021-08-24 | fix small bug where a line of spack monitor commands is still produced (#25366) | Vanessasaurus | 1 file (-1/+4)
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
2021-08-19 | Use kcov from official Ubuntu 20.04 repository (#25385) | Massimiliano Culpo | 3 files (-5/+8)
* Ubuntu 20.04 provides kcov, so don't build from source
* Use two undocumented options for kcov v3.8
2021-08-19 | buildcache: Add environment-aware buildcache sync command (#25470) | Scott Wittenburg | 1 file (-1/+5)
2021-08-18 | Bootstrap clingo from binaries (#22720) | Massimiliano Culpo | 2 files (-2/+281)
* Bootstrap clingo from binaries
* Move information on clingo binaries to a JSON file
* Add support to bootstrap on Cray

  Bootstrapping on Cray requires, at the moment, swapping the platform when looking for binaries - due to #22800.

* Add SHA256 verification for bootstrapped software

  Use sha256 verification for binaries necessary to bootstrap the concretizer and gpg for signature verification.

* patchelf: use Spec._old_concretize() to bootstrap

  As noted in #24450 we may happen to need the concretizer when bootstrapping clingo. In that case only the old concretizer is available.

* Add a schema for bootstrapping methods

  Two fields have been added to bootstrap.yaml:
  "sources" which lists the methods available for bootstrapping software
  "trusted" which records if a source is trusted or not

  A subcommand has been added to "spack bootstrap" to list the sources currently available.

* Methods used for bootstrapping are configurable from bootstrap:sources

  The function that tries to ensure a given Python module is importable now tries bootstrapping methods in the same order as they are defined in `bootstrap.yaml`.

* Permit to trust/untrust bootstrapping methods
* Add binary tests for MacOS, Ubuntu
* Add documentation
* Add a note on bash
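A hedged sketch of the subcommands described above; the subcommand names follow the wording of this entry ("list", "trust/untrust"), and "github-actions" is only an illustrative source name:

```console
# Show the bootstrapping sources configured in bootstrap.yaml
$ spack bootstrap list

# Mark a source as trusted or untrusted
# ("github-actions" is an example name, not taken from this entry)
$ spack bootstrap trust github-actions
$ spack bootstrap untrust github-actions
```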
2021-08-16 | e4s ci: further expand power stack (#25405) | eugeneswalker | 1 file (-23/+22)
2021-08-16 | Second pass at increasing RADIUSS cloud CI packages (#25321) | Tamara Dahlgren | 1 file (-7/+7)
2021-08-09 | ci pipelines: expand the list of RADIUSS packages (#25282) | Tamara Dahlgren | 1 file (-10/+10)
2021-08-06 | Add New Build Containers Workflow (#24257) | Alec Scott | 1 file (-1/+1)
This pull request adds a new workflow to build and deploy Spack Docker containers from GitHub Actions. In comparison with our current system, where we use Dockerhub's CI to build our Docker containers, this workflow will allow us to build for multiple architectures and deploy to multiple registries. (At the moment x86_64 and Arm64, because ppc64le is throwing an error within archspec.)

As currently set up, the PR will build all of the current containers (minus Centos6, because those yum repositories are no longer available?) as both x86_64 and Arm64 variants. The workflow is currently set up to build and deploy containers nightly from develop as well as on tagged releases. The workflow will also build, but NOT deploy, containers on a pull request for the purposes of testing this PR.

At the moment it is set up to deploy the built containers to GitHub's Container Registry, although support for also uploading to Dockerhub/Quay can be included easily if we decide to keep releasing on Dockerhub or want to begin releasing on Quay.
2021-08-03 | e4s ci stack: update package preferences (#25163) | eugeneswalker | 1 file (-4/+17)
2021-08-02 | ci: Add RADIUSS stack to cloud CI (#23922) | Tamara Dahlgren | 2 files (-2/+131)
Add RADIUSS software stack to gitlab PR testing pipelines
2021-07-30 | CI: capture stdout/stderr output to artifact files (#24401) | Scott Wittenburg | 1 file (-1/+2)
Gitlab truncates job trace output (even the complete raw output) at 4MB, so this change captures it to a file under "user_data" artifacts as well, to make sure we can debug output from the end of the rebuild job.