* Bootstrap clingo from binaries
* Move information on clingo binaries to a JSON file
* Add support to bootstrap on Cray
Bootstrapping on Cray currently requires swapping
the platform when looking for binaries, due
to #22800.
* Add SHA256 verification for bootstrapped software
Use SHA256 verification for the binaries needed to bootstrap
the concretizer, and for GPG (used for signature verification)
* patchelf: use Spec._old_concretize() to bootstrap
As noted in #24450, we may need the
concretizer when bootstrapping clingo. In that case
only the old concretizer is available.
* Add a schema for bootstrapping methods
Two fields have been added to bootstrap.yaml:
"sources", which lists the methods available for
bootstrapping software, and
"trusted", which records whether a source is trusted or not.
A subcommand has been added to "spack bootstrap" to list
the sources currently available (see the usage sketch after this list).
* Methods used for bootstrapping are configurable from bootstrap:sources
The function that tries to ensure a given Python module
is importable now tries bootstrapping methods in the same
order as they are defined in `bootstrap.yaml`
* Allow trusting/untrusting bootstrapping methods
* Add binary tests for macOS, Ubuntu
* Add documentation
* Add a note on bash
|
|
This pull request adds a new workflow to build and deploy Spack Docker containers
from GitHub Actions. Compared with our current system, where we use Docker Hub's
CI to build our Docker containers, this workflow allows us to build for multiple
architectures and deploy to multiple registries. (At the moment x86_64 and Arm64, because
ppc64le throws an error within archspec.)
As currently set up, the PR builds all of the current containers (minus CentOS 6, because
those yum repositories appear to no longer be available) as both x86_64 and Arm64 variants. The
workflow is currently set up to build and deploy containers nightly from develop, as well as
on tagged releases. The workflow will also build, but NOT deploy, containers on a pull request,
for the purposes of testing this PR. At the moment it is set up to deploy the built containers to
GitHub's Container Registry, although support for also uploading to Docker Hub/Quay can be
added easily if we decide to keep releasing on Docker Hub or want to begin releasing on Quay.
|
|
Add RADIUSS software stack to gitlab PR testing pipelines
|
|
Gitlab truncates job trace output (even the complete raw output) at 4MB,
so this change captures it to a file under "user_data" artifacts as well,
to make sure we can debug output from the end of the rebuild job.
|
|
`spack diff` takes two specs, uses `spack.solver.asp.SpackSolverSetup` to generate
lists of facts about each (e.g., nodes, variants, etc.), and then takes the set difference between the
two to show the user the differences.
Example output:
```
$ spack diff python@2.7.8 python@3.8.11
==> Warning: This interface is subject to change.
--- python@2.7.8/tsxdi6gl4lihp25qrm4d6nys3nypufbf
+++ python@3.8.11/yjtseru4nbpllbaxb46q7wfkyxbuvzxx
@@ variant_value @@
- python patches a8c52415a8b03c0e5f28b5d52ae498f7a7e602007db2b9554df28cd5685839b8
+ python patches 0d98e93189bc278fbc37a50ed7f183bd8aaf249a8e1670a465f0db6bb4f8cf87
@@ version @@
- openssl Version(1.0.2u)
+ openssl Version(1.1.1k)
- python Version(2.7.8)
+ python Version(3.8.11)
```
Currently this uses diff-like output, but we will attempt to improve on it in the future.
One use case for `spack diff` is when a user hits a disambiguation situation and cannot
remember how two different installs differ. The command can also output `--json` for
more analysis-oriented use cases where we want to save complete data with all
diffs and the intersection (a usage sketch follows). However, the command is really intended
for command-line use, and we will likely have an analyzer better suited to saving data.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
|
|
Modifications:
- Remove the "build tests" workflow from GitHub Actions
- Setup a similar e2e test on Gitlab
This way we'll reduce load on GitHub Actions workflows, and the e2e tests will
benefit from the buildcache reuse granted by pipelines.
|
|
`spack style` previously used a Travis CI variable to figure out
what the base branch of a PR was, and this was apparently also set
on `develop`. We switched to `GITHUB_BASE_REF` to support GitHub
Actions, but it looks like this is set to `""` in pushes to develop,
so `spack style` breaks there.
This PR does two things:
- [x] Remove `GITHUB_BASE_REF` knowledge from `spack style` entirely
- [x] Handle `GITHUB_BASE_REF` in style scripts instead, and explicitly
pass the base ref if it is present, but don't otherwise.
This makes `spack style` *not* dependent on the environment and fixes
handling of the base branch in the right place.
|
|
This adds a `--root` option so that `spack style` can check style for
a spack instance other than its own.
We also change the inner workings of `spack style` so that `--config FILE`
(and similar options for the various tools) options are used. This ensures
that when `spack style` runs, it always uses the config from the running spack,
and does *not* pick up configuration from the external root.
- [x] add `--root` option to `spack style`
- [x] add `--config` (or similar) option when invoking style tools
- [x] add a test that verifies we can check an external instance
|
|
This uses our bootstrapping logic to automatically install dependencies for
`spack style`. Users should no longer have to pre-install all of the tools
(`isort`, `mypy`, `black`, `flake8`). The command will do it for them.
- [x] add logic to bootstrap specs with specific version requirements in `spack style`
- [x] remove style tools from CI requirements (to ensure we test bootstrapping)
- [x] rework dependencies for `mypy` and `py-typed-ast`
- `py-typed-ast` needs to be a link dependency
- it needs to be at 1.4.1 or higher to work with python 3.9
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
|
|
* build(deps): bump codecov/codecov-action from 1 to 2.0.1
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 1 to 2.0.1.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/master/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v1...v2.0.1)
---
updated-dependencies:
- dependency-name: codecov/codecov-action
dependency-type: direct:production
update-type: version-update:semver-major
...
Signed-off-by: dependabot[bot] <support@github.com>
* Update arguments to codecov action
* Try to delete the symbolic link to root folder
Hopefully this should get rid of the ELOOP error
* Delete also for shell tests and MacOS
* Bump to v2.0.2
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
|
|
* trilinos: rename basker variant
The Basker solver is part of amesos2 but is clearer without the extra
scoping.
* trilinos: automatically enable teuchos and remove variant
Basically everything in trilinos needs teuchos
* trilinos: group top-level dependencies
* trilinos: update dependencies, removing unused
- GLM, X11 are unused (x11 lacks dependency specs too)
- Python variant is more like a TPL so rearrange that
- Gtest internal package shouldn't be compiled or exported
- Add MPI4PY requirement for pytrilinos
* trilinos: remove package meta-options
- XSDK settings and "all opt packages" are not used anywhere
- all optional packages are dangerous
* trilinos: Use hwloc iff kokkos
See #19119; also, the HWLOC TPL name was misspelled, so it was being ignored before.
* Flake
* Fix trilinos +netcdf~mpi
* trilinos: default to disabling external dependencies
* Remove teuchos from downstream dependencies
* fixup! trilinos: Use hwloc iff kokkos
* Add netcdf requirements to packages with ^trilinos+exodus
* trilinos: disable exodus by default
* fixup! Add netcdf requirements to packages with ^trilinos+exodus
* trilinos: only enable hwloc when @13: +kokkos
* xyce: propagate trilinos dependencies more simply
* dtk: fix missing boost dependency
* trilinos: remove explicit metis dependency
* trilinos: require metis/parmetis for zoltan
Disable zoltan by default to minimize default dependencies
* trilinos: mark mesquite disabled and fix kokkos arch
* xsdk: fix trilinos to also list zoltan [with zoltan2]
* ci: remove nonexistent variant from trilinos
* trilinos: add missing boost dependency
Co-authored-by: Satish Balay <balay@mcs.anl.gov>
|
|
* Allow enabling/disabling bootstrapping and customizing the store location
This PR adds configuration handles to allow enabling
and disabling bootstrapping, and to customize the store
location.
* Move bootstrap related configuration into its own YAML file
* Add a bootstrap command to manage configuration
|
|
* Expat: add version 2.4.0, 2.4.1; fix CVE-2013-0340
fixes #24628
* E4S pipeline: update pinned Expat version
|
|
This change makes yum usable again on CentOS 6
|
|
* fix remaining flake8 errors
* imports: sort imports everywhere in Spack
We enabled import order checking in #23947, but fixing things manually drives
people crazy. This used `spack style --fix --all` from #24071 to automatically
sort everything in Spack so PR submitters won't have to deal with it.
This should go in after #24071, as it assumes we're using `isort`, not
`flake8-import-order` to order things. `isort` seems to be more flexible and
allows `llnl` imports to be in their own group before `spack` ones, so this
seems like a good switch.
|
|
This consolidates code across tools in `spack style` so that each
`run_<tool>` function can be called indirectly through a dictionary
of handlers, and so that checks like finding the executable for the
tool can be shared across commands.
- [x] rework `spack style` to use decorators to register tools
- [x] define tool order in one place in `spack style`
- [x] fix python 2/3 issues to get `isort` checks working
- [x] make isort error regex more robust across versions
- [x] remove unused output option
- [x] change vestigial `TRAVIS_BRANCH` to `GITHUB_BASE_REF`
- [x] update completion
|
|
Spack pipelines need to take specific actions internally that depend
on whether the pipeline is being run on a PR to spack or a merge to
the develop branch. Pipelines can also run in other repositories,
which introduces use cases beyond the two mentioned
above. This PR creates a "SPACK_PIPELINE_TYPE" gitlab variable which
is propagated to rebuild jobs and is also used internally to determine
which pipeline-specific tasks to run.
One goal of the PR is to fix an issue where rebuild jobs that failed on
develop pipelines did not properly report the broken full hash to the
"broken-specs-url".
|
|
* remove blueos check on cuda variant, fix typo
* restore necessary compiler guard
* remove axom+cuda from testing because it only partially works outside ppc systems
|
|
Add a new "spack audit" command. This command can check for issues
with configuration or with packages and is intended to help a
user debug a failed Spack build.
In some cases the reported issues are always errors, but are too
costly to check for routinely (e.g. packages that specify missing variants on
dependencies). In other cases the issues may be legitimate but
uncommon usage of Spack, and we want to be sure the user intended the
behavior (e.g. duplicate compiler definitions).
Audits are grouped by theme, and for now the two themes are packages
and configuration. For example you can run all available audits
on packages with "spack audit packages". It is intended that in
the future users will be able to define their own audits.
The package audits are good candidates for running in package_sanity
(i.e. they could catch bugs in user-submitted packages before they
are merged) but that is left for a later PR.
|
|
Building magma has been failing consistently and is currently
blocking PRs from being merged. Disable that spec while we
investigate the failure and work on a fix.
|
|
This should get us most of the way there to support using monitor during a spack container build, for both Singularity and Docker. Some quick notes:
### Docker
Docker works by way of BuildKit and its ability to specify --secret. This means you can prefix a line with a mount of type secret, as follows:
```bash
# Install the software, remove unnecessary deps
RUN --mount=type=secret,id=su --mount=type=secret,id=st cd /opt/spack-environment && spack env activate . && export SPACKMON_USER=$(cat /run/secrets/su) && export SPACKMON_TOKEN=$(cat /run/secrets/st) && spack install --monitor --fail-fast && spack gc -y
```
The id for each secret corresponds to a file mounted at `/run/secrets/<name>`. So, for example, to build this container with su (spackmon user) and st (spackmon token) defined, I would export them on my host and do:
```bash
$ DOCKER_BUILDKIT=1 docker build --network="host" --secret id=st,env=SPACKMON_TOKEN --secret id=su,env=SPACKMON_USER -t spack/container .
```
And when we add `env` to the secret definition that tells the build to look for the secret with id "st" in the environment variable `SPACKMON_TOKEN` for example.
If the user is building locally with a local spack monitor, we also need to set `--network` to host, otherwise you can't connect to it (network isolation, of course).
### Singularity
Singularity doesn't have as nice an ability to clearly specify secrets, so (hoping this eventually gets implemented) what I'm doing now is providing the user instructions to write the credentials to a file, add it to the container to source, and remove when done.
### Tags
Note that the tags PR https://github.com/spack/spack/pull/23712 will need to be merged before `--monitor-tags` will actually work because I'm checking for the attribute (that doesn't exist yet):
```bash
"tags": getattr(args, "monitor_tags", None)
```
So when that PR is merged to update the argument group, it will work here, and I can either update this PR to not check whether the attribute is there (it will be) or open another one in case this PR is already merged.
Finally, I added a bunch of documentation for how to use monitor with containerize. I say "mostly working" because I can't do a full test run with this new version until the container base is built with the updated spack (the request to the monitor server for an env install was missing, so I had to add it here).
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
|
|
This will first support uploads for spack monitor, and eventually could be
used for other kinds of spack uploads
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
|
|
* setup-env: allow users to skip module function setup
* Add documentation on SPACK_SKIP_MODULES
|
|
* Update of Flecsi Spackage
Update of flecsi spackage to reconcile differences between flecsi@1:1.9
and flecsi@2: for future support purposes
* Removing Unnecessary Conditional
Removing an unused conditional. Initially the plan was to switch based on
version in `cmake_args`, but this was not necessary: build system
variable names remained mostly the same, and conflicts prevent the rest.
For the most part, if a variant is there, it does not need to be checked
against the version of the code being built.
* Updated CI To Reconcile Flecsi Changes
Updated CI to target flecsi@1.4.2 which best matches the previous
release version and reconciled change in variant name
|
|
* e4s ci: enable full e4s
* add llvm-amdgpu to list of specs needing an xlarge tagged runner
* comment out qt and qwt because of intermittent build failures
* remove +rocm specs because rocblas job consistently fails due to infrastructure
|
|
This PR allows users to `--export`, `--export-secret`, or both, to export GPG keys
from Spack. The docs are updated to include a warning that this usually does not
need to be done.
This addresses an issue brought up in slack, and also represented in #14721.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
|
|
Currently, module configurations are inconsistent because modulefiles are generated with the configs for the active environment, but are shared among all environments (and spack outside any environment).
This PR fixes that by allowing Spack environments (or other spack config scopes) to define additional sets of modules to generate. Each set of modules can enable either lmod or tcl modules, and contains all of the previously available module configuration. The user defines the name of each module set -- the set configured in Spack by default is named "default", and is the one returned by module manipulation commands in the absence of user intervention.
As part of this change, the module roots configuration moved from the `config` section to inside each module configuration.
Additionally, it adds a feature that the modulefiles for an environment can be configured to be relative to an environment view rather than the underlying prefix. This will not be enabled by default, as it should only be enabled within an environment and for non-default views constructed with separate projections per-spec.
|
|
### Overview
The goal of this PR is to make gitlab pipeline builds (especially build failures) more reproducible outside of the pipeline environment. The two key changes here which aim to improve reproducibility are:
1. Produce a `spack.lock` during pipeline generation which is passed to child jobs via artifacts. This concretized environment is used both by generated child jobs as well as uploaded as an artifact to be used when reproducing the build locally.
2. In the `spack ci rebuild` command, if a spec needs to be rebuilt from source, do this by generating and running an `install.sh` shell script which is then also uploaded as a job artifact to be run during local reproduction.
To make it easier to take advantage of improved build reproducibility, this PR also adds a new subcommand, `spack ci reproduce-build`, which, given a url to job artifacts:
- fetches and unzips the job artifacts to a local directory
- looks for the generated pipeline yaml and parses it to find details about the job to reproduce
- attempts to provide a copy of the same version of spack used in the ci build
- if the ci build used a docker image, the command prints a `docker run` command you can run to get an interactive shell for reproducing the build
#### Some highlights
One consequence of this change will be much smaller pipeline yaml files. By encoding the concrete environment in a `spack.lock` and passing to child jobs via artifacts, we will no longer need to encode the concrete root of each spec and write it into the job variables, greatly reducing the size of the generated pipeline yaml.
Additionally `spack ci rebuild` output (stdout/stderr) is no longer internally redirected to a log file, so job output will appear directly in the gitlab job trace. With debug logging turned on, this often results in log files getting truncated because they exceed the maximum amount of log output gitlab allows. If this is a problem, you still have the option to `tee` command output to a file within the artifacts directory, as now each generated job exposes a `user_data` directory as an artifact, which you can fill with whatever you want in your custom job scripts.
There are some changes to be aware of in how pipelines should be set up after this PR:
#### Pipeline generation
Because the pipeline generation job now writes a `spack.lock` artifact to be consumed by generated downstream jobs, `spack ci generate` takes a new option `--artifacts-root`, inside which it creates a `concrete_env` directory to place the lockfile. This artifacts root directory is also where the `user_data` directory will live, in case you want to generate any custom artifacts. If you do not provide `--artifacts-root`, the default is for it to create a `jobs_scratch_dir` within your `CI_PROJECT_DIR` (a gitlab predefined environment variable) or whatever is your current working directory if that variable isn't set. Here's the diff of the PR testing `.gitlab-ci.yml` taking advantage of the new option:
```
$ git diff develop..pipelines-reproducible-builds share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
diff --git a/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml b/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
index 579d7b56f3..0247803a30 100644
--- a/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
+++ b/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
@@ -28,10 +28,11 @@ default:
- cd share/spack/gitlab/cloud_pipelines/stacks/${SPACK_CI_STACK_NAME}
- spack env activate --without-view .
- spack ci generate --check-index-only
+ --artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
--output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
artifacts:
paths:
- - "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
+ - "${CI_PROJECT_DIR}/jobs_scratch_dir"
tags: ["spack", "public", "medium", "x86_64"]
interruptible: true
```
Notice how we replaced the specific pointer to the generated pipeline file with its containing folder, the same folder we passed as `--artifacts-root`. This way anything in that directory (the generated pipeline yaml, as well as the concrete environment directory containing the `spack.lock`) will be uploaded as an artifact and available to the downstream jobs.
#### Rebuild jobs
Rebuild jobs now must activate the concrete environment created by `spack ci generate` and provided via artifacts. When the pipeline is generated, a directory called `concrete_environment` is created within the artifacts root directory, and this is where the `spack.lock` file is written to be passed to the generated rebuild jobs. The artifacts root directory can be specified using the `--artifacts-root` option to `spack ci generate`, otherwise, it is assumed to be `$CI_PROJECT_DIR`. The directory containing the concrete environment files (`spack.yaml` and `spack.lock`) is then passed to generated child jobs via the `SPACK_CONCRETE_ENV_DIR` variable in the generated pipeline yaml file.
When you don't provide custom `script` sections in your `mappings` within the `gitlab-ci` section of your `spack.yaml`, the default behavior of rebuild jobs is now to change into `SPACK_CONCRETE_ENV_DIR` and activate that environment. If you do provide custom rebuild scripts in your `spack.yaml`, be aware those scripts should do the same thing: assume `SPACK_CONCRETE_ENV_DIR` contains the concretized environment to activate. No other changes to existing custom rebuild scripts should be required as a result of this PR.
As mentioned above, one key change made in this PR is the generation of the `install.sh` script by the rebuild jobs, as that same script is both run by the CI rebuild job as well as exported as an artifact to aid in subsequent attempts to reproduce the build outside of CI. The generated `install.sh` script contains only a single `spack install` command with arguments computed by `spack ci rebuild`. If the install fails, the job trace in gitlab will contain instructions on how to reproduce the build locally:
```
To reproduce this build locally, run:
spack ci reproduce-build https://gitlab.next.spack.io/api/v4/projects/7/jobs/240607/artifacts [--working-dir <dir>]
If this project does not have public pipelines, you will need to first:
export GITLAB_PRIVATE_TOKEN=<generated_token>
... then follow the printed instructions.
```
When run locally, the `spack ci reproduce-build` command shown above will download and process the job artifacts from gitlab, then print out instructions you can copy-paste to run a local reproducer of the CI job.
This PR includes a few other changes to the way pipelines work, see the documentation on pipelines for more details.
This PR relies on:
~- [ ] #23194 to be able to refer to uninstalled specs by DAG hash~
EDIT: that is going to take longer to come to fruition, so for now we will continue to install specs represented by a concrete `spec.yaml` file on disk.
- [x] #22657 to support installing a single spec already present in the active, concrete environment
|
|
I would like to be able to export (and save and then load programmatically)
spack blame metadata, so this commit adds a `spack blame --json` argument,
along with developer docs for it.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
|
|
This work will come in two phases. The first, here, allows saving of a local result
with spack monitor; the second will add a spack monitor command so the user can
do `spack monitor upload`.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
|
|
This will be useful for running multiple build experiments and organizing them by name
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
|
|
This reverts commit cefbe48c89209dc3df654795644973b1885cdea4.
|
|
Currently, module configurations are inconsistent because modulefiles are generated with the configs for the active environment, but are shared among all environments (and spack outside any environment).
This PR fixes that by allowing Spack environments (or other spack config scopes) to define additional sets of modules to generate. Each set of modules can enable either lmod or tcl modules, and contains all of the previously available module configuration. The user defines the name of each module set -- the set configured in Spack by default is named "default", and is the one returned by module manipulation commands in the absence of user intervention.
As part of this change, the module roots configuration moved from the `config` section to inside each module configuration.
Additionally, it adds a feature that the modulefiles for an environment can be configured to be relative to an environment view rather than the underlying prefix. This will not be enabled by default, as it should only be enabled within an environment and for non-default views constructed with separate projections per-spec.
TODO:
- [x] code changes to support multiple module sets
- [x] code changes to support modules relative to a view
- [x] Tests for multiple module configurations
- [x] Tests for modules relative to a view
- [x] Backwards compatibility for module roots from config section
- [x] Backwards compatibility for default module set without the name specified
- [x] Tests for backwards compatibility
|
|
In an active concretized environment, support installing one or more
cli specs only if they are already present in the environment. The
`--no-add` option is the default for root specs, but optional for
dependency specs. I.e. if you `spack install <depspec>` in an
environment, the dependency-only spec `depspec` will be added as a
root of the environment before being installed. In addition,
`spack install --no-add <spec>` fails if it does not find an
unambiguous match for `spec`.
|
|
Also update the mpileaks unit test to avoid a conflict on CentOS 6
where Dyninst >=11.0.0 no longer builds due to a compiler version
conflict.
|