path: root/share
Age | Commit message | Author | Files | Lines
2022-08-08 | e4s oneapi ci: unify when possible (#31965) | eugeneswalker | 1 | -1/+1
2022-08-03 | asp: refactor low level API to permit the injection of configuration | Massimiliano Culpo | 1 | -1/+1
This allows writing extension commands that can benchmark different configurations in clingo, or try different configurations for a single test.
2022-08-03 | sundials@6.1.0:6.2.0 +rocm: patch nvector to use pic (#31910) | eugeneswalker | 1 | -0/+1
* sundials@6.1.0:6.2.0 +rocm: patch nvector to use pic
* e4s ci: add sundials +rocm
2022-08-02 | e4s oneapi ci: build vtk-m ~openmp due to issue #31830 (#31840) | eugeneswalker | 1 | -2/+3
2022-08-02 | butterflypack %oneapi: patch CMakeLists to solve issue #31818 (#31848) | eugeneswalker | 1 | -3/+2
* butterflypack %oneapi: patch CMakeLists to solve issue #31818
* uncomment builds affected by failing butterflypack
Co-authored-by: e <e>
2022-08-01 | e4s oneapi stack: remove notes for now-fixed builds (#31839) | eugeneswalker | 1 | -3/+1
2022-07-31 | style: simplify arguments with `--tool TOOL` and `--skip TOOL` | Todd Gamblin | 1 | -1/+1
`spack style` tests were annoyingly brittle because we could not easily be specific about which tools to run (we had to use `--no-black`, `--no-isort`, `--no-flake8`, and `--no-mypy`). We should be able to specify what to run OR what to skip. Now you can run, e.g.:
    spack style --tool black,flake8
or:
    spack style --skip black,isort
- [x] Remove `--no-black`, `--no-isort`, `--no-flake8`, and `--no-mypy` args.
- [x] Add `--tool TOOL` argument.
- [x] Add `--skip TOOL` argument.
- [x] Allow either `--tool black --tool flake8` or `--tool black,flake8` syntax.
2022-07-31 | black: fix style check package and flake8 formatting for black | Todd Gamblin | 1 | -34/+5
Black will automatically fix a lot of the exceptions we previously allowed for directives, so we don't need them in our custom `flake8_formatter` anymore.
- [x] remove `E501` (long line) exceptions for directives from `flake8_formatter`, as they won't help us now.
- [x] Refine exceptions for long URLs in the `flake8_formatter`.
- [x] Adjust the mock `flake8-package` to exhibit the exceptions we still allow.
- [x] Update style tests for new `flake8-package`.
- [x] Blacken style test.
2022-07-31 | black: configuration | Todd Gamblin | 1 | -1/+1
This adds the configuration necessary for flake8 and black to work together. It also sets the line length to 99, per the data here:
* https://github.com/spack/spack/pull/24718#issuecomment-876933636
Given the data and the spirit of black's 88-character limit, we set the limit to 99 characters for all of Spack, because:
* 99 is one less than 100, a nice round number, and all lines will fit in a 100-character wide terminal (even when the text editor puts a \ at EOL).
* 99 is just past the knee of the file size curve for packages, which means packages remain readable and not significantly longer than they are now.
* It doesn't seem to hurt core -- files in core might change length by a few percent but seem like they'll be mostly the same as before -- just a bit more roomy.
- [x] set line length to 99
- [x] remove most exceptions from `.flake8` and add the ones black cares about
- [x] add `[tool.black]` to `pyproject.toml`
- [x] make `black` run if available in `spack style --fix`
Co-Authored-By: Tom Scogland <tscogland@llnl.gov>
2022-07-29 | e4s oneapi ci: uncomment pdt (#31803) | eugeneswalker | 1 | -2/+6
* e4s oneapi ci: uncomment pdt
* load oneapi compiler module before executing `spack ci rebuild`
2022-07-29 | e4s ci: add tasmanian +rocm (#31606) | eugeneswalker | 1 | -0/+1
2022-07-29 | e4s oneapi ci: uncomment parallel-netcdf (#31804) | eugeneswalker | 1 | -2/+1
2022-07-29 | e4s ci: add oneapi stack (#31781) | eugeneswalker | 2 | -0/+503
* e4s ci: add oneapi stack
* shorten padded_length to 256
* comment out pdt and add failure note
2022-07-27 | e4s ci: add slate +rocm (#31602) | eugeneswalker | 1 | -0/+1
2022-07-26 | e4s ci stack: add spec: hdf5-vol-async (#31747) | eugeneswalker | 1 | -0/+1
* e4s ci stack: add spec: hdf5-vol-async
* hdf5-vol-async: add e4s tag
2022-07-20 | spack stage: add missing --fresh and --reuse (#31626) | Harmen Stoppels | 1 | -1/+1
2022-07-18 | e4s ci: add ginkgo +rocm (#31603) | eugeneswalker | 1 | -0/+1
2022-07-18 | containerize: fix missing environment activation (#31596) | Massimiliano Culpo | 1 | -2/+1
2022-07-14 | update e4s to reflect june status (#31032) | eugeneswalker | 2 | -182/+190
2022-07-07 | removing feature bloat: monitor and analyzers (#31130) | Vanessasaurus | 3 | -33/+5
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
2022-06-30 | gitlab: Fix mirror url to match stack name (#31399) | Scott Wittenburg | 1 | -2/+2
We adopted the convention of putting binaries for each stack into a dedicated mirror named after the directory in which the stack (spack.yaml file) resides. This fixes the mirror url of the radiuss-aws-aarch64 stack to follow that convention.
2022-06-29 | Update containerize templates to account for view indirection (#31321) | Massimiliano Culpo | 2 | -0/+2
fixes #30965
2022-06-29 | Modify dockerfile template, so that any command can be executed (#29741) | Marco De La Pierre | 1 | -1/+2
2022-06-28 | AWS RADIUSS builds (#31114) | David Beckingsale | 3 | -0/+398
* Add AWS RADIUSS builds
* Correct variable naming
* Add two more MFEM specs
* Updates to MFEM spec suggested by @v-dobrev
* Simplify MFEM specs
2022-06-01 | CPU & memory requests for jobs that generate GitLab CI pipelines (#30940) | Zack Galbreath | 1 | -0/+3
gitlab ci: make sure pipeline generation isn't resource starved
2022-05-30 | Added AWS-AHUG alinux2 pipeline (#24601) | Evan Bollig | 3 | -0/+737
Add spack stacks targeted at Spack + AWS + ARM HPC User Group hackathon. Includes a list of miniapps and full-apps that are ready to run on both x86_64 and aarch64. Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
2022-05-28 | Alinux isc buildcache (#30462) | Evan Bollig | 3 | -0/+604
Add two new stacks targeted at x86_64 and arm, representing an initial list of packages used by current and planned AWS Workshops, and built in conjunction with the ISC22 announcement of the spack public binary cache. Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
2022-05-28 | refactor: packages import `spack.package` explicitly (#30404) | Tom Scogland | 1 | -3/+3
Explicitly import package utilities in all packages, and corresponding fallout. This includes:
* rename `spack.package` to `spack.package_base`
* rename `spack.pkgkit` to `spack.package`
* update all packages in builtin, builtin_mock and tutorials to include `from spack.package import *`
* update spack style
* ensure packages include the import
* automatically add the new import and remove any/all imports of `spack` and `spack.pkgkit` from packages when using `--fix`
* add support for type-checking packages with mypy when SPACK_MYPY_CHECK_PACKAGES is set in the environment
* fix all type checking errors in packages in spack upstream
* update spack create to include the new imports
* update spack repo to inject the new import, injection persists to allow for a deprecation period
Original message below:
As requested @adamjstewart, update all packages to use pkgkit. I ended up using isort to do this, so repro is easy:
```console
$ isort -a 'from spack.pkgkit import *' --rm 'spack' ./var/spack/repos/builtin/packages/*/package.py
$ spack style --fix
```
There were several line spacing fixups caused either by space manipulation in isort or by packages that haven't been touched since we added requirements, but there are no functional changes in here.
* [x] add config to isort to make sure this is maintained going forward
2022-05-28 | update tutorial command for v0.18.0 and new gpg key (#30904) | Greg Becker | 1 | -34/+161
2022-05-26 | ci: Support secure binary signing on protected pipelines (#30753) | Scott Wittenburg | 8 | -94/+212
This PR supports the creation of securely signed binaries built from spack develop as well as release branches and tags. Specifically:
- remove internal pr mirror url generation logic in favor of buildcache destination on command line
- with a single mirror url specified in the spack.yaml, this makes it clearer where binaries from various pipelines are pushed
- designate some tags as reserved: ['public', 'protected', 'notary']
- these tags are stripped from all jobs by default and provisioned internally based on pipeline type
- update gitlab ci yaml to include pipelines on more protected branches than just develop (so include releases and tags)
- binaries from all protected pipelines are pushed into mirrors including the branch name so releases, tags, and develop binaries are kept separate
- update rebuild jobs running on protected pipelines to run on special runners provisioned with an intermediate signing key
- protected rebuild jobs no longer use "SPACK_SIGNING_KEY" env var to obtain signing key (in fact, the final signing key is nowhere available to rebuild jobs)
- these intermediate signatures are verified at the end of each pipeline by a new signing job to ensure binaries were produced by a protected pipeline
- optionally schedule a signing/notary job at the end of the pipeline to sign all packages in the mirror
- add signing-job-attributes to gitlab-ci section of spack environment to allow configuration (see the sketch below)
- signing job runs on a special runner (separate from protected rebuild runners) provisioned with the public intermediate key and the secret signing key
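The `signing-job-attributes` key named in the list above is the configuration hook this PR adds. The sketch below shows roughly where it might sit in a pipeline's spack.yaml; the attribute names and values under it (tags, image) are assumptions modeled on other gitlab-ci job attributes, not taken from this log.
```yaml
# Hedged sketch: configuring the signing job in a Spack environment.
# `gitlab-ci` and `signing-job-attributes` come from the PR description;
# the `tags` and `image` entries below are illustrative placeholders only.
spack:
  gitlab-ci:
    signing-job-attributes:
      tags: [notary-runner]                     # dedicated signing/notary runner, per the PR
      image: example.org/spack/notary:latest    # hypothetical image name
```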
2022-05-24 | Add a command to generate a local mirror for bootstrapping (#28556) | Massimiliano Culpo | 4 | -1/+52
This PR builds on #28392 by adding a convenience command to create a local mirror that can be used to bootstrap Spack. This overcomes the inconvenience of setting up the mirror manually, which has been reported when trying to set up Spack on air-gapped systems.
Using this PR, the user can create a bootstrapping mirror on a machine with internet access by:
% spack bootstrap mirror --binary-packages /opt/bootstrap
==> Adding "clingo-bootstrap@spack+python %apple-clang target=x86_64" and dependencies to the mirror at /opt/bootstrap/local-mirror
==> Adding "gnupg@2.3: %apple-clang target=x86_64" and dependencies to the mirror at /opt/bootstrap/local-mirror
==> Adding "patchelf@0.13.1:0.13.99 %apple-clang target=x86_64" and dependencies to the mirror at /opt/bootstrap/local-mirror
==> Adding binary packages from "https://github.com/alalazo/spack-bootstrap-mirrors/releases/download/v0.1-rc.2/bootstrap-buildcache.tar.gz" to the mirror at /opt/bootstrap/local-mirror
To register the mirror on the platform where it's supposed to be used, run the following command(s):
% spack bootstrap add --trust local-sources /opt/bootstrap/metadata/sources
% spack bootstrap add --trust local-binaries /opt/bootstrap/metadata/binaries
The mirror has to be moved over to the air-gapped system and registered using the commands shown at the prompt. The command has options to:
1. Add pre-built binaries downloaded from Github (default is not to add them)
2. Add development dependencies for Spack (currently the Python packages needed to use spack style)
* bootstrap: refactor bootstrap.yaml to move sources metadata out
* bootstrap: allow adding/removing custom bootstrapping sources. This operation can be performed from the command line since new subcommands have been added to `spack bootstrap`
* Add --trust argument to spack bootstrap add
* Add a command to generate a local mirror for bootstrapping
* Add a unit test for mirror creation
2022-05-24 | Best effort co-concretization (iterative algorithm) (#28941) | Massimiliano Culpo | 1 | -1/+1
Currently, environments can either be concretized fully together or fully separately. This works well for users who create environments for interoperable software and can use `concretizer:unify:true`. It does not allow environments with conflicting software to be concretized for maximal interoperability.
The primary use-case for this is facilities providing system software. Facilities provide multiple MPI implementations, but all software built against a given MPI ought to be interoperable.
This PR adds a concretization option `concretizer:unify:when_possible`. When this option is used, Spack will concretize specs in the environment separately, but will optimize for minimal differences in overlapping packages (see the sketch below).
* Add a level of indirection to root specs. This commit introduces the "literal" atom, which comes with a few different "arities". The unary "literal" contains an integer that is the ID of a spec literal. Other "literals" contain information on the requests made by literal ID. For instance zlib@1.2.11 generates the following facts:
    literal(0,"root","zlib").
    literal(0,"node","zlib").
    literal(0,"node_version_satisfies","zlib","1.2.11").
  This should help with solving large environments "together where possible", since later literals can now be solved together in batches.
* Add a mechanism to relax the number of literals being solved
* Modify spack solve to display the new criteria. Since the new criteria is above all the build criteria, we need to modify the way we display the output. Originally done by Greg in #27964 and cherry-picked to this branch by the co-author of the commit.
* Inject reusable specs into the solve. Instead of coupling the PyclingoDriver() object with spack.config, inject the concrete specs that can be reused. A method-level function takes care of reading from the store and the buildcache.
* spack solve: show output of multi-rounds
* add tests for best-effort coconcretization
* Enforce having at least a literal being solved
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Co-authored-by: Greg Becker <becker33@llnl.gov>
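A minimal spack.yaml sketch of the option introduced above. The three `concretizer:unify` values mentioned in this log are true (fully together), false (fully separately), and the new when_possible; the example specs are illustrative only.
```yaml
# Sketch: two conflicting MPI stacks concretized "together where possible".
spack:
  specs:
    - hdf5 ^openmpi
    - hdf5 ^mpich
  concretizer:
    unify: when_possible   # true = fully together, false = fully separate
```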
2022-05-23 | Revert "Added cloud_pipline for E4S on Amazon Linux (#29522)" (#30796) | Scott Wittenburg | 3 | -985/+0
This reverts commit 07e9c0695af09bbf5d1ebe51bb6f2b34eb88a3df.
2022-05-23 | Added cloud_pipline for E4S on Amazon Linux (#29522) | Evan Bollig | 3 | -0/+985
Add two new cloud pipelines for E4S on Amazon Linux, including arm and x86 (v3 + v4) stacks. Notes:
- Updated mpark-variant to remove a conflict that no longer exists in Amazon Linux
- The which command on Amazon Linux prefixes all results when padded_length is too high. In this case, padded_length <= 503 works as expected; a conservative length of 384 was chosen (see the sketch below).
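A sketch of where the padded_length value mentioned above is typically set; the assumption here is that it lives under config:install_tree in the stack's spack.yaml, and the value shown is the one chosen in this commit.
```yaml
# Sketch (assumed location): pad install prefixes so relocated binaries fit.
spack:
  config:
    install_tree:
      padded_length: 384   # conservative value for the Amazon Linux stacks
```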
2022-05-23 | Deprecate `spack:concretization` over `concretizer:unify` (#30038) | Harmen Stoppels | 7 | -7/+7
* Introduce concretizer:unify option to replace spack:concretization (see the sketch below)
* Deprecate concretization
* Make spack:concretization overrule concretizer:unify for now
* Add environment update logic to move from spack:concretization to spack:concretizer:reuse
* Migrate spack:concretization to spack:concretizer:unify in all locations
* For new environments make concretizer:unify explicit, so that defaults can be changed in 0.19
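A side-by-side sketch of the migration described above, assuming the common `together` value for the old key:
```yaml
# Old, now-deprecated form:
# spack:
#   concretization: together
#
# New form that replaces it:
spack:
  concretizer:
    unify: true
```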
2022-05-23 | ci: Map visit to huge instance for the data-vis-sdk pipeline (#30779) | Chuck Atkins | 1 | -0/+1
2022-05-20 | visit: Overhaul to build in the DAV SDK (#30594) | Chuck Atkins | 1 | -19/+18
* mesa-glu and mesa-demos: Fix conflicts with glu and osmesa
* visit: Update visit dependencies
* ecp-data-vis-sdk: Enable +visit
* ci[data-vis-sdk]: Enable +visit
2022-05-20 | errors: model error messages as an optimization problem (#30669) | Greg Becker | 1 | -1/+1
Error messages for the clingo concretizer have proven challenging. The current messages are incredibly vague and often don't help users at all. Unsat cores in clingo are not guaranteed to be minimal, and lead to cores that are either not useful or need to be post-processed for hours to reach a minimal core.
Following up on an idea from a Slack conversation with kwryankrattiger, this PR takes a new approach. We eliminate most integrity constraints and minima/maxima on choice rules in clingo, and instead force invalid states to imply an error predicate. The error predicate can include context on the cause of the error (Package, Version, etc). These error predicates are then heavily optimized against, to ensure that we do not include error facts in the solution when a solution with no error facts could be generated. When post-processing the clingo solution to construct specs, any error facts cause the program to raise an exception. This leads to much more legible error messages.
Each error predicate includes a priority and an error message. The error message is formatted using the remaining arguments to produce the final text. The priority is used to ensure that when clingo has a choice of which rules to violate, it chooses the one which will be most informative to the user.
Performance: "fresh" concretizations appear to suffer a ~20% performance penalty under this branch, while "reuse" concretizations see a speedup of around 33%.
Possible optimizations if users still see unhelpful messages: there are currently 3 priority levels for the error messages. Additional priorities are possible, and can allow finer granularity to ensure more informative error messages are provided in lieu of less informative ones.
Future work: improve tests to ensure that every possible rule implying an error message is exercised.
2022-05-14 | Update GitLab environment variable name (#30671) | Zack Galbreath | 1 | -2/+2
Use the IAM credentials that correspond to our new binary mirror (s3://spack-binaries vs. s3://spack-binaries-develop)
2022-05-13 | Remove all uses of `runtime_hash`; document lockfile formats and fix tests | Todd Gamblin | 1 | -1/+1
This removes all but one usage of runtime hash. The runtime hash was being used to write historical lockfiles for tests, but we don't need it for that; we can just save those lockfiles.
- [x] add legacy lockfiles for v1, v2, v3
- [x] fix bugs with v1 lockfile tests (the dummy lockfile we were writing was not actually a v1 lockfile because it used the new spec file format).
- [x] remove all but one runtime_hash usage -- that one needs a small rework of the concretizer to really fix, as it's about separate concretization of build dependencies.
- [x] Document the history of the lockfile format in `environment/__init__.py`
2022-05-13 | gitlab ci: switch over to new bucket for all stacks | Scott Wittenburg | 7 | -7/+7
2022-05-13 | hashes: remove full_hash and build_hash from spack | Scott Wittenburg | 1 | -2/+2
2022-05-13 | Reuse concretization by default (#30396) | Massimiliano Culpo | 7 | -0/+21
* Enable reuse by default in Spack
* Update documentation to match new default
* Configure pipelines not to reuse software (see the sketch below)
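A minimal sketch of the concretizer setting this change flips and of the opt-out the pipelines use; the key shown is `concretizer:reuse`, and `false` restores the old always-rebuild behavior.
```yaml
# Sketch: reuse of already-installed specs is now the default.
concretizer:
  reuse: true     # new default behavior
# Pipelines opt out so every spec is always rebuilt:
# concretizer:
#   reuse: false
```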
2022-05-13 | tutorial stack: allow deprecated versions (#30648) | Todd Gamblin | 1 | -0/+3
For tutorial builds, we should continue to allow deprecated builds to be installed. We can update them as needed when we update the tutorial, but we don't need to correct them immediately on deprecation in CI.
- [x] add `deprecated:true` to tutorial `spack.yaml` config (see the sketch below).
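A sketch of the one-line setting named in the checklist above, assuming it sits under the config section of the tutorial spack.yaml:
```yaml
# Sketch: allow deprecated package versions to concretize and install in the tutorial stack.
spack:
  config:
    deprecated: true
```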
2022-05-10 | gitlab ci: do not override .generate tags for e4s (#30571) | Scott Wittenburg | 1 | -1/+0
2022-05-09 | e4s on mac ci: set SPACK_DISABLE_LOCAL_CONFIG=1 (#30568) | eugeneswalker | 2 | -4/+43
* e4s on mac ci: set SPACK_DISABLE_LOCAL_CONFIG=1
* export SPACK_USER_CACHE_PATH so that ~/.spack/... isn't used
2022-05-06 | ci: Enable the ParaView GUI in the DAVSDK pipeline (#30473) | Chuck Atkins | 1 | -5/+7
* ci: Enable the ParaView GUI in the DAVSDK pipeline
* qt: Patch for long paths in ci
2022-05-05 | Makefile generator for parallel spack install of environments (#30254) | Harmen Stoppels | 1 | -1/+5
`make` solves a lot of headaches that would otherwise have to be implemented in Spack:
1. Parallelism over packages through multiple `spack install` processes
2. Orderly output of parallel package installs thanks to `make --sync-output=recurse` or `make -Orecurse` (works well in GNU Make 4.3; macOS is unfortunately on a 16-year-old 3.x version, but it's one `spack install gmake` away...)
3. Shared jobserver across packages, which means a single `-j` to rule them all, instead of manually finding a balance between `#spack install processes` & `#jobs per package` (see #30302).
This PR adds the `spack env depfile` command that generates a Makefile with dag hashes as targets, and dag hashes of dependencies as prerequisites, and a command along the lines of `spack install --only=packages /hash` to just install a single package. It exposes two convenient phony targets: `all` and `fetch-all`. The former installs the environment, the latter just fetches all sources. So one can either use `make all -j16` directly or run `make fetch-all -j16` on a login node and `make all -j16` on a compute node.
Example:
```yaml
spack:
  specs: [perl]
  view: false
```
running
```
$ spack -e . env depfile --make-target-prefix env | tee Makefile
```
generates
```Makefile
SPACK ?= spack

.PHONY: env/all env/fetch-all env/clean

env/all: env/env

env/fetch-all: env/fetch

env/env: env/.install/cdqldivylyxocqymwnfzmzc5sx2zwvww
	@touch $@

env/fetch: env/.fetch/cdqldivylyxocqymwnfzmzc5sx2zwvww env/.fetch/gv5kin2xnn33uxyfte6k4a3bynhmtxze env/.fetch/cuymc7e5gupwyu7vza5d4vrbuslk277p env/.fetch/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk env/.fetch/hyb7ehxxyqqp2hiw56bzm5ampkw6cxws env/.fetch/yfz2agazed7ohevqvnrmm7jfkmsgwjao env/.fetch/73t7ndb5w72hrat5hsax4caox2sgumzu env/.fetch/trvdyncxzfozxofpm3cwgq4vecpxixzs env/.fetch/sbzszb7v557ohyd6c2ekirx2t3ctxfxp env/.fetch/c4go4gxlcznh5p5nklpjm644epuh3pzc
	@touch $@

env/dirs:
	@mkdir -p env/.fetch env/.install

env/.fetch/%: | env/dirs
	$(info Fetching $(SPEC))
	$(SPACK) -e '/tmp/tmp.7PHPSIRACv' fetch $(SPACK_FETCH_FLAGS) /$(notdir $@) && touch $@

env/.install/%: env/.fetch/%
	$(info Installing $(SPEC))
	+$(SPACK) -e '/tmp/tmp.7PHPSIRACv' install $(SPACK_INSTALL_FLAGS) --only-concrete --only=package --no-add /$(notdir $@) && touch $@

# Set the human-readable spec for each target
env/%/cdqldivylyxocqymwnfzmzc5sx2zwvww: SPEC = perl@5.34.1%gcc@10.3.0+cpanm+shared+threads arch=linux-ubuntu20.04-zen2
env/%/gv5kin2xnn33uxyfte6k4a3bynhmtxze: SPEC = berkeley-db@18.1.40%gcc@10.3.0+cxx~docs+stl patches=b231fcc arch=linux-ubuntu20.04-zen2
env/%/cuymc7e5gupwyu7vza5d4vrbuslk277p: SPEC = bzip2@1.0.8%gcc@10.3.0~debug~pic+shared arch=linux-ubuntu20.04-zen2
env/%/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk: SPEC = diffutils@3.8%gcc@10.3.0 arch=linux-ubuntu20.04-zen2
env/%/hyb7ehxxyqqp2hiw56bzm5ampkw6cxws: SPEC = libiconv@1.16%gcc@10.3.0 libs=shared,static arch=linux-ubuntu20.04-zen2
env/%/yfz2agazed7ohevqvnrmm7jfkmsgwjao: SPEC = gdbm@1.19%gcc@10.3.0 arch=linux-ubuntu20.04-zen2
env/%/73t7ndb5w72hrat5hsax4caox2sgumzu: SPEC = readline@8.1%gcc@10.3.0 arch=linux-ubuntu20.04-zen2
env/%/trvdyncxzfozxofpm3cwgq4vecpxixzs: SPEC = ncurses@6.2%gcc@10.3.0~symlinks+termlib abi=none arch=linux-ubuntu20.04-zen2
env/%/sbzszb7v557ohyd6c2ekirx2t3ctxfxp: SPEC = pkgconf@1.8.0%gcc@10.3.0 arch=linux-ubuntu20.04-zen2
env/%/c4go4gxlcznh5p5nklpjm644epuh3pzc: SPEC = zlib@1.2.12%gcc@10.3.0+optimize+pic+shared patches=0d38234 arch=linux-ubuntu20.04-zen2

# Install dependencies
env/.install/cdqldivylyxocqymwnfzmzc5sx2zwvww: env/.install/gv5kin2xnn33uxyfte6k4a3bynhmtxze env/.install/cuymc7e5gupwyu7vza5d4vrbuslk277p env/.install/yfz2agazed7ohevqvnrmm7jfkmsgwjao env/.install/c4go4gxlcznh5p5nklpjm644epuh3pzc
env/.install/cuymc7e5gupwyu7vza5d4vrbuslk277p: env/.install/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk
env/.install/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk: env/.install/hyb7ehxxyqqp2hiw56bzm5ampkw6cxws
env/.install/yfz2agazed7ohevqvnrmm7jfkmsgwjao: env/.install/73t7ndb5w72hrat5hsax4caox2sgumzu
env/.install/73t7ndb5w72hrat5hsax4caox2sgumzu: env/.install/trvdyncxzfozxofpm3cwgq4vecpxixzs
env/.install/trvdyncxzfozxofpm3cwgq4vecpxixzs: env/.install/sbzszb7v557ohyd6c2ekirx2t3ctxfxp

env/clean:
	rm -f -- env/env env/fetch env/.fetch/cdqldivylyxocqymwnfzmzc5sx2zwvww env/.fetch/gv5kin2xnn33uxyfte6k4a3bynhmtxze env/.fetch/cuymc7e5gupwyu7vza5d4vrbuslk277p env/.fetch/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk env/.fetch/hyb7ehxxyqqp2hiw56bzm5ampkw6cxws env/.fetch/yfz2agazed7ohevqvnrmm7jfkmsgwjao env/.fetch/73t7ndb5w72hrat5hsax4caox2sgumzu env/.fetch/trvdyncxzfozxofpm3cwgq4vecpxixzs env/.fetch/sbzszb7v557ohyd6c2ekirx2t3ctxfxp env/.fetch/c4go4gxlcznh5p5nklpjm644epuh3pzc env/.install/cdqldivylyxocqymwnfzmzc5sx2zwvww env/.install/gv5kin2xnn33uxyfte6k4a3bynhmtxze env/.install/cuymc7e5gupwyu7vza5d4vrbuslk277p env/.install/7vangk4jvsdgw6u6oe6ob63pyjl5cbgk env/.install/hyb7ehxxyqqp2hiw56bzm5ampkw6cxws env/.install/yfz2agazed7ohevqvnrmm7jfkmsgwjao env/.install/73t7ndb5w72hrat5hsax4caox2sgumzu env/.install/trvdyncxzfozxofpm3cwgq4vecpxixzs env/.install/sbzszb7v557ohyd6c2ekirx2t3ctxfxp env/.install/c4go4gxlcznh5p5nklpjm644epuh3pzc
```
Then with `make -O` you get very nice orderly output when packages are built in parallel:
```console
$ make -Orecurse -j16
spack -e . install --only-concrete --only=package /c4go4gxlcznh5p5nklpjm644epuh3pzc && touch c4go4gxlcznh5p5nklpjm644epuh3pzc
==> Installing zlib-1.2.12-c4go4gxlcznh5p5nklpjm644epuh3pzc
...
  Fetch: 0.00s.  Build: 0.88s.  Total: 0.88s.
[+] /tmp/tmp.b1eTyAOe85/store/linux-ubuntu20.04-zen2/gcc-10.3.0/zlib-1.2.12-c4go4gxlcznh5p5nklpjm644epuh3pzc
spack -e . install --only-concrete --only=package /sbzszb7v557ohyd6c2ekirx2t3ctxfxp && touch sbzszb7v557ohyd6c2ekirx2t3ctxfxp
==> Installing pkgconf-1.8.0-sbzszb7v557ohyd6c2ekirx2t3ctxfxp
...
  Fetch: 0.00s.  Build: 3.96s.  Total: 3.96s.
[+] /tmp/tmp.b1eTyAOe85/store/linux-ubuntu20.04-zen2/gcc-10.3.0/pkgconf-1.8.0-sbzszb7v557ohyd6c2ekirx2t3ctxfxp
```
For Perl, at least for me, using `make -j16` versus `spack -e . install -j16` speeds up the builds from 3m32.623s to 2m22.775s, as some configure scripts run in parallel.
Another nice feature is you can do Makefile "metaprogramming" and depend on packages built by Spack. This example fetches all sources (in parallel) first, prints a message, and only then builds packages (in parallel).
```Makefile
SPACK ?= spack

.PHONY: env

all: env

spack.lock: spack.yaml
	$(SPACK) -e . concretize -f

env.mk: spack.lock
	$(SPACK) -e . env depfile -o $@ --make-target-prefix spack

fetch: spack/fetch
	@echo Fetched all packages && touch $@

env: fetch spack/env
	@echo This executes after the environment has been installed

clean:
	rm -rf spack/ env.mk spack.lock

ifeq (,$(filter clean,$(MAKECMDGOALS)))
include env.mk
endif
```
2022-05-05 | spack external find: add search path customization (#30479) | Greg Becker | 1 | -1/+1
2022-05-04 | gitlab: Remove temporary storage url from all stacks (#29949) | Scott Wittenburg | 7 | -10/+2
Gitlab pipelines run for spack already have other S3 storage locations configured for storage of binaries, so this PR removes the redundant per-pipeline mirror. As a result, the "cleanup" jobs will no longer be generated at the end of each pipeline, removing one possible point of pipeline failure.