path: root/lib
Age | Commit message | Author | Files | Lines
2021-11-05 | spack spec: add --reuse argument | Massimiliano Culpo | 4 | -16/+33
2021-11-05 | concretizer: get rid of last maximize directive in concretize.lp | Todd Gamblin | 1 | -32/+5
- [x] Get rid of forgotten maximize directive.
- [x] Simplify variant handling.
- [x] Fix bug in treatment of defaults on externals (don't count non-default variants on externals against them).
2021-11-05 | Trim dependencies on externals | Massimiliano Culpo | 1 | -1/+2
2021-11-05 | Fix a unit test to match the new OS semantics | Massimiliano Culpo | 1 | -1/+1
CNL, debian6 and Suse are not compatible
2021-11-05 | ASP-based solve: if an OS is set, respect the value | Massimiliano Culpo | 1 | -0/+3
2021-11-05 | Fix a typo in "variant_not_default" rule | Massimiliano Culpo | 1 | -1/+1
2021-11-05 | concretizer: rework spack solve output to handle reuse better | Todd Gamblin | 3 | -75/+137
2021-11-05 | spec: ensure_valid_variants() should not validate concrete specs | Todd Gamblin | 1 | -0/+4
Variants in concrete specs are "always" correct -- or at least we assume they are, because they were concretized before. They need not match the current version of the package.
2021-11-05 | concretizer: unify handling of single- and multi-valued variants | Todd Gamblin | 1 | -18/+5
Multi-valued variants previously maximized default values to handle cases where the default contained two values, e.g. `variant("foo", default="bar,baz")`. This is because we were previously minimizing non-default values, and `foo=bar`, `foo=baz`, and `foo=bar,baz` all had the same score, as none of them had any "non-default" values.

This commit changes the approach and considers a non-default value to be either a value set to something that is not a default *or* the absence of a default value from the set value. This allows multi- and single-valued variants to be handled the same way, with the same minimization criterion. It also means that the "best" value for every optimization criterion is now zero, which allows us to make useful assumptions about the optimization criteria.
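A minimal Python sketch of the scoring idea described in the entry above; the real criterion lives in Spack's ASP program, so the function name and details here are purely illustrative.

```python
def non_default_count(set_values, default_values):
    """Count values set to something non-default, plus defaults missing from the set value."""
    set_values, default_values = set(set_values), set(default_values)
    extra = set_values - default_values    # set to something that is not a default
    missing = default_values - set_values  # a default value absent from the set value
    return len(extra) + len(missing)


# Single- and multi-valued variants now score the same way, and the best score is zero:
assert non_default_count({"bar", "baz"}, {"bar", "baz"}) == 0  # full default -> 0 (best)
assert non_default_count({"bar"}, {"bar", "baz"}) == 1         # a missing default now counts
assert non_default_count({"qux"}, {"bar", "baz"}) == 3         # one extra value + two missing defaults
```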
2021-11-05 | concretizer: reuse installs, but assign default values for new builds | Todd Gamblin | 1 | -35/+119
Minimizing builds is tricky. We want a minimizing criterion because we want to reuse the available installs, but we also want things that have to be built to stick to *default preferences* from the package and from the user. We therefore treat built specs differently and apply a different set of optimization criteria to them. Spack's *first* priority is to reuse what it can, but if it builds something, the built specs will respect defaults and preferences. This is implemented by bumping the priority of optimization criteria for built specs -- so that they take precedence over the otherwise topmost-priority criterion to reuse what is installed.

The scheme relies on all of our optimization criteria being minimizations. That is, we need the case where all specs are reused to be better than any built spec could be. Basically, if nothing is built, all the build criteria are zero (the best possible) and the number of built packages dominates. If something *has* to be built, it must be strictly worse than full reuse, because:

1. it increases the number of built specs
2. it must have either zero or some positive number for all criteria

Our optimization criteria effectively sum into two buckets at once to accomplish this. We use a `build_priority()` number to shift the priority of optimization criteria for built specs higher.
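An illustrative Python sketch of why full reuse always wins under this scheme; Spack expresses the real criteria in ASP, so the tuples below are just a stand-in for lexicographic minimization with the number of built specs as the highest-priority criterion.

```python
# Criteria are minimized in priority order: number of built specs first,
# then the per-build criteria (e.g. non-default variants on built specs).
full_reuse   = (0, 0)  # nothing built: every build criterion is zero (the best possible)
clean_build  = (1, 0)  # even a "perfect" build is strictly worse than full reuse
sloppy_build = (1, 2)  # a build with non-default values is worse still

# Python tuples compare lexicographically, like prioritized minimization criteria.
assert min(full_reuse, clean_build, sloppy_build) == full_reuse
assert clean_build < sloppy_build
```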
2021-11-05 | tests: make `spack diff` test more lenient | Todd Gamblin | 1 | -4/+12
The constraints in the `spack diff` test were very specific and assumed a lot about the structure of what was being diffed. Relax them a bit to make them more resilient to changes.
2021-11-05 | concretizer: only minimize builds when `--reuse` is enabled | Todd Gamblin | 2 | -1/+4
Make the first minimization conditional on whether `--reuse` is enabled in the solve. If `--reuse` is not enabled, there will be nothing in the set to minimize and the objective function (for this criterion) will be 0 for every answer set.
2021-11-05 | concretizer: adjust integrity constraints to only apply to builds | Todd Gamblin | 1 | -6/+13
Many of the integrity constraints in the concretizer are there to restrict how solves are done, but they ignore that past solves may have had different initial conditions. For example, for things we're building, we want the allowed variants to be restricted to those currently in Spack packages, but if we are reusing a concrete spec, we need to be flexible about names that may have existed in old packages.

Similarly, restrictions around compatibility of OS's, compiler versions, compiler OS support, etc. are really only about what is supported by the *current* set of compilers/build tools known to Spack, not about what we may get from concrete specs.

- [x] restrict certain integrity constraints to only apply to packages that we need to build, and omit concrete specs from consideration.
2021-11-05 | concretizer: rework operating system semantics for installed packages | Todd Gamblin | 2 | -64/+104
The OS logic in the concretizer is still the way it was in the first version. Defaults are implemented in a fairly inflexible way using straight logic. Most of the other sections have been reworked to leave these kinds of decisions to optimization. This commit does that for OS's as well.

As with targets, we optimize for target matches, and we also try to optimize for OS matches between nodes. Additionally, this commit adds the notion of "OS compatibility", where we allow builds to depend on binaries for certain other OS's. E.g., for macOS, a bigsur build can depend on an already installed (concrete) catalina build. One cool thing about this is that we can declare additional compatible OS's later, e.g. CentOS and RHEL.
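A hypothetical illustration of the "OS compatibility" notion described above. Spack declares this relation as facts in its ASP program; the names and structure below are made up purely for illustration.

```python
# Hypothetical compatibility table: a build on the key OS may depend on
# already-installed binaries built for any OS in the value set.
OS_COMPATIBLE = {
    "bigsur": {"bigsur", "catalina"},  # a bigsur build can reuse a concrete catalina install
    "catalina": {"catalina"},
}


def can_depend_on(build_os: str, installed_os: str) -> bool:
    return installed_os in OS_COMPATIBLE.get(build_os, {build_os})


assert can_depend_on("bigsur", "catalina")
assert not can_depend_on("catalina", "bigsur")
```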
2021-11-05 | concretizer: `impose()` for concrete specs should use body facts | Todd Gamblin | 1 | -3/+3
The concretizer doesn't get a say in whether constraints from concrete specs are imposed, so use body facts for them.
2021-11-05 | include installed hashes in solve and optimize for reuse | Todd Gamblin | 3 | -15/+104
2021-11-05 | rename `checked_spec_clauses()` to `spec_clauses()` | Todd Gamblin | 1 | -8/+8
2021-11-05 | add `--reuse` option to `spack solve` | Todd Gamblin | 2 | -6/+13
2021-11-04 | Rename the temporary scope for bootstrap buildcache (#27231) | Massimiliano Culpo | 1 | -1/+1
If we don't rename, Spack will fail with:

```
ImportError: cannot bootstrap the "clingo" Python module from spec "clingo-bootstrap@spack+python %gcc target=x86_64" due to the following failures:
    'spack-install' raised ValueError: Invalid config scope: 'bootstrap'. Must be one of odict_keys(['_builtin', 'defaults', 'defaults/cray', 'bootstrap/cray', 'disable_modules', 'overrides-0'])
    Please run `spack -d spec zlib` for more verbose error messages
```

in case bootstrapping from binaries fails and we are falling back to bootstrapping from sources.
2021-11-04 | Sort arguments lexicographically in command's help (#27196) | Massimiliano Culpo | 1 | -0/+5
2021-11-03 | sip: fix python_include_dir (#26953) | Manuela Kuhn | 1 | -1/+3
2021-11-03 | Allow conditional variants (#24858) | Greg Becker | 12 | -32/+193
A common question from users has been how to model variants that are new in new versions of a package, or variants that are dependent on other variants. Our stock answer so far has been an unsatisfying combination of "just have it do nothing in the old version" and "tell Spack it conflicts". This PR enables conditional variants, on any spec condition. The syntax is straightforward, and matches that of previous features.
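A minimal sketch of what a conditional variant might look like in a `package.py` recipe. The `when=` argument and the package shown here are illustrative assumptions based on the description above; consult the Spack packaging documentation for the exact syntax.

```python
from spack import *  # standard package.py preamble at the time of this commit


class Example(Package):
    """Hypothetical package used to illustrate conditional variants."""

    homepage = "https://example.com"
    url = "https://example.com/example-2.0.tar.gz"

    version("2.0", sha256="0" * 64)  # placeholder checksum
    version("1.0", sha256="1" * 64)  # placeholder checksum

    # Hypothetical: the variant only exists for versions that actually support it,
    # instead of "doing nothing" in old versions or declaring a conflict.
    variant("shiny", default=True, description="Enable the new feature", when="@2.0:")

    # Variants can also be conditioned on other parts of the spec, e.g. other variants:
    variant("shiny-extras", default=False, description="Extras for the new feature",
            when="+shiny")
```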
2021-11-02 | Bootstrap GnuPG (#24003) | Massimiliano Culpo | 5 | -75/+251
* GnuPG: allow bootstrapping from buildcache and sources
* Add a test to bootstrap GnuPG from binaries
* Disable bootstrapping in tests
* Add e2e test to bootstrap GnuPG from sources on Ubuntu
* Add e2e test to bootstrap GnuPG on macOS
2021-11-02 | Update docs how to display loaded modules (#27159) | Richarda Butler | 1 | -2/+17
* Update spack load docs
2021-11-02 | Improved error messages from clingo (#26719) | Greg Becker | 7 | -72/+223
This PR adds error message sentinels to the clingo solve, attached to each of the rules that could fail a solve. The unsat core is then restricted to these messages, which makes the minimization problem tractable. Errors that can only be generated by a bug in the logic program or generating code are prefaced with "Internal error" to make clear to users that something has gone wrong on the Spack side of things.

* minimize unsat cores manually
* only error messages are choices/assumptions, for performance
* pre-check for unreachable nodes
* update tests for new error message
* make clingo concretization errors show up in cdash reports fully
* clingo: make import of clingo.ast parsing routines robust to clingo version

  Older `clingo` has `parse_string`; newer `clingo` has `parse_files`. Make the code work with both.

* make AST access functions backward-compatible with clingo 5.4.0

  The clingo AST API has changed since 5.4.0; add some functions to help us handle both versions of the AST.

Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
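A hedged sketch of the kind of version-compatibility shim the last two bullets describe. The helper name is hypothetical and this is not Spack's actual solver code; it only dispatches on whichever `clingo.ast` entry point is present, following the commit message's characterization of old vs. new clingo.

```python
import os
import tempfile

import clingo.ast


def parse_asp_program(text, callback):
    """Parse an ASP program string with whichever parsing routine this clingo provides."""
    if hasattr(clingo.ast, "parse_string"):
        clingo.ast.parse_string(text, callback)
    elif hasattr(clingo.ast, "parse_files"):
        # Only a file-based entry point is available: round-trip through a temp file.
        with tempfile.NamedTemporaryFile("w", suffix=".lp", delete=False) as f:
            f.write(text)
            path = f.name
        try:
            clingo.ast.parse_files([path], callback)
        finally:
            os.remove(path)
    else:
        raise RuntimeError("unsupported clingo.ast API")
```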
2021-11-02 | relocate: do not change library id to use rpaths on package install (#27139) | Seth R. Johnson | 3 | -23/+7
After #26608 I got a report about missing rpaths when building a downstream package independently using a spack-installed toolchain (@tmdelellis). This occurred because the spack-installed libraries were being linked into the downstream app, but the rpaths were not being manually added. Prior to #26608, autotools-installed libs would retain their hard-coded path and would thus propagate their link information into the downstream library on mac.

We could solve this problem *if* the mac linker (ld) respected `LD_RUN_PATH` like it does on GNU systems, i.e. adding `rpath` entries for each item in the environment variable. As it stands, on mac we would have to add the rpaths ourselves, either through spack's compiler wrapper scripts or manually (e.g. using `CMAKE_BUILD_RPATH` and pointing to the libraries of all the autotools-installed spack libraries).

The easier and safer thing to do for now is to simply stop changing the dylib IDs.
2021-11-02 | spack arch: add --generic argument (#27061) | Michael Kuhn | 1 | -0/+8
The `--generic` argument allows printing the best generic target for the current machine. This can be quite handy when wanting to find the generic architecture to use when building a shared software stack for multiple machines.
2021-11-02 | Add tag filters to `spack test list` (#26842) | Tamara Dahlgren | 1 | -5/+21
2021-11-01 | feature: add "spack tags" command (#26136) | Tamara Dahlgren | 9 | -77/+490
This PR adds a "spack tags" command to output package tags or (available) packages with those tags. It also ensures each package is listed in the tag cache ONLY ONCE per tag.
2021-11-01 | Fix caching of spack.repo.all_package_names() (#26991) | Massimiliano Culpo | 2 | -9/+16
fixes #24522
2021-10-29 | For Spack commands that fail but don't throw exceptions, we were discarding the return code (#27077) | Peter Scheibel | 1 | -1/+1
2021-10-29 | config add: infer type based on JSON schema validation errors (#27035) | Massimiliano Culpo | 3 | -19/+45
- [x] Allow adding enumerated types and types whose default value is forbidden by the schema
- [x] Add a test for using enumerated types in the tests for `spack config add`
- [x] Make `config add` tests use the `mutable_config` fixture so they do not affect other tests

Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
2021-10-28 | bugfix: config edit should work with a malformed `spack.yaml` | Todd Gamblin | 2 | -10/+31
If you don't format `spack.yaml` correctly, `spack config edit` still fails and you have to edit your `spack.yaml` manually.

- [x] Add some code to `_main()` to defer `ConfigFormatError` when loading the environment, until we know what command is being run.
- [x] Make `spack config edit` use `SPACK_ENV` instead of the config scope object to find `spack.yaml`, so it can work even if the environment is bad.

Co-authored-by: scheibelp <scheibel1@llnl.gov>
2021-10-28 | bugfix: `spack config get <section>` in environments | Todd Gamblin | 1 | -4/+4
`spack config get <section>` was erroneously returning just the `spack.yaml` for the environment. It should return the combined configuration for that section (including anything from `spack.yaml`), even in an environment.

- [x] reorder conditions in `cmd/config.py` to fix
2021-10-28 | config: ensure that options like `--debug` are set first | Todd Gamblin | 1 | -17/+38
`spack --debug config edit` was not working properly -- it would not show a stack trace for configuration errors.

- [x] Rework `_main()` and add some notes for maintainers on where things need to go for configuration to work properly.
- [x] Move config setup to *after* command-line parsing is done.

Co-authored-by: scheibelp <scheibel1@llnl.gov>
2021-10-28 | errors: Rework error handling in `main()` | Todd Gamblin | 1 | -23/+48
`main()` has grown, and in some cases code that can generate errors has gotten outside the top-level try/catch in there. This means that simple errors like config issues give you large stack traces, which shouldn't happen without `--debug`.

- [x] Split `main()` into `main()` for the top-level error handling and `_main()` with all logic.
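A minimal sketch of the `main()`/`_main()` split described above (illustrative only; Spack's real entry point, option handling, and error types differ -- `SpackError` below is just a stand-in).

```python
import sys
import traceback


class SpackError(Exception):
    """Stand-in for Spack's user-facing error type."""


def _main(argv):
    # All real logic lives here and is free to raise.
    raise SpackError("config issue (example)")


def main(argv=None):
    """Top-level error handling only: short message normally, full trace with --debug."""
    argv = sys.argv[1:] if argv is None else argv
    try:
        return _main(argv)
    except SpackError as e:
        if "--debug" in argv:
            traceback.print_exc()
        else:
            print(f"Error: {e}", file=sys.stderr)
        return 1
```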
2021-10-28 | config: fix `SPACK_DISABLE_LOCAL_CONFIG`, remove `$user_config_path` (#27022) | Todd Gamblin | 6 | -17/+13
There were some loose ends left in #26735 that cause errors when using `SPACK_DISABLE_LOCAL_CONFIG`.

- [x] Fix hard-coded `~/.spack` references in `install_test.py` and `monitor.py`

Also, if `SPACK_DISABLE_LOCAL_CONFIG` is used, there is the issue that `$user_config_path`, when used in configuration files, makes no sense, because there is no user config scope. Since we already have `$user_cache_path` in configuration files, and since there really shouldn't be *any* data stored in a configuration scope (which is what you'd configure in `config.yaml`/`bootstrap.yaml`/etc.), this just removes `$user_config_path`.

There will *always* be a `$user_cache_path`, as Spack needs to write files, but we shouldn't rely on the existence of a particular configuration scope in the Spack code, as scopes are configurable, both in number and location.

- [x] Remove `$user_config_path` substitution.
- [x] Fix reference to `$user_config_path` in `etc/spack/defaults/bootstrap.yaml` to refer to `$user_cache_path`, which is where it was intended to be.
2021-10-28 | Deactivate previous env before activating new one (#25409) | Harmen Stoppels | 2 | -15/+23
* Deactivate previous env before activating new one

Currently on develop you can run `spack env activate` multiple times to switch between environments, but they leave traces, even though Spack only supports one active environment at a time.

Currently:
```console
$ spack env create a
$ spack env create b
$ spack env activate -p a
[a] $ spack env activate -p b
[b] [a] $ spack env activate -p b
[a] [b] [a] $ spack env activate -p a
[a] [b] [c] $ echo $MANPATH | tr ":" "\n"
/path/to/environments/a/.spack-env/view/share/man
/path/to/environments/a/.spack-env/view/man
/path/to/environments/b/.spack-env/view/share/man
/path/to/environments/b/.spack-env/view/man
```

This PR fixes that:
```console
$ spack env activate -p a
[a] $ spack env activate -p b
[b] $ spack env activate -p a
[a] $ echo $MANPATH | tr ":" "\n"
/path/to/environments/a/.spack-env/view/share/man
/path/to/environments/a/.spack-env/view/man
```
2021-10-28 | YamlFilesystemView: improve file removal performance via batching (#24355) | Robert Blackwell | 3 | -17/+23
* Drastically improve YamlFilesystemView file removal via batching

The `remove_file` routine has to check if the file is owned by multiple packages, so it doesn't remove necessary files. This is done by the `get_all_specs` routine, which walks the entire package tree. With large numbers of packages on shared file systems, this can take seconds per file tree traversal, which adds up extremely quickly. For example, a single deactivate of a largish python package in our software stack on GPFS took approximately 40 minutes.

This patch simply replaces `remove_file` with a batch `remove_files` routine. This routine removes a list of files rather than a single file, requiring only one traversal per batch. In practice this means a package can be removed in seconds rather than potentially hours, essentially a ~100x speedup (ignoring initial deactivation logic, which takes about 3 minutes in our test setup).
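An illustrative sketch of the batching idea, not the actual `YamlFilesystemView` code: the expensive ownership query (a full spec-tree traversal) is done once per batch, and its result is reused for every file in the batch.

```python
import os
from typing import Iterable, Set


def remove_files(files_to_remove: Iterable[str], files_still_needed: Set[str]) -> None:
    """Remove a batch of files, skipping any still owned by other installed specs.

    `files_still_needed` stands in for the result of a single `get_all_specs`-style
    traversal, computed once for the whole batch rather than once per file.
    """
    for path in files_to_remove:
        if path in files_still_needed:
            continue  # shared with another package; keep it
        try:
            os.remove(path)
        except FileNotFoundError:
            pass  # already gone; nothing to do
```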
2021-10-28 | Fix sbang hook for non-writable files (#27007) | Michael Kuhn | 2 | -0/+15
* Fix sbang hook for non-writable files

PR #26793 seems to have broken the sbang hook for files with missing write permissions. Installing perl now breaks with the following error:

```
==> [2021-10-28-12:09:26.832759] Error: PermissionError: [Errno 13] Permission denied: '$SPACK/opt/spack/linux-fedora34-zen2/gcc-11.2.1/perl-5.34.0-afuweplnhphcojcowsc2mb5ngncmczk4/bin/cpanm'
```

Temporarily add write permissions to the original file so it can be overwritten with the patched one, and test that file permissions are preserved in sbang even for non-writable files.

Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
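A minimal sketch of the "temporarily make writable, then restore" pattern this fix describes (illustrative; not the actual sbang hook code).

```python
import os
import stat
from contextlib import contextmanager


@contextmanager
def temporarily_writable(path):
    """Add owner-write permission for the duration of the block, then restore the original mode."""
    mode = os.stat(path).st_mode
    try:
        os.chmod(path, mode | stat.S_IWUSR)
        yield
    finally:
        os.chmod(path, mode)  # restore the original (possibly read-only) permissions


# Hypothetical usage: overwrite a read-only script with its patched contents.
# with temporarily_writable("/path/to/bin/cpanm"):
#     with open("/path/to/bin/cpanm", "wb") as f:
#         f.write(patched_bytes)
```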
2021-10-28 | buildcaches: fix directory link relocation (#26948) | Paul Ferrell | 2 | -3/+17
When relocating a binary distribution, Spack only checks files to see if they are a link that needs to be relocated. Directories can be such links as well, however, and need to undergo the same checks and potential relocation.
2021-10-27 | Remove documentation tests from GitHub Actions (#26981) | Massimiliano Culpo | 1 | -9/+0
We moved the documentation tests to Read the Docs a while ago, so remove the corresponding job on GitHub.
2021-10-27 | tests: speed up `spack list` tests (#26958) | Todd Gamblin | 2 | -36/+41
`spack list` tests are not using mock packages for some reason, and many are marked as potentially slow. This isn't really necessary; we don't need 6,000 packages to test the command.

- [x] update tests to use `mock_packages` fixture
- [x] remove `maybeslow` annotations
2021-10-27 | Allow non-UTF-8 encoding in sbang hook (#26793) | Harmen Stoppels | 2 | -71/+181
Currently Spack reads full files containing shebangs to memory as strings, meaning Spack would have to guess their encoding. Currently Spack has a fixed guess of UTF-8. This is unnecessary, since e.g. the Linux kernel does not assume an encoding on paths at all; they are just bytes plus some delimiters on the byte level.

This commit does the following:
1. Shebangs are treated as bytes, so that e.g. latin1-encoded files do not throw Unicode encoding errors, and a test is added for this.
2. No more bytes than necessary are read to memory; we only have to read until the first newline, and from there on we can copy the file byte by byte instead of decoding and re-encoding text.
3. The number of bytes read is capped at 4096; if no newline is found before that, we don't attempt to patch the file.
4. Support for luajit is added too.

This should make Spack both more efficient and usable for non-UTF8 files.
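An illustrative sketch of the byte-oriented shebang handling described above (not Spack's actual sbang hook; the 4096-byte cap mirrors the commit message).

```python
SHEBANG_LIMIT = 4096


def read_shebang(path):
    """Return the first line as raw bytes, or None if it is too long to patch."""
    with open(path, "rb") as f:  # bytes: no encoding is assumed or guessed
        prefix = f.read(SHEBANG_LIMIT)
    newline = prefix.find(b"\n")
    if newline == -1:
        return None  # no newline within the cap: leave the file alone
    return prefix[: newline + 1]


# A latin1-encoded script no longer trips up on decoding, because we never
# decode -- we only inspect and copy bytes.
```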
2021-10-27 | Fix assumption v.concrete => isinstance(v, Version) (#26537) | Harmen Stoppels | 2 | -2/+23
* Add test
* Only extend with Git version when using Version
* xfail v.concrete test
2021-10-26 | config: overrides for caches and system and user scopes (#26735) | Harmen Stoppels | 7 | -47/+228
Spack's `system` and `user` scopes provide ways for administrators and users to set global defaults for all Spack instances, but for use cases where one wants a clean Spack installation, these scopes can be undesirable. For example, users may want to opt out of global system configuration, or they may want to ignore their own home directory settings when running in a continuous integration environment.

Spack also, by default, keeps various caches and user data in `~/.spack`, but users may want to override these locations.

Spack provides three environment variables that allow you to override or opt out of configuration locations:

* `SPACK_USER_CONFIG_PATH`: Override the path to use for the `user` (`~/.spack`) scope.
* `SPACK_SYSTEM_CONFIG_PATH`: Override the path to use for the `system` (`/etc/spack`) scope.
* `SPACK_DISABLE_LOCAL_CONFIG`: Set this environment variable to completely disable *both* the system and user configuration directories. Spack will only consider its own defaults and `site` configuration locations.

And one that allows you to move the default cache location:

* `SPACK_USER_CACHE_PATH`: Override the default path to use for user data (misc_cache, tests, reports, etc.)

With these settings, if you want to isolate Spack in a CI environment, you can do this:

    export SPACK_DISABLE_LOCAL_CONFIG=true
    export SPACK_USER_CACHE_PATH=/tmp/spack

This is a stop-gap approach until we have figured out how to deal with the system and user config scopes more generally, as there are plans to potentially / eventually get rid of them.

**User config**

Spack is a bit of a pain when you have:
- a shared $HOME folder across different systems.
- multiple Spack versions on the same system.

**System config**

- On shared systems with a versioned programming environment / toolkit, system administrators want to provide config for each version (e.g. 21.09, 21.10) of the programming environment, and the user Spack instance should be able to pick this up without a steep learning curve.
- On shared systems the user should be able to opt out of the hard-coded config scope in /etc/spack, since it may be incompatible with their particular instance. Currently Spack can only opt out of all config scopes through overrides with `"config:":`, `"packages:":`, but that also drops the defaults config, which would have to be repeated, which is undesirable, especially the lengthy packages.yaml.

An example use case is: having config in this folder:

```
/path/to/programming/environment/{version}/{compilers,packages}.yaml
```

and have `module load spack-system-config` set the variable

```
SPACK_SYSTEM_CONFIG_PATH=/path/to/programming/environment/{version}
```

where the user no longer has to worry about what `{version}` they are on.

**Continuous integration**

Finally, there is the use case of continuous integration, which may clone an arbitrary Spack version, which optimally should not pick up system or user config from the previous run (like may happen in classical bare metal non-containerized filesystem side effect ridden jenkins pipelines). In fact this is very similar to how spack itself tries to avoid picking up system dependencies during builds...

**But environments solve this?**

- You could do `include`s in environment files to get similar behavior to the spack_system_config_path example, but environments 1) require paths to individual config files, not directories, and 2) fail if the listed config file does not exist.
- They allow you to override config scopes, but this is generally too rigorous, as it requires you to repeat the default config, in particular packages.yaml, and just defies the point of layered config.

Co-authored-by: Tom Scogland <tscogland@llnl.gov>
Co-authored-by: Tim Fuller <tjfulle@sandia.gov>
Co-authored-by: Steve Leak <sleak@lbl.gov>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
2021-10-26 | modules: allow user to remove arch dir (#24156) | Greg Becker | 8 | -11/+58
* allow no arch-dir modules
* add tests for modules with no arch
* document arch-specific module roots
2021-10-26 | modules: configurable module defaults (#24367) | Greg Becker | 4 | -3/+86
Any spec satisfying a default will be symlinked to `default`. If multiple specs have modulefiles in the same directory and satisfy configured module defaults, then whichever was written last will be the default.
2021-10-25 | containerize: pin the Spack version used in a container (#21910) | Massimiliano Culpo | 11 | -86/+447
This PR permits specifying the `url` and `ref` of the Spack instance used in a container recipe, simply by expanding the YAML schema as outlined in #20442:

```yaml
container:
  images:
    os: amazonlinux:2
    spack:
      ref: develop
      resolve_sha: true
```

The `resolve_sha` option, if true, verifies the `ref` by cloning the Spack repository in a temporary directory and transforming any tag or branch name to a commit sha. When this new ability is leveraged, an additional "bootstrap" stage is added, which builds an image with Spack set up and ready to install software. The Spack repository to be used can be customized with the `url` keyword under `spack`.

Modifications:
- [x] Permit pinning the version of Spack, either by branch, tag, or sha
- [x] Added a few new OSes (centos:8, amazonlinux:2, ubuntu:20.04, alpine:3, cuda:11.2.1)
- [x] Permit printing the bootstrap image as a standalone
- [x] Add documentation on the new part of the schema
- [x] Add unit tests for different use cases
2021-10-25 | cuda: add 11.4.1, 11.4.2, 11.5.0 (#26892) | Olli Lupton | 1 | -1/+4
* cuda: add 11.4.1, 11.4.2, 11.5.0.

  Note that the curses dependency from cuda-gdb was dropped in 11.4.0.

* Update clang/gcc constraints.
* Address review, assume clang 12 is OK from 11.4.1 onwards.
* superlu-dist@7.1.0 conflicts with cuda@11.5.0.
* Update var/spack/repos/builtin/packages/superlu-dist/package.py

Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>