Age | Commit message | Author | Files | Lines |
|
* Update spack load docs
|
|
This PR adds error message sentinels to the clingo solve, attached to each of the rules that could fail a solve. The unsat core is then restricted to these messages, which makes the minimization problem tractable. Errors that can only be generated by a bug in the logic program or in the generating code are prefaced with "Internal error" to make it clear to users that something has gone wrong on the Spack side of things.
* minimize unsat cores manually
* only error messages are choices/assumptions for performance
* pre-check for unreachable nodes
* update tests for new error message
* make clingo concretization errors show up in cdash reports fully
* clingo: make import of clingo.ast parsing routines robust to clingo version
Older `clingo` has `parse_string`; newer `clingo` has `parse_files`. Make the
code work with both.
* make AST access functions backward-compatible with clingo 5.4.0
The clingo AST API has changed since 5.4.0; add some helper functions so we
can handle both versions of the AST.
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
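A minimal sketch of the kind of compatibility shim described above; the helper name and the feature-detection approach are assumptions for illustration, not the exact Spack code.
```python
import clingo.ast


def ast_parse(program_text, callback):
    """Parse an ASP program with whichever routine this clingo provides."""
    if hasattr(clingo.ast, "parse_string"):
        clingo.ast.parse_string(program_text, callback)
    else:
        import tempfile

        # parse_files wants file names, so round-trip through a temp file.
        with tempfile.NamedTemporaryFile("w", suffix=".lp", delete=False) as f:
            f.write(program_text)
        clingo.ast.parse_files([f.name], callback)
```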
|
|
After #26608 I got a report about missing rpaths when building a
downstream package independently using a spack-installed toolchain
(@tmdelellis). This occurred because the spack-installed libraries were
being linked into the downstream app, but the rpaths were not being
manually added. Prior to #26608 autotools-installed libs would retain
their hard-coded path and would thus propagate their link information
into the downstream library on mac.
We could solve this problem *if* the mac linker (ld) respected
`LD_RUN_PATH` like it does on GNU systems, i.e. added an `rpath` entry
for each item in the environment variable. As it stands, on mac we have
to add rpaths either through Spack's compiler wrapper scripts or
manually (e.g. by pointing `CMAKE_BUILD_RPATH` at the libraries of
all the autotools-installed Spack packages).
The easier and safer thing to do for now is to simply stop changing the
dylib IDs.
|
|
The `--generic` argument allows printing the best generic target for the
current machine. This is handy for finding the generic architecture to
use when building a shared software stack for multiple machines.
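For context, the "best generic target" is the closest generic ancestor of the detected microarchitecture. A rough sketch of the idea using archspec, the library Spack relies on for microarchitecture detection (`host()` and `generic` are assumed from archspec's API; example values are illustrative):
```python
import archspec.cpu

host = archspec.cpu.host()   # detected microarchitecture, e.g. zen2
print(host.generic.name)     # closest generic ancestor, e.g. x86_64
```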
|
|
|
|
This PR adds a "spack tags" command to output package tags or
(available) packages with those tags. It also ensures each package
is listed in the tag cache ONLY ONCE per tag.
|
|
fixes #24522
|
|
the return code (#27077)
|
|
- [x] Allow adding enumerated types and types whose default value is forbidden by the schema
- [x] Add a test for using enumerated types in the tests for `spack config add`
- [x] Make `config add` tests use the `mutable_config` fixture so they do not
affect other tests
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
|
|
If your `spack.yaml` is not formatted correctly, even `spack config edit` fails,
so you have to fix `spack.yaml` manually.
- [x] Add some code to `_main()` to defer `ConfigFormatError` when loading the
environment, until we know what command is being run.
- [x] Make `spack config edit` use `SPACK_ENV` instead of the config scope
object to find `spack.yaml`, so it can work even if the environment is bad.
Co-authored-by: scheibelp <scheibel1@llnl.gov>
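The gist of the `SPACK_ENV` approach can be sketched like this (simplified; not the actual implementation in `cmd/config.py`):
```python
import os


def spack_yaml_to_edit():
    """Locate spack.yaml via SPACK_ENV rather than the config scope object,
    so it works even when the environment's config cannot be parsed."""
    env_dir = os.environ.get("SPACK_ENV")
    if env_dir:
        return os.path.join(env_dir, "spack.yaml")
    return None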
|
|
`spack config get <section>` was erroneously returning just the `spack.yaml`
for the environment.
It should return the combined configuration for that section (including
anything from `spack.yaml`), even in an environment.
- [x] reorder conditions in `cmd/config.py` to fix
|
|
`spack --debug config edit` was not working properly -- it would not show a
stack trace for configuration errors.
- [x] Rework `_main()` and add some notes for maintainers on where things need
to go for configuration to work properly.
- [x] Move config setup to *after* command-line parsing is done.
Co-authored-by: scheibelp <scheibel1@llnl.gov>
|
|
`main()` has grown, and in some cases code that can generate errors has gotten
outside the top-level try/catch in there. This means that simple errors like
config issues give you large stack traces, which shouldn't happen without
`--debug`.
- [x] Split `main()` into `main()` for the top-level error handling and
`_main()` with all logic.
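The resulting structure looks roughly like this (illustrative shape only, not the exact code in Spack's `main.py`):
```python
def _main(argv=None):
    # Parse arguments, set up configuration, and dispatch the command.
    # Anything here may raise; errors bubble up to main().
    ...


def main(argv=None):
    try:
        return _main(argv)
    except Exception as e:
        # Without --debug, print a short message instead of a stack trace.
        print("Error:", e)
        return 1
```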
|
|
There were some loose ends left in #26735 that cause errors when
using `SPACK_DISABLE_LOCAL_CONFIG`.
- [x] Fix hard-coded `~/.spack` references in `install_test.py` and `monitor.py`
Also, if `SPACK_DISABLE_LOCAL_CONFIG` is used, there is the issue that
`$user_config_path`, when used in configuration files, makes no sense,
because there is no user config scope.
Since we already have `$user_cache_path` in configuration files, and since there
really shouldn't be *any* data stored in a configuration scope (which is what
you'd configure in `config.yaml`/`bootstrap.yaml`/etc.), this just removes
`$user_config_path`.
There will *always* be a `$user_cache_path`, as Spack needs to write files, but
we shouldn't rely on the existence of a particular configuration scope in the
Spack code, as scopes are configurable, both in number and location.
- [x] Remove `$user_config_path` substitution.
- [x] Fix reference to `$user_config_path` in `etc/spack/defaults/bootstrap.yaml`
to refer to `$user_cache_path`, which is where it was intended to be.
|
|
* Deactivate previous env before activating new one
Currently on develop you can run `spack env activate` multiple times to switch
between environments, but they leave traces, even though Spack only supports
one active environment at a time.
Currently:
```console
$ spack env create a
$ spack env create b
$ spack env activate -p a
[a] $ spack env activate -p b
[b] [a] $ spack env activate -p b
[a] [b] [a] $ spack env activate -p a
[a] [b] [c] $ echo $MANPATH | tr ":" "\n"
/path/to/environments/a/.spack-env/view/share/man
/path/to/environments/a/.spack-env/view/man
/path/to/environments/b/.spack-env/view/share/man
/path/to/environments/b/.spack-env/view/man
```
This PR fixes that:
```console
$ spack env activate -p a
[a] $ spack env activate -p b
[b] $ spack env activate -p a
[a] $ echo $MANPATH | tr ":" "\n"
/path/to/environments/a/.spack-env/view/share/man
/path/to/environments/a/.spack-env/view/man
```
|
|
* Drastically improve YamlFilesystemView file removal via batching
The `remove_file` routine has to check if the file is owned by multiple packages, so it doesn't
remove necessary files. This is done by the `get_all_specs` routine, which walks the entire
package tree. With large numbers of packages on shared file systems, this can take seconds
per file tree traversal, which adds up extremely quickly. For example, a single deactivate
of a largish python package in our software stack on GPFS took approximately 40 minutes.
This patch simply replaces `remove_file` with a batch `remove_files` routine. This routine
removes a list of files rather than a single file, requiring only one traversal per batch. In
practice this means a package can be removed in seconds rather than potentially hours,
essentially a ~100x speedup (ignoring initial deactivation logic, which takes about 3 minutes
in our test setup).
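A simplified sketch of the batching idea; `get_all_specs` is named above, but `files_of` and the overall shape are hypothetical, not the view's real API:
```python
import os


def remove_files(view, files_to_remove):
    """Remove a batch of files with a single ownership scan."""
    # One expensive traversal of the package tree for the whole batch,
    # instead of one get_all_specs() call per file.
    still_owned = set()
    for spec in view.get_all_specs():             # specs remaining in the view
        still_owned.update(view.files_of(spec))   # hypothetical helper

    for path in files_to_remove:
        if path not in still_owned:               # keep files other packages need
            os.remove(path)
```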
|
|
* Fix sbang hook for non-writable files
PR #26793 seems to have broken the sbang hook for files with missing
write permissions. Installing perl now breaks with the following error:
```
==> [2021-10-28-12:09:26.832759] Error: PermissionError: [Errno 13] Permission denied: '$SPACK/opt/spack/linux-fedora34-zen2/gcc-11.2.1/perl-5.34.0-afuweplnhphcojcowsc2mb5ngncmczk4/bin/cpanm'
```
Temporarily add write permissions to the original file so it can be
overwritten with the patched one.
Also test that file permissions are preserved by sbang, even for non-writable files.
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
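The workaround amounts to something like this (a sketch, not the exact hook code):
```python
import os
import stat


def overwrite_patched(path, write_patched_file):
    """Temporarily add write permission, overwrite, then restore the mode."""
    mode = os.stat(path).st_mode
    os.chmod(path, mode | stat.S_IWUSR)    # make the file writable for us
    try:
        write_patched_file(path)           # write the sbang-patched contents
    finally:
        os.chmod(path, mode)               # restore the original permissions
```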
|
|
When relocating a binary distribution, Spack only checks files to see
if they are a link that needs to be relocated. Directories can be
such links as well, however, and need to undergo the same checks
and potential relocation.
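In other words, the relocation walk has to test directories for link-ness too; roughly (illustrative only, not the relocate module's code):
```python
import os


def links_to_relocate(prefix):
    """Yield every symlink under prefix, whether it points at a file or a directory."""
    for root, dirs, files in os.walk(prefix):
        for name in dirs + files:          # directory symlinks show up in `dirs`
            path = os.path.join(root, name)
            if os.path.islink(path):
                yield path
```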
|
|
We moved documentation tests to readthedocs a while ago, so remove the
one on GitHub.
|
|
`spack list` tests are not using mock packages for some reason, and many
are marked as potentially slow. This isn't really necessary; we don't need
6,000 packages to test the command.
- [x] update tests to use `mock_packages` fixture
- [x] remove `maybeslow` annotations
|
|
Currently Spack reads whole files containing shebangs into memory as
strings, which means it has to guess their encoding; the current fixed
guess is UTF-8.
This is unnecessary, since e.g. the Linux kernel does not assume an
encoding on paths at all; a shebang is just bytes with some delimiters
at the byte level.
This commit does the following:
1. Shebangs are treated as bytes, so that e.g. latin1-encoded files do
not throw UnicodeEncoding errors; a test is added for this.
2. No more bytes than necessary are read into memory: we only have to read
until the first newline, and from there on we can copy the file byte by
byte instead of decoding and re-encoding text.
3. We cap the number of bytes read at 4096; if no newline is found
before that, we don't attempt to patch the file.
4. Add support for luajit too.
This should make Spack both more efficient and usable for non-UTF8
files.
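A condensed sketch of the byte-level handling described in points 1-3 (not the actual sbang hook code):
```python
def read_shebang_line(path, limit=4096):
    """Return the raw shebang line as bytes, or None if we should not patch."""
    with open(path, "rb") as f:
        head = f.read(limit)               # never read more than `limit` bytes
    if not head.startswith(b"#!"):
        return None
    newline = head.find(b"\n")
    if newline == -1:
        return None                        # no newline within the cap: skip
    return head[:newline]                  # raw bytes; no encoding assumed
```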
|
|
* Add test
* Only extend with Git version when using Version
* xfail v.concrete test
|
|
Spack's `system` and `user` scopes provide ways for administrators and
users to set global defaults for all Spack instances, but for use cases
where one wants a clean Spack installation, these scopes can be undesirable.
For example, users may want to opt out of global system configuration, or
they may want to ignore their own home directory settings when running in
a continuous integration environment.
Spack also, by default, keeps various caches and user data in `~/.spack`,
but users may want to override these locations.
Spack provides three environment variables that allow you to override or
opt out of configuration locations:
* `SPACK_USER_CONFIG_PATH`: Override the path to use for the
`user` (`~/.spack`) scope.
* `SPACK_SYSTEM_CONFIG_PATH`: Override the path to use for the
`system` (`/etc/spack`) scope.
* `SPACK_DISABLE_LOCAL_CONFIG`: set this environment variable to completely
disable *both* the system and user configuration directories. Spack will
only consider its own defaults and `site` configuration locations.
And one that allows you to move the default cache location:
* `SPACK_USER_CACHE_PATH`: Override the default path to use for user data
(misc_cache, tests, reports, etc.)
With these settings, if you want to isolate Spack in a CI environment, you can do this:
```console
export SPACK_DISABLE_LOCAL_CONFIG=true
export SPACK_USER_CACHE_PATH=/tmp/spack
```
This is a stop-gap approach until we have figured out how to deal with
the system and user config scopes more generally, as there are plans to
potentially / eventually get rid of them.
**User config**
Spack is a bit of a pain when you have:
- a shared $HOME folder across different systems.
- multiple Spack versions on the same system.
**System config**
- On shared systems with a versioned programming environment / toolkit,
system administrators want to provide config for each version (e.g.
21.09, 21.10) of the programming environment, and the user Spack
instance should be able to pick this up without a steep learning
curve.
- On shared systems the user should be able to opt out of the
hard-coded config scope in /etc/spack, since it may be incompatible
with their particular instance. Currently Spack can only opt out of all
config scopes through overrides with `"config:":`, `"packages:":`, etc., but
that also drops the defaults config, which would then have to be repeated;
this is undesirable, especially for the lengthy packages.yaml.
An example use case is: having config in this folder:
```
/path/to/programming/environment/{version}/{compilers,packages}.yaml
```
and have `module load spack-system-config` set the variable
```
SPACK_SYSTEM_CONFIG_PATH=/path/to/programming/environment/{version}
```
where the user no longer has to worry about what `{version}` they are
on.
**Continuous integration**
Finally, there is the continuous integration use case: a CI job may clone
an arbitrary Spack version and ideally should not pick up system or user
config from a previous run (as can happen in classic bare-metal,
non-containerized Jenkins pipelines riddled with filesystem side effects).
This is very similar to how Spack itself tries to avoid picking up system
dependencies during builds.
**But environments solve this?**
- You could use `include`s in environment files to get behavior similar to
the `SPACK_SYSTEM_CONFIG_PATH` example above, but environment includes
1) require paths to individual config files, not directories, and
2) fail if a listed config file does not exist.
- Environments allow you to override config scopes, but this is generally
too rigid, as it requires you to repeat the default config, in particular
packages.yaml, and defeats the point of layered config.
Co-authored-by: Tom Scogland <tscogland@llnl.gov>
Co-authored-by: Tim Fuller <tjfulle@sandia.gov>
Co-authored-by: Steve Leak <sleak@lbl.gov>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
|
|
* allow no arch-dir modules
* add tests for modules with no arch
* document arch-specific module roots
|
|
Any spec satisfying a default will be symlinked to `default`.
If multiple specs have modulefiles in the same directory and satisfy
configured module defaults, then whichever was written last will be
the default.
|
|
This PR makes it possible to specify the `url` and `ref` of the Spack instance used in a container recipe, simply by expanding the YAML schema as outlined in #20442:
```yaml
container:
images:
os: amazonlinux:2
spack:
ref: develop
resolve_sha: true
```
The `resolve_sha` option, if true, verifies the `ref` by cloning the Spack repository into a temporary directory and transforming any tag or branch name into a commit sha. When this new ability is used, an additional "bootstrap" stage is added, which builds an image with Spack set up and ready to install software. The Spack repository to be used can be customized with the `url` keyword under `spack`.
Modifications:
- [x] Allow pinning the version of Spack, either by branch, tag, or sha
- [x] Added a few new OSes (centos:8, amazonlinux:2, ubuntu:20.04, alpine:3, cuda:11.2.1)
- [x] Allow printing the bootstrap image on its own
- [x] Add documentation on the new part of the schema
- [x] Add unit tests for different use cases
|
|
* cuda: add 11.4.1, 11.4.2, 11.5.0.
Note that the curses dependency from cuda-gdb was dropped in 11.4.0.
* Update clang/gcc constraints.
* Address review, assume clang 12 is OK from 11.4.1 onwards.
* superlu-dist@7.1.0 conflicts with cuda@11.5.0.
* Update var/spack/repos/builtin/packages/superlu-dist/package.py
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
|
|
1. Currently it prints not just the spec name, but the dependencies +
their variants + their compilers + their architectures + ...
2. It's clear from the context what spec the message applies to, so
let's not print the spec at all.
|
|
Co-authored-by: Ryan Krattiger <ryan.krattiger@kitware.com>
|
|
These three rules in `concretize.lp` are overly complex:
```prolog
:- not provider(Package, Virtual),
provides_virtual(Package, Virtual),
virtual_node(Virtual).
```
```prolog
:- provides_virtual(Package, V1), provides_virtual(Package, V2), V1 != V2,
provider(Package, V1), not provider(Package, V2),
virtual_node(V1), virtual_node(V2).
```
```prolog
provider(Package, Virtual) :- root(Package), provides_virtual(Package, Virtual).
```
and they can be simplified to just:
```prolog
provider(Package, Virtual) :- node(Package), provides_virtual(Package, Virtual).
```
- [x] simplify virtual rules to just one implication
- [x] rename `provides_virtual` to `virtual_condition_holds`
|
|
|
|
fixes #26866
This semantics fits with the way Spack currently treats providers of
virtual dependencies. It needs to be revisited when #15569 is reworked
with a new syntax.
|
|
|
|
* py-vermin: add latest version 1.3.1
* Exclude a line from Vermin since the version is already being checked for.
Vermin 1.3.1 finds that the `encoding` kwarg of builtin `open()` requires Python 3+.
|
|
|
|
The OS only interprets shebangs if a file is executable, so there should
be no need to modify files where no execute bit is set.
This solves issues encountered while packaging software such as
COVISE (https://github.com/hlrs-vis/covise), which includes example data
in Tecplot format. The sbang post-install hook is applied to every installed
file that starts with the two characters #!, but this fails on the binary Tecplot
files, as they happen to start with #!TDV. Decoding them with UTF-8 fails
and an exception is thrown during post_install.
Co-authored-by: Martin Aumüller <aumuell@reserv.at>
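The check boils down to something like this (a sketch, not the hook's exact code):
```python
import os
import stat


def shebang_candidate(path):
    """Only executable files whose first two bytes are '#!' need patching."""
    mode = os.stat(path).st_mode
    if not mode & (stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH):
        return False                 # no execute bit: the OS ignores any shebang
    with open(path, "rb") as f:
        return f.read(2) == b"#!"
```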
|
|
Backports the relevant bits of https://github.com/pytest-dev/py/commit/0f77b6e66f79c07dbb657e2b4a7e94263eacc01b
|
|
This commit contains changes to support Google Cloud Storage
buckets as mirrors, meant for hosting Spack build-caches. This
feature is beneficial for folks that are running infrastructure on
Google Cloud Platform. On public cloud systems, resources are
ephemeral and in many cases, installing compilers, MPI flavors,
and user packages from scratch takes up considerable time.
Giving users the ability to host a Spack mirror that can store build
caches in GCS buckets offers a clean solution for reducing
application rebuilds for Google Cloud infrastructure.
Co-authored-by: Joe Schoonover <joe@fluidnumerics.com>
|
|
current env (#26654)
|
|
|
|
With this commit, stacktraces of subprocesses are shown only if debug mode is active.
|
|
|
|
|
|
* Update cray architecture detection for milan
Update the Cray architecture module table with x86-milan -> zen3.
Make Cray architecture detection more robust by backing off from the
frontend architecture to a recent ancestor if necessary. This should
make future Cray updates less painful for users.
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Co-authored-by: Todd Gamblin <gamblin2@llnl.gov>
|
|
1. Don't use 16 digits of precision for the seconds; round to 2 decimal places
2. Don't print if we don't concretize (i.e. `spack concretize` without `-f` doesn't have to tell me it did nothing in `0.00` seconds)
|
|
* Speed-up environment concretization with a process pool
We can exploit the fact that specs in an environment are concretized
separately and use a pool of processes to concretize them.
* Add module spack.util.parallel
Module includes `pool` and `parallel_map` abstractions,
along with implementation details for both.
* Add a new hash type to pass specs across processes
* Add tty msg with concretization time
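The core idea can be sketched with the standard library; the real abstractions live in `spack.util.parallel`, and `concretize_one` below is a hypothetical stand-in for the per-spec work:
```python
import multiprocessing


def concretize_one(abstract_spec):
    """Stand-in for the real per-spec concretization (hypothetical)."""
    return abstract_spec  # the real worker would return a concrete spec


def concretize_environment(abstract_specs, processes=None):
    """Concretize each root spec of an environment in a separate process."""
    with multiprocessing.Pool(processes) as pool:
        return pool.map(concretize_one, abstract_specs)
```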
|
|
|
|
- [x] Stage already concretized specs instead of abstract ones
- [x] Reduce number of network calls by reading naughty list up front
|
|
We use POSIX `patch` to apply patches to files when building, but
`patch` by default prompts the user when it looks like a patch
has already been applied. This means that:
1. If a patch lands in upstream and we don't disable it
in a package, the build will start failing.
2. `spack develop` builds (which keep the stage around) will
fail the second time you try to use them.
To avoid that, we can run `patch` with `-N` (also called
`--forward`, but the long option is not in POSIX). `-N` causes
`patch` to just ignore patches that have already been applied.
This *almost* makes `patch` idempotent, except that it returns 1
when it detects already applied patches with `-N`, so we have to
look at the output of the command to see if it's safe to ignore
the error.
- [x] Remove non-POSIX `-s` option from `patch` call
- [x] Add `-N` option to `patch`
- [x] Ignore error status when `patch` returns 1 due to `-N`
- [x] Add tests for applying a patch twice and applying a bad patch
- [x] Tweak `spack.util.executable` so that it saves the error that
*would have been* raised with `fail_on_error=True`. This lets
us easily re-raise it.
Co-authored-by: Greg Becker <becker33@llnl.gov>
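A self-contained sketch of the behavior described above, using `subprocess` rather than Spack's `Executable` wrapper; the matched message text is an assumption about GNU `patch` output, not something the commit specifies:
```python
import subprocess


def apply_patch(patch_file, working_dir, strip=1):
    """Run `patch -N` and treat 'already applied' as success."""
    proc = subprocess.run(
        ["patch", "-N", "-p{0}".format(strip), "-i", patch_file],
        cwd=working_dir, capture_output=True, text=True,
    )
    already_applied = proc.returncode == 1 and (
        "previously applied" in proc.stdout or "Skipping patch" in proc.stdout
    )
    if proc.returncode != 0 and not already_applied:
        raise RuntimeError(proc.stdout + proc.stderr)
```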
|
|
* relocate: call install_name_tool less
* zstd: fix race condition
Multiple times on my mac, trying to install in parallel led to failures
from multiple tasks trying to simultaneously create `$PREFIX/lib`.
* PackageMeta: simplify callback flush
* Relocate: use spack.platforms instead of platform
* Relocate: code improvements
* fix zstd
* Automatically fix rpaths for packages on macOS
* Only change library IDs when the path is already in the rpath
This restores the hardcoded library path for GCC.
* Delete nonexistent rpaths and add more testing
* Relocate: Allow @executable_path and @loader_path
|