path: root/lib
Age  Commit message  (Author; files changed, lines -deleted/+added)
2020-11-17  concretizer: simplify and suppress warnings for variant handling  (Todd Gamblin; 2 files, -8/+19)
We rely on default logic in the variant handling: we set a default value if we never see `variant_set(P, V, X)`.
- Move the logic for this into `concretize.lp` instead of generating it for every package.
- For packages that don't have explicit variant settings, clingo warns that `variant_set(P, V, X)` doesn't appear in any rule head, because such a setting is never generated.
- Specifically suppress this warning.
2020-11-17  concretizer: split long lines in ASP programs  (Todd Gamblin; 1 file, -1/+8)
2020-11-17  concretizer: split main logic program out into files  (Todd Gamblin; 3 files, -56/+57)
- Add `concretize.lp` and `display.lp` as independent files
- Dump them instead of embedded strings
2020-11-17  concretizer: colorize ASP output  (Todd Gamblin; 1 file, -7/+46)
2020-11-17  concretizer: move dump logic into solver.asp  (Todd Gamblin; 2 files, -21/+25)
- Moving the dump logic into `spack.solver.asp.solve()` allows us to print useful debug info sooner.
- The prior approach required a successful solve before printing anything.
2020-11-17  concretizer: first rudimentary round-trip with asp-based solver  (Todd Gamblin; 3 files, -179/+382)
2020-11-17  concretizer: add rudimentary variants with defaults to ASP solve  (Todd Gamblin; 1 file, -0/+30)
2020-11-17  concretizer: beginnings of solve() command  (Todd Gamblin; 4 files, -0/+338)
- The `spack solve` command outputs a really basic ASP program that handles unconditional dependencies, architectures, and versions.
- Doesn't yet handle conflicts, picking latest versions, preferred versions, compilers, etc.
- Doesn't handle variants.
2020-11-17  repo: Add all_package_classes() method  (Todd Gamblin; 2 files, -0/+26)
- We were previously able to get package names and instances.
- Add a convenience function to get package classes.
2020-11-17  include share/pkgconfig in user environments (#19909)  (Robert Underwood; 1 file, -0/+1)
According to the documentation for Spack and pkg-config, `$view/share/pkgconfig` should also be a valid place to look for package config files. This commit ensures that when `spack env activate $dir` is called, the environment has this directory in `PKG_CONFIG_PATH`.
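For illustration, a rough sketch of the path handling involved (the helper below is hypothetical and only models prepending a view's pkgconfig directories; it is not Spack's environment-modification API):

```python
import os

def prepend_pkg_config_path(view_root):
    """Prepend a view's pkgconfig directories to PKG_CONFIG_PATH (sketch)."""
    candidates = [
        os.path.join(view_root, "lib", "pkgconfig"),
        os.path.join(view_root, "share", "pkgconfig"),  # also valid per pkg-config docs
    ]
    parts = [d for d in candidates if os.path.isdir(d)]
    existing = os.environ.get("PKG_CONFIG_PATH")
    if existing:
        parts.append(existing)
    os.environ["PKG_CONFIG_PATH"] = os.pathsep.join(parts)
```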
2020-11-17  Support parallel environment builds (#18131)  (Tamara Dahlgren; 16 files, -556/+1200)
As of #13100, Spack installs the dependencies of a _single_ spec in parallel. Environments, when installed, can only get parallelism from each individual spec, as the specs themselves are installed in order. This PR makes entire environments build in parallel by extending Spack's package installer to accept multiple root specs. The install command and Environment class have been updated to use the new parallel install method.

The specs and kwargs for each *uninstalled* package of an environment (when not force-replacing installations) are collected, passed to the `PackageInstaller`, and processed using a single build queue. This introduces a `BuildRequest` class to track install arguments, and it significantly cleans up the code used to track package ids during installation. Package ids in the build queue are now just DAG hashes, as you would expect (a conceptual sketch of the single queue appears after the task list).

Other tasks:
- [x] Finish updating the unit tests based on `PackageInstaller`'s use of `BuildRequest` and the associated changes
- [x] Change `environment.py`'s `install_all` to use the `PackageInstaller` directly
- [x] Change the `install` command to leverage the new installation process for multiple specs
- [x] Change install output messages for external packages, e.g.: `[+] /usr` -> `[+] /usr (external bzip2-1.0.8-<dag-hash>)`
- [x] Fix incomplete environment installs' view setup/update and the failure to confirm that all packages are installed
- [x] Ensure externally installed package dependencies are properly accounted for in remaining build tasks
- [x] Add tests for coverage (if insufficient and we can identify the appropriate, uncovered non-comment lines)
- [x] Add documentation
- [x] Resolve multi-compiler environment install issues
- [x] Fix issue with environment installation reporting (restore CDash/JUnit reports)
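A conceptual sketch of merging multiple roots into one deduplicated queue; this is not the `PackageInstaller`/`BuildRequest` API, and `dag_hash()`/`dependencies()` are assumed stand-ins for whatever the real spec objects expose:

```python
def build_order(roots):
    """Merge several root specs into one deduplicated build queue (conceptual).

    Nodes are keyed by DAG hash, so a dependency shared by several
    environment roots is queued -- and therefore built -- only once.
    Dependencies are emitted before their dependents.
    """
    seen = set()
    order = []

    def visit(node):
        key = node.dag_hash()            # assumed accessor
        if key in seen:
            return
        seen.add(key)
        for dep in node.dependencies():  # assumed accessor
            visit(dep)
        order.append(node)

    for root in roots:
        visit(root)
    return order
```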
2020-11-16  spack edit: accept readonly packages (#19949)  (Wouter Deconinck; 1 file, -1/+1)
2020-11-16  pipelines: support testing PRs from forks (#19248)  (Scott Wittenburg; 5 files, -108/+237)
This change makes improvements to the `spack ci rebuild` command to support running gitlab pipelines on PRs from forks. Much of this has to do with making sure we can run without the secrets previously required for running gitlab pipelines (e.g. signing key, aws credentials, etc.). Specific improvements in this PR:

Check if spack has precisely one signing key, and use that information as an additional constraint on whether or not we should attempt to sign the binary package we create. Also, if spack does not have at least one public key, add the install option "--no-check-signature".

If we are running a pipeline without any profile or environment variables allowing us to push to S3, the pipeline could still successfully create a buildcache in the artifacts and move on. So just print a message and move on if pushing either the buildcache entry or cdash id file to the remote mirror fails.

When we attempt to generate a package or gpg key index on an S3 mirror and there is nothing to index, just print a warning and exit gracefully rather than throw an exception.

Support the use of PR-specific mirrors for temporary binary package storage. This is a quality-of-life improvement for developers, providing a place to store binaries over the lifetime of a PR, so that they only have to wait for packages to rebuild from source when they push a new commit that makes a rebuild necessary.

Replace the two-pass install with a single pass and the new option `--require-full-hash-match`. This also removes the need to save a copy of the spack.yaml to be copied over the one spack rewrites between the two spack install passes.

Work around a mirror configuration issue caused by using spack.util.executable to do the package installation.

* Update pipeline trigger jobs for PRs from forks. Moving to PRs from forks relies on an external synchronization script pushing special branch names. Also, secrets will only live on the spack mirror project and must be propagated to the E4S project via variables on the trigger jobs. When this change is merged, pipelines will not run until we update the "Custom CI configuration path" in the Gitlab CI settings, as the name of the file has changed to better reflect its purpose.
* The arg to MirrorCollection is used exclusively, so add the main remote mirror to it
* Compute the full hash less frequently
* Add tests covering index generation error handling code
2020-11-15  macOS: Big Sur reports as either 10.16 or 11.0 (#19900)  (Adam J. Stewart; 1 file, -0/+1)
2020-11-12  move sbang to unpadded install tree root (#19640)  (Greg Becker; 6 files, -82/+205)
Since #11598, sbang has been installed within the install_tree. This doesn't play nicely with install_tree padding, since sbang can't do its job if it is installed in a long path (that is the whole point of sbang).

This PR changes the padding specification. Instead of $padding inside paths, we now have a separate `padding:` field in the `install_tree` configuration. Previously, the `install_tree` looked like this:

```
/path/to/opt/spack_padding_padding_padding_padding_padding/
    bin/
        sbang
    .spack-db/
        ...
    linux-rhel7-x86_64/
        ...
```

This PR updates things to look like this:

```
/path/to/opt/
    bin/
        sbang
    spack_padding_padding_padding_padding_padding/
        .spack-db/
            ...
        linux-rhel7-x86_64/
            ...
```

So padding is added at the start of all install prefixes *within* the unpadded root. The database and all installations still go under the padded root. This ensures that `sbang` is in the shortest possible path while also allowing us to make long paths for relocatable binaries.
2020-11-12  Testing: ensure that all packages can be pickled (#19890)  (Peter Scheibel; 1 file, -0/+24)
As of #18205, all packages must be pickle-able to be installed by Spack. This adds a test to check that each package can be pickled. If any package fails to pickle, the test keeps going and collects the names of all failed packages; it then takes the first one that failed and attempts to re-pickle it, generating the full stack trace for the failed pickle attempt.
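A minimal sketch of that kind of check (conceptual only; `packages` stands in for whatever iterable of package objects the real test walks):

```python
import pickle

def check_picklable(packages):
    """Try to pickle every package object; report all failures at once."""
    failures = []
    for pkg in packages:
        try:
            pickle.dumps(pkg)
        except Exception as exc:
            failures.append((pkg, exc))
    if failures:
        # Re-pickle the first failure so the full stack trace is shown.
        pickle.dumps(failures[0][0])
    return failures
```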
2020-11-12  MavenPackage: allow additional build args (#19676)  (Adam J. Stewart; 2 files, -2/+24)
2020-11-12  macos: update build process to use spawn instead of fork (#18205)  (Peter Scheibel; 25 files, -557/+1041)
Spack creates a separate process to do package installation. Different operating systems and Python versions use different methods to create it, but up until Python 3.8 both Linux and macOS used "fork" (which duplicates process memory, file descriptor table, etc.). Python >= 3.8 on macOS prefers creating an entirely new process (referred to as the "spawn" start method) because "fork" was found to cause issues (in other words, "spawn" is now the default start method used by multiprocessing.Process on macOS). Spack was dependent on the particular behavior of fork to replicate process memory and transmit file descriptors.

This PR refactors the Spack internals to support starting a child process with the "spawn" method. To achieve this, it makes the following changes:

- ensure that the package repository and other global state are transmitted to the child process
- ensure that file descriptors are transmitted to the child process in a way that works with multiprocessing and spawn
- make all the state needed for the build process and tests picklable (package, stage, etc.)
- move a number of locally-defined functions into global scope so that they can be pickled
- rework tests where needed to avoid using local functions

This PR also reworks sbang tests to work on macOS, where temporary directories are deeper than the Linux sbang limit. We make the limit platform-dependent (macOS supports 512-character shebangs).

See: #14102
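A self-contained illustration of the constraint, using only the standard library (none of this is Spack code): with the "spawn" start method the child starts a fresh interpreter, so the target must be importable at module scope and its arguments picklable.

```python
import multiprocessing

def build(pkg_name, jobs):
    # With "spawn", this function is re-imported in a fresh interpreter,
    # so it must live at module scope and its args must be picklable.
    print("building", pkg_name, "with", jobs, "jobs")

if __name__ == "__main__":
    ctx = multiprocessing.get_context("spawn")
    proc = ctx.Process(target=build, args=("zlib", 4))
    proc.start()
    proc.join()
```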
2020-11-12  Pipelines: Compare target family instead of architecture (#19884)  (Scott Wittenburg; 2 files, -1/+18)
In compiler bootstrapping pipelines, we add an artificial dependency between jobs for packages to be built with a bootstrapped compiler and the job building the compiler. To find the right bootstrapped compiler for each spec, we compared not only the compiler spec to that required by the package spec, but also the architectures of the compiler and package spec. But this prevented us from finding the bootstrapped compiler for a spec in cases where the architecture of the compiler wasn't exactly the same as the spec. For example, a gcc@4.8.5 might have bootstrapped a compiler with haswell as the architecture, while the spec had broadwell. By comparing the families instead of the architecture itself, we know that we can build the zlib for broadwell with the gcc for haswell.
2020-11-11  Keep output machine readable using `spack find --format` in an env (#19698)  (Greg Becker; 2 files, -2/+4)
Currently, full JSON output is the only machine-readable option for `spack find` in an environment. `spack find --format` is also designed to be machine readable, but we print extra headers in environments.

- [x] don't print headers in `spack find` output when in an environment
2020-11-11  fix typo wrt target=graviton (#19865)  (Satish Balay; 2 files, -2/+2)
* fix typo wrt target=graviton: this fixes Spack builds on an aarch64 box
* update archspec hash
2020-11-11  spack env deactivate/spack unload: demote warning message to debug message (#19864)  (Massimiliano Culpo; 1 file, -4/+4)
2020-11-11  Restore `spack checksum` verbosity (#19480)  (Adam J. Stewart; 1 file, -6/+6)
2020-11-10  Binary caching: fix buildcache list (multiple invocations) (#19848)  (Peter Scheibel; 1 file, -12/+7)
When invoking "buildcache list" multiple times, the command was reporting no specs in the cache the second time around. The presence of an up-to-date index was causing the internal representation to be left un-initialized.
2020-11-09  tutorial cmd: fix gpg invocation (#19829)  (Greg Becker; 1 file, -1/+1)
2020-11-09  Fix minor typo in function comment (#19804)  (Adam J. Stewart; 1 file, -1/+1)
2020-11-09  [docs] getting_started.rst: fix typo (#19815)  (Emir İşman; 1 file, -1/+1)
2020-11-09  commands: add `spack tutorial` command (#19808)  (Todd Gamblin; 1 file, -0/+85)
Added a command to set up Spack for our tutorial at https://spack-tutorial.readthedocs.io. The command does some common operations we need first-time users to do. Specifically, it:
- checks out a particular branch of Spack
- deletes spurious configuration in `~/.spack` that might be left over from prior parts of the tutorial
- adds a mirror and trusts its public key
2020-11-05  Remove hardcoded version numbers from container logic (#19716)  (Greg Becker; 5 files, -74/+7)
Previously, we hardcoded a list of Spack versions which could be used by the containerize command. This PR removes that list: it was a maintenance burden when cutting a release, and it prevented older versions of Spack from creating containers to be used by newer versions.
2020-11-04  bug fix: Display error when curl is missing even in non-debug mode (#19695)  (Tamara Dahlgren; 2 files, -2/+32)
2020-11-03  documentation: fix formatting of code-block section (#19693)  (Shahzeb Siddiqui; 1 file, -1/+1)
2020-11-02  Bugfix - hashing: don't recompute full_hash or build_hash (#19672)  (Todd Gamblin; 1 file, -3/+40)
There was an error introduced in #19209 where `full_hash()` and `build_hash()` are called on older specs that we've read in from the DB; older specs may not be able to compute these hashes (e.g. if they have removed patches used in computing the full_hash). When serializing a Spec, we want to generate the full/build hash when possible, but we need a mechanism to skip it for Specs that have themselves been read from YAML (and may not support this). To get around this ambiguity and to fix the issue, we:

- Add an attribute to the spec called `_hashes_final`, that is `True` if we can't lazily compute `build_hash` and `full_hash`.
- Set `_hashes_final` to `False` for new specs (i.e., lazily computing hashes is ok).
- Set `_hashes_final` to `True` for concrete specs read in via `from_node_dict`, as it may be too late to recompute hashes.
- Compute and write out all hashes in `node_dict_with_hashes` *if possible*.

Effectively what this means is that we can round-trip specs that are missing `_build_hash` and `_full_hash` without recomputing them, but for all new specs, we'll compute them and store them. So Spack should work fine with old DBs now.
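A simplified model of the flag-gated lazy hashing described above (illustrative class and method names; not the actual `Spec` code):

```python
class SpecLike:
    """Toy model of hashes that are computed lazily unless finalized."""

    def __init__(self, hashes_final=False):
        # True for concrete specs read back from YAML/DB: never recompute.
        self._hashes_final = hashes_final
        self._build_hash = None
        self._full_hash = None

    def _compute(self, kind):
        return "<computed-%s>" % kind   # stand-in for real hashing

    def node_dict_with_hashes(self):
        """Write out whichever hashes exist, computing them only if allowed."""
        if not self._hashes_final:
            self._build_hash = self._build_hash or self._compute("build_hash")
            self._full_hash = self._full_hash or self._compute("full_hash")
        return {
            key: value
            for key, value in (("build_hash", self._build_hash),
                               ("full_hash", self._full_hash))
            if value is not None
        }
```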
2020-11-01  sbang: fixes for sbang relocation  (Todd Gamblin; 3 files, -9/+77)
This fixes sbang relocation when using old binary packages, and updates code in `relocate.py`. There are really two places where we would want to handle an `sbang` relocation:

1. Installing an old package that uses `sbang` with shebang lines like `#!/bin/bash $spack_prefix/sbang`
2. Installing a *new* package that uses `sbang` with shebang lines like `#!/bin/sh $install_tree/sbang`

The second case is actually handled automatically by our text relocation; we don't need any special relocation logic for new shebangs, as our relocation logic already changes references to the build-time `install_tree` to point to the `install_tree` at install time.

Case 1 was not properly handled -- we would not take an old binary package and point its shebangs at the new `sbang` location. This PR fixes that and updates the code in `relocate.py` with some notes.

There is one more case we don't currently handle: if a binary package is created from an installation in a short prefix that does *not* need `sbang` and is installed to a long prefix that *does* need `sbang`, we won't do anything. We should just patch the file as we would for a normal install. In some upcoming PR we should probably change *all* `sbang` relocation logic to be idempotent and to apply to any sort of shebang'd file. Then we'd only have to worry about which files to `sbang`-ify at install time and wouldn't need to care about these special cases.
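A rough sketch of the case-1 fix, i.e. pointing an old package's shebangs at the new `sbang` (the helper name and paths are made up; this is not the `relocate.py` code):

```python
import re

OLD_SBANG = re.compile(r"^#!\s*/bin/(?:ba)?sh\s+\S*/sbang\s*$")

def retarget_sbang(shebang_line, new_sbang):
    """If this looks like an old-style sbang shebang, e.g.
    `#!/bin/bash /old/spack/prefix/bin/sbang`, point it at the sbang
    living in the current install tree instead."""
    if OLD_SBANG.match(shebang_line.rstrip("\n")):
        return "#!/bin/sh %s\n" % new_sbang
    return shebang_line
```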
2020-10-30  Update documentation on containers (#19631)  (Massimiliano Culpo; 4 files, -207/+61)
fixes #15183
- Moved the container related content from workflows.rst into containers.rst
- Deleted the docker_for_developers.rst file, since it describes an outdated procedure

Co-authored-by: Axel Huebl <a.huebl@hzdr.de>
Co-authored-by: Omar Padron <omar.padron@kitware.com>
2020-10-30  Config: cache results of get_config (#19605)  (Massimiliano Culpo; 4 files, -12/+49)
`config.get_config` now caches its results and returns the same configuration if called multiple times with the same arguments (i.e. the same section and scope). As a consequence, users are expected to always call the update methods provided in the `config` module after changing the configuration (even if manipulating it as a Python nested dictionary). The following two examples should cover most scenarios:

* Most configuration update logic in the core (e.g. relating to adding a new compiler) should call `Configuration.update_config`.
* Tests that need to change the global configuration should use the newly-provided `config.replace_config` function.

(If neither of these methods applies, then the essential requirement is to use a method marked as `_config_mutator`.) Failure to call such a function after modifying the configuration will lead to unexpected results (e.g. calling `get_config` after changing the configuration will not reflect the changes, since the first call to `get_config` cached the old values).
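A small sketch of the cache-plus-mutator pattern this describes (hypothetical store and function names; not the actual `spack.config` implementation):

```python
import functools

_cache = {}
_store = {}  # stand-in for configuration files on disk

def _read(section, scope):
    return dict(_store.get((section, scope), {}))

def get_config(section, scope=None):
    """Return the (possibly cached) configuration for a section/scope."""
    key = (section, scope)
    if key not in _cache:
        _cache[key] = _read(section, scope)
    return _cache[key]

def config_mutator(func):
    """Mark a function as modifying configuration: clear the cache after it runs."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        _cache.clear()
        return result
    return wrapper

@config_mutator
def update_config(section, data, scope=None):
    _store[(section, scope)] = dict(data)
```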
2020-10-30  Make archspec a vendored dependency (#19600)  (Massimiliano Culpo; 50 files, -1565/+1887)
- Added archspec to the list of vendored dependencies
- Removed every reference to llnl.util.cpu
- Removed tests from Spack code base
2020-10-30  Binary caching: use full hashes (#19209)  (Scott Wittenburg; 16 files, -158/+795)
* "spack install" now has a "--require-full-hash-match" option, which forces Spack to skip an available binary package when the full hash doesn't match. Normally only a DAG-hash match is required, which ensures equivalent Specs, but does not account for changing logic inside the associated package.
* Add a local binary cache index which tracks specs that have a binary install available in a remote binary cache. It is updated with "spack buildcache list" or for a given spec when a binary package is retrieved for that Spec.
2020-10-29  CI: disable vermin check for deprecated hash (#19612)  (Peter Scheibel; 1 file, -1/+2)
Spack has a fallback for hash checking with md5sums that may not be supported in earlier versions of Python 3.x. The comments in the Spack code acknowledge that this is best effort and may fail, but recent vermin checks (running as part of our CI) reject it. This disables vermin checks for that fallback.
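The flagged fallback is presumably along these lines (a sketch assuming the issue is the `usedforsecurity` keyword, which only newer Python 3 accepts; not Spack's exact code):

```python
import hashlib

def md5_hasher():
    """Return an md5 hasher, best effort (sketch).

    On FIPS-restricted systems a plain md5() call can raise ValueError;
    the keyword-based fallback only exists on newer Python 3, which is
    exactly what a version-compatibility checker like vermin objects to.
    """
    try:
        return hashlib.md5()
    except ValueError:
        return hashlib.md5(usedforsecurity=False)  # novermin
```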
2020-10-29  Oneapi add compiler (#19330)  (Frank Willmore; 7 files, -3/+164)
* enable flatcc to be built with gcc/9.X.X
* add static option for building libyogrt
* cleanup
* Initial working version
* rework new oneapi wrappers
* tested and removed my initials from source
* cleanup
* Update __init__.py
* remove whitespace
* working now with mods for testing, detection. Detection for oneapi is working, but entry needs to be modified to add link path for libimf.so. Cleared cruft for old Intel versions
* fixed some formatting
* cleanup
* flake8 cleanup
* flake8
* fixed syntax of compiler version detection tests
* fixed syntax of compiler version detection tests (modified: detection.py)
* fix typo
* fixes for compilers tests
* remove erroneous tests for outdated -std= flags, remove ifx version check (output won't parse)

Co-authored-by: Frank Willmore <willmore@anl.gov>
2020-10-28  sbang: vendor sbang  (Todd Gamblin; 2 files, -66/+0)
`sbang` now lives at https://github.com/spack/sbang, and it has its own test suite that's more extensive than what's in Spack. We'll leave sbang tests to sbang from now on, and just vendor `bin/sbang` directly. Remaining `sbang` tests have to do with patching files, not with `sbang`'s functionality.

This update also fixes a bug with `sbang` and multiple command line arguments that was introduced in #19529. See:

* https://github.com/spack/sbang/pull/1
* https://github.com/spack/sbang/pull/2

- [x] include latest `sbang` from https://github.com/spack/sbang
- [x] remove old `sbang` tests from Spack
- [x] update `COPYRIGHT` and `cmd/license.py`
2020-10-27  sbang: convert sbang script to POSIX shell  (Todd Gamblin; 2 files, -7/+75)
`sbang` was previously a bash script but did not need to be. This converts it to a plain old POSIX shell script and adds some options. This also allows us to simplify sbang shebangs to `#!/bin/sh /path/to/sbang` instead of `#!/bin/bash /path/to/sbang`. The new script passes shellcheck (with a few exceptions noted in the file).

- [x] `SBANG_DEBUG` env var enables printing what *would* be executed
- [x] `sbang` checks whether it has been passed an option and fails gracefully
- [x] `sbang` will now fail if it can't find a second shebang line, or if the second line happens to be sbang (avoid infinite loops)
- [x] add more rigorous tests for `sbang` behavior using `SBANG_DEBUG`
2020-10-26  sbang: add support for php (#18299)  (Toyohisa Kameyama; 2 files, -0/+22)
PHP supports an initial shebang, but its comment syntax can't handle our 2-line shebangs. So, we need to embed the 2nd-line shebang comment to look like a PHP comment:

    <?php #!/path/to/php ?>

This adds patching support to the sbang hook and support for instrumenting php shebangs. This also patches `phar`, which is a tool used to create php packages. `phar` itself has to add sbangs to those packages (as phar archives apparently contain UTF-8 as well as binary blobs), and `phar` sets a checksum based on the contents of the package.

Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
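A sketch of the idea (the helper name and sbang path are made up): wrap the second shebang line in `<?php ... ?>` so PHP treats the embedded shebang as a comment inside a PHP block.

```python
def two_line_shebang(interpreter, sbang="/path/to/sbang"):
    """Return the two shebang lines this kind of patching produces (sketch)."""
    second = "#!" + interpreter
    if interpreter.endswith("php"):
        # Inside a PHP block, `#` starts a comment, so the embedded
        # shebang is harmless to the PHP parser.
        second = "<?php " + second + " ?>"
    return ["#!/bin/sh " + sbang, second]

print("\n".join(two_line_shebang("/usr/bin/php")))
```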
2020-10-26  sbang: put sbang in the install_tree (#11598)  (Patrick Gartung; 3 files, -15/+88)
`sbang` is not always accessible to users of packages, e.g., if Spack is installed in someone's home directory and they deploy software for others. Avoid this by:

1. Always installing the `sbang` script in the `install_tree`
2. Relocating binaries to point to the copy in the `install_tree` and not the one in the Spack installation.

This PR also:
- ensures that `sbang` is reinstalled if it is modified in Spack
- adds tests
- updates the way `gobject-introspection` patches Makefiles to support `sbang`

Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
2020-10-26  bugfix: test_push_and_fetch_keys should be skipped w/o gpg (#19511)  (Todd Gamblin; 1 file, -0/+2)
- [x] add a `@pytest.mark.skipif` decorator
2020-10-24  bugfix: fix config merge order for OrderedDicts (#18482)  (Todd Gamblin; 5 files, -30/+67)
The logic in `config.py` merges lists correctly so that list elements from higher-precedence config files come first, but the way we merge `dict` elements reverses the precedence. Since `mirrors.yaml` relies on `OrderedDict` for precedence, this bug causes mirrors in lower-precedence config scopes to be checked before higher-precedence scopes. We should probably convert `mirrors.yaml` to use a list at some point, but in the meantime here's a fix for `OrderedDict`.

- [x] Ensure that keys are ordered correctly in `OrderedDict` by re-inserting keys from the destination `dict` after adding the keys from the source `dict`.
- [x] Also simplify the logic in `merge_yaml` by always reinserting common keys -- this preserves mark information without all the special cases, and makes it simpler to preserve insertion order.

Assuming a default spack configuration, if we run this:

```console
$ spack mirror add foo https://bar.com
```

Results before this change:

```console
$ spack config blame mirrors
---
mirrors:
/Users/gamblin2/src/spack/etc/spack/defaults/mirrors.yaml:2   spack-public: https://spack-llnl-mirror.s3-us-west-2.amazonaws.com/
/Users/gamblin2/.spack/mirrors.yaml:2                         foo: https://bar.com
```

Results after:

```console
$ spack config blame mirrors
---
mirrors:
/Users/gamblin2/.spack/mirrors.yaml:2                         foo: https://bar.com
/Users/gamblin2/src/spack/etc/spack/defaults/mirrors.yaml:2   spack-public: https://spack-llnl-mirror.s3-us-west-2.amazonaws.com/
```
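Conceptually, the fix makes the merged `OrderedDict` iterate higher-precedence keys first. A minimal sketch of that invariant (not the actual `merge_yaml` code, which also preserves YAML mark information):

```python
from collections import OrderedDict

def merge_scopes(high, low):
    """Merge two config scopes; `high` wins on conflicts and its keys
    come first when iterating (e.g. mirrors are tried in that order)."""
    merged = OrderedDict(high)          # higher-precedence keys keep the front
    for key, value in low.items():
        merged.setdefault(key, value)   # lower-precedence keys append after
    return merged

user = OrderedDict([("foo", "https://bar.com")])
defaults = OrderedDict([("spack-public", "https://spack-llnl-mirror.s3-us-west-2.amazonaws.com/")])
print(list(merge_scopes(user, defaults)))   # ['foo', 'spack-public']
```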
2020-10-23  docs: update docs on shell support and using packages (#19486)  (Todd Gamblin; 3 files, -337/+344)
Shell integration no longer requires setting `SPACK_ROOT`, so we can simplify the documentation on it. The docs on shell support and using packages are getting a bit old, and information on `spack load` (which seems to be everyone's most common way of using packages) is hard to find. This PR simplifies the shell documentation to remove SPACK_ROOT, and also moves some sections around for clearer organization.

- [x] make docs on sourcing setup scripts clearer and simpler
- [x] introduce `spack load` early in the basic usage guide instead of burying it in the module docs
- [x] clean up module docs so that `spack module tcl loads` comes later
- [x] be clear about the different ways to use packages so that the users can find the docs better

Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2020-10-23  csh: don't require SPACK_ROOT for sourcing setup-env.csh (#18225)  (Todd Gamblin; 8 files, -80/+69)
Don't require SPACK_ROOT for sourcing setup-env.csh and make output more consistent
2020-10-22  Add scratch module roots to test configuration (#19477)  (Massimiliano Culpo; 4 files, -82/+23)
fixes #19476

Module file content is written to a file in a temporary location and read back to be analyzed by unit tests. The approach of patching "open" and writing to a StringIO in memory has been abandoned, since over time other operations that rely on the filesystem have been added to the module file generator.
2020-10-21  tests: increase tolerance of termios tests (#19456)  (Todd Gamblin; 1 file, -2/+6)
Synchronization on GitHub macOS runners seems to be very slow, and frequently the foreground/background tests fail due to the race this causes. This increases the tolerance for slowness a bit more, to allow up to 4 spurious output lines in the tests. This should hopefully result in no more false negatives on these tests for macOS on GitHub.
2020-10-21  Added _poll_lock exception tests (#19446)  (Tamara Dahlgren; 1 file, -0/+26)