path: root/lib
2020-11-17  concretizer bugfix: *at most* one provider for any virtual  [Todd Gamblin, 1 file, -2/+2]
2020-11-17  concretizer: optimized for preferred virtuals before recent versions  [Todd Gamblin, 2 files, -5/+5]
2020-11-17  concretizer: handle compiler preferences with optimization  [Todd Gamblin, 4 files, -73/+126]
- [x] Add support for packages.yaml and command-line compiler preferences.
- [x] Rework compiler version propagation to use optimization rather than hard logic constraints.
2020-11-17  concretizer: deterministic order for asp output for better diffs  [Todd Gamblin, 1 file, -14/+14]
Technically the ASP output order does not matter, but it's hard to diff two different solve formulations unless we order it.
- [x] make sure ASP output is emitted in a deterministic order (by sorting all hash keys)
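As an illustration of the idea (a minimal sketch, not Spack's actual emitter; the dictionary layout and predicate names are made up), sorting every dictionary before writing facts makes two runs over the same input produce byte-identical ASP text:

```python
def emit_facts(facts_by_pkg):
    """Emit ASP facts in a deterministic order by sorting all dictionary keys."""
    lines = []
    for pkg in sorted(facts_by_pkg):                            # sort package names
        for predicate in sorted(facts_by_pkg[pkg]):             # sort predicate names
            for args in sorted(facts_by_pkg[pkg][predicate]):   # sort argument tuples
                terms = ", ".join([pkg] + [str(a) for a in args])
                lines.append("%s(%s)." % (predicate, terms))
    return "\n".join(lines)


print(emit_facts({"hdf5": {"version_declared": [("1.10.7",), ("1.8.22",)]}}))
```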
2020-11-17  concretizer: rename --dump to --show  [Todd Gamblin, 1 file, -8/+8]
2020-11-17  concretizer: handle package namespaces  [Todd Gamblin, 1 file, -0/+7]
2020-11-17  concretizer: handle constraints on dependencies, adjust optimization  [Todd Gamblin, 3 files, -10/+49]
This needs more thought, as I am pretty sure the weights are not correct. Or, at least, I'm not convinced that they do what we want in all cases. See note in concretize.lp.
2020-11-17  concretizer: handle dependency types  [Todd Gamblin, 5 files, -17/+42]
2020-11-17  concretizer: prioritize versions by package pref, newest, preferred, actual  [Todd Gamblin, 2 files, -2/+53]
The solver now prefers newer versions, like the old concretizer. Preferences are applied in order: package preferences from packages.yaml, then versions marked preferred=True, then the package definition, and finally each version itself.
2020-11-17  concretizer: Use "competition" output format to avoid extra parsing  [Todd Gamblin, 2 files, -18/+45]
Competition output only prints out one model, so we do not have to unnecessarily parse all the non-optimal models. We'll just look at the best model and bring that in. In practice, this saves a lot of JSON parsing and spec construction time.
2020-11-17  concretizer: handle virtual provider preferences from packages.yaml  [Todd Gamblin, 2 files, -11/+83]
2020-11-17  concretizer: use clingo json output instead of text  [Todd Gamblin, 2 files, -69/+85]
Clingo actually has an option to output JSON -- use that instead of parsing the raw output ourselves. This also allows us to pick the best answer -- modify the parser to *only* construct a spec for that one rather than building all of them like we did before.
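A hedged sketch of the approach from the Python side. It assumes `clingo` is on `PATH`, that `--outf=2` selects JSON output, and that the JSON document has the `Call`/`Witnesses`/`Value` layout current clingo releases emit; the logic program here is a toy, not Spack's concretize.lp.

```python
import json
import subprocess

program = "1 { version(a); version(b) } 1.  #minimize { 1 : version(b) }."

# Ask clingo for machine-readable JSON instead of its human-readable text output.
result = subprocess.run(
    ["clingo", "--outf=2", "-"], input=program, capture_output=True, text=True
)
data = json.loads(result.stdout)

# With optimization, each witness improves on the previous one,
# so the last witness of the last call is the best answer found.
witnesses = data["Call"][-1]["Witnesses"]
print("best model:", witnesses[-1]["Value"])
```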
2020-11-17  concretizer: require only one provider for any virtual in the DAG  [Todd Gamblin, 2 files, -4/+8]
2020-11-17  concretizer: handle variant defaults with optimization  [Todd Gamblin, 2 files, -9/+49]
- Instead of using default logic, handle variant defaults by minimizing the number of non-default variants in the solution (see the sketch below).
- This actually seems to be pretty fast, and it fixes the long-standing issue that writing `spack install hdf5 ^mpich` will fail if you don't specify hdf5+mpi. With optimization and allowing enums to be enumerated, the solver seems to be able to quickly discover that +mpi is the only way hdf5 can depend on mpich, and it forces the switch to be thrown.
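A self-contained sketch of the optimization trick using the clingo Python API. The predicate names are illustrative, not the real concretize.lp: each variant gets exactly one value via a choice rule, a hard constraint stands in for the `^mpich` request, and the `#minimize` statement makes non-default values cost 1, so defaults are kept wherever nothing forces them away.

```python
import clingo

PROGRAM = """
% hdf5's mpi variant defaults to "false"; zlib's shared variant defaults to "true".
variant_possible_value(hdf5, mpi, "true").
variant_possible_value(hdf5, mpi, "false").
variant_default_value(hdf5, mpi, "false").

variant_possible_value(zlib, shared, "true").
variant_possible_value(zlib, shared, "false").
variant_default_value(zlib, shared, "true").

% Exactly one value per (package, variant).
1 { variant_value(P, V, X) : variant_possible_value(P, V, X) } 1
    :- variant_possible_value(P, V, _).

% Stand-in for "hdf5 ^mpich": only +mpi can satisfy the request.
:- not variant_value(hdf5, mpi, "true").

% Soft preference for defaults: every non-default value costs 1, so zlib
% keeps shared="true" while hdf5 is still free to flip mpi for the hard constraint.
#minimize { 1,P,V : variant_value(P, V, X), not variant_default_value(P, V, X) }.
"""

ctl = clingo.Control()
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print(m.cost, [str(s) for s in m.symbols(shown=True)]))
```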
2020-11-17  concretizer: support conditional dependencies  [Todd Gamblin, 2 files, -16/+63]
2020-11-17  variants: allow MultiValuedVariants to be constructed incrementally  [Todd Gamblin, 2 files, -5/+18]
2020-11-17  concretizer: initial support for virtual dependencies  [Todd Gamblin, 2 files, -6/+18]
Add initial support for virtual dependencies. Solver now knows about all virtuals and can choose one to resolve a dependency.
2020-11-17  concretizer: print out virtuals  [Todd Gamblin, 2 files, -2/+12]
2020-11-17  concretizer: handle versions with choice construct rather than conflicts  [Todd Gamblin, 1 file, -40/+95]
Use '1 { version(x); version(y); version(z) } 1.' instead of declaring conflicts for non-matching versions. This keeps the sense of version clauses positive, which will allow them to be used more easily in conditionals later. Also refactor `spec_clauses()` method to return clauses that can be used in conditions, etc. instead of just printing out facts.
2020-11-17  concretizer: add another definition pragma.  [Todd Gamblin, 1 file, -0/+1]
- single_value_variant may not be defined by the generated program. Mark it to avoid warnings.
2020-11-17  concretizer: cleanup  [Todd Gamblin, 1 file, -14/+17]
2020-11-17  concretizer: use conditional literals for versions.  [Todd Gamblin, 2 files, -24/+31]
2020-11-17  concretizer: mark depends_on/2 defined for solves without dependencies.  [Todd Gamblin, 1 file, -0/+3]
2020-11-17  concretizer: add basic semantics for compilers  [Todd Gamblin, 3 files, -7/+112]
- This handles setting the compiler and falling back to a default compiler, as well as providing default values for compilers/compiler versions.
- Versions still aren't quite right -- you can't properly override versions on compiler specs.
2020-11-17  concretizer: simplify and move architecture semantics into concretize.lp  [Todd Gamblin, 2 files, -15/+41]
- Model architecture default settings and propagation off of variants
- Leverage ASP default logic to set architecture to default if it's not set otherwise.
- Move logic out of Python and into concretize.lp as first-order rules.
2020-11-17  concretizer: break output up into easier-to-understand sections  [Todd Gamblin, 3 files, -22/+37]
2020-11-17  concretizer: simplify and suppress warnings for variant handling  [Todd Gamblin, 2 files, -8/+19]
We are relying on default logic in the variant handling: we set a default value if we never see `variant_set(P, V, X)`.
- Move the logic for this into `concretize.lp` instead of generating it for every package.
- For programs that don't have explicit variant settings, clingo warns that `variant_set(P, V, X)` doesn't appear in any rule head, because a setting is never generated.
- Specifically suppress this warning (see the sketch below).
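A compact illustration of both pieces, again with made-up predicate names rather than the real program: a negation-as-failure rule supplies the default whenever no `variant_set` fact was generated, and a `#defined` directive tells clingo the predicate may legitimately never appear in a rule head, which silences the warning.

```python
import clingo

PROGRAM = """
#defined variant_set/3.  % variant_set may be absent entirely; don't warn about it

variant_default_value(hdf5, mpi, "false").

% Use an explicit setting if one exists, otherwise fall back to the default.
variant_value(P, V, X) :- variant_set(P, V, X).
variant_value(P, V, X) :- variant_default_value(P, V, X), not variant_set(P, V, _).
"""

ctl = clingo.Control()
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
# The model contains variant_value(hdf5,mpi,"false") because no variant_set fact exists.
ctl.solve(on_model=lambda m: print([str(s) for s in m.symbols(shown=True)]))
```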
2020-11-17  concretizer: split long lines in ASP programs  [Todd Gamblin, 1 file, -1/+8]
2020-11-17  concretizer: split main logic program out into files  [Todd Gamblin, 3 files, -56/+57]
- Add `concretize.lp` and `display.lp` as independent files
- Dump them instead of embedded strings
2020-11-17  concretizer: colorize ASP output  [Todd Gamblin, 1 file, -7/+46]
2020-11-17  concretizer: move dump logic into solver.asp  [Todd Gamblin, 2 files, -21/+25]
- Moving the dump logic into spack.solver.asp.solve() allows us to print out useful debug info sooner.
- The prior approach required a successful solve to print out anything.
2020-11-17  concretizer: first rudimentary round-trip with asp-based solver  [Todd Gamblin, 3 files, -179/+382]
2020-11-17  concretizer: add rudimentary variants with defaults to ASP solve  [Todd Gamblin, 1 file, -0/+30]
2020-11-17  concretizer: beginnings of solve() command  [Todd Gamblin, 4 files, -0/+338]
- `spack solve` command outputs a really basic ASP program that handles unconditional dependencies, architecture and versions
- doesn't yet handle conflicts, picking latest versions, preferred versions, compilers, etc.
- doesn't handle variants
2020-11-17  repo: Add all_package_classes() method.  [Todd Gamblin, 2 files, -0/+26]
- We were able to get names and instances previously
- Add a convenience function to get package classes
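A sketch of how the new convenience method might be used; where it is exposed is an assumption here (shown on the global repo path, mirroring the existing name/instance accessors):

```python
import spack.repo

# Iterate over package *classes* rather than names or instances, e.g. to inspect
# class-level attributes without instantiating a package for a concrete spec.
for pkg_cls in spack.repo.path.all_package_classes():
    maintainers = getattr(pkg_cls, "maintainers", [])
    if maintainers:
        print(pkg_cls.__name__, maintainers)
```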
2020-11-17  include share/pkgconfig in user environments (#19909)  [Robert Underwood, 1 file, -0/+1]
According to the documentation for spack and pkg-config, $view/share/pkgconfig should also be a valid place to look for package config files. This commit ensures that when an environment is activated (`spack env activate`), the environment has this directory in PKG_CONFIG_PATH.
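In plain Python terms (the view path below is hypothetical, and Spack applies this through its own environment-modification machinery rather than touching os.environ directly), the activated environment needs something like:

```python
import os

view = "/path/to/myenv/.spack-env/view"  # hypothetical view root for illustration

# pkg-config searches PKG_CONFIG_PATH for .pc files. Some packages install them
# under share/pkgconfig instead of lib/pkgconfig, so include both locations.
pc_dirs = [os.path.join(view, prefix, "pkgconfig") for prefix in ("lib", "lib64", "share")]

existing = os.environ.get("PKG_CONFIG_PATH", "")
os.environ["PKG_CONFIG_PATH"] = os.pathsep.join(pc_dirs + ([existing] if existing else []))
print(os.environ["PKG_CONFIG_PATH"])
```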
2020-11-17  Support parallel environment builds (#18131)  [Tamara Dahlgren, 16 files, -556/+1200]
As of #13100, Spack installs the dependencies of a _single_ spec in parallel. Environments, when installed, can only get parallelism from each individual spec, as they're installed in order. This PR makes entire environments build in parallel by extending Spack's package installer to accept multiple root specs. The install command and Environment class have been updated to use the new parallel install method.

The specs and kwargs for each *uninstalled* package (when not force-replacing installations) of an environment are collected, passed to the `PackageInstaller`, and processed using a single build queue (see the conceptual sketch after this entry).

This introduces a `BuildRequest` class to track install arguments, and it significantly cleans up the code used to track package ids during installation. Package ids in the build queue are now just DAG hashes, as you would expect.

Other tasks:
- [x] Finish updating the unit tests based on `PackageInstaller`'s use of `BuildRequest` and the associated changes
- [x] Change `environment.py`'s `install_all` to use the `PackageInstaller` directly
- [x] Change the `install` command to leverage the new installation process for multiple specs
- [x] Change install output messages for external packages, e.g.: `[+] /usr` -> `[+] /usr (external bzip2-1.0.8-<dag-hash>)`
- [x] Fix incomplete environment install's view setup/update and not confirming all packages are installed (?)
- [x] Ensure externally installed package dependencies are properly accounted for in remaining build tasks
- [x] Add tests for coverage (if insufficient and can identify the appropriate, uncovered non-comment lines)
- [x] Add documentation
- [x] Resolve multi-compiler environment install issues
- [x] Fix issue with environment installation reporting (restore CDash/JUnit reports)
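A conceptual sketch of the single-queue idea, not Spack's `PackageInstaller`/`BuildRequest` API: the dependency edges of every root are merged into one queue keyed by stand-in DAG hashes, and a package is dequeued for building only once all of its dependencies are installed, so shared dependencies are built exactly once.

```python
from collections import deque

def single_build_queue(roots, installed=()):
    """Merge per-root dependency edges {pkg: set(deps)} into one build order.

    Package ids are stand-ins for DAG hashes; assumes an acyclic, fully
    specified DAG (every dependency appears as a key in some root).
    """
    deps = {}
    for edges in roots:
        for pkg, pkg_deps in edges.items():
            deps.setdefault(pkg, set()).update(pkg_deps)

    order, done = [], set(installed)
    queue = deque(sorted(deps))
    while queue:
        pkg = queue.popleft()
        if pkg in done:
            continue
        if deps[pkg] - done:       # a dependency is not installed yet
            queue.append(pkg)      # requeue; Spack instead tracks waiting build tasks
        else:
            order.append(pkg)
            done.add(pkg)
    return order

# Two environment roots share zlib; it appears in the build order only once.
roots = [{"hdf5": {"zlib"}, "zlib": set()}, {"libpng": {"zlib"}}]
print(single_build_queue(roots))
```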
2020-11-16  spack edit: accept readonly packages (#19949)  [Wouter Deconinck, 1 file, -1/+1]
2020-11-16  pipelines: support testing PRs from forks (#19248)  [Scott Wittenburg, 5 files, -108/+237]
This change makes improvements to the `spack ci rebuild` command which supports running gitlab pipelines on PRs from forks. Much of this has to do with making sure we can run without the secrets previously required for running gitlab pipelines (e.g. signing key, aws credentials, etc.). Specific improvements in this PR:

- Check if spack has precisely one signing key, and use that information as an additional constraint on whether or not we should attempt to sign the binary package we create. Also, if spack does not have at least one public key, add the install option "--no-check-signature".
- If we are running a pipeline without any profile or environment variables allowing us to push to S3, the pipeline could still successfully create a buildcache in the artifacts and move on. So just print a message and move on if pushing either the buildcache entry or cdash id file to the remote mirror fails.
- When we attempt to generate a package or gpg key index on an S3 mirror, and there is nothing to index, just print a warning and exit gracefully rather than throw an exception.
- Support the use of PR-specific mirrors for temporary binary pkg storage. This will be a quality-of-life improvement for developers, providing a place to store binaries over the lifetime of a PR, so that they must only wait for packages to rebuild from source when they push a new commit that makes it necessary.
- Replace the two-pass install with a single pass and the new option --require-full-hash-match. Doing this also removes the need to save a copy of the spack.yaml to be copied over the one spack rewrites in between the two spack install passes.
- Work around a mirror configuration issue caused by using spack.util.executable to do the package installation.
- Update pipeline trigger jobs for PRs from forks. Moving to PRs from forks relies on an external synchronization script pushing special branch names. Also, secrets will only live on the spack mirror project and must be propagated to the E4S project via variables on the trigger jobs. When this change is merged, pipelines will not run until we update the "Custom CI configuration path" in the Gitlab CI Settings, as the name of the file has changed to better reflect its purpose.
- The arg to MirrorCollection is used exclusively, so add the main remote mirror to it.
- Compute the full hash less frequently.
- Add tests covering the index generation error handling code.
2020-11-15  macOS: Big Sur reports as either 10.16 or 11.0 (#19900)  [Adam J. Stewart, 1 file, -0/+1]
2020-11-12  move sbang to unpadded install tree root (#19640)  [Greg Becker, 6 files, -82/+205]
Since #11598 sbang has been installed within the install_tree. This doesn’t play nicely with install_tree padding, since sbang can’t do its job if it is installed in a long path (this is the whole point of sbang).

This PR changes the padding specification. Instead of $padding inside paths, we now have a separate `padding:` field in the `install_tree` configuration. Previously, the `install_tree` looked like this:

```
/path/to/opt/spack_padding_padding_padding_padding_padding/
    bin/
        sbang
    .spack-db/
        ...
    linux-rhel7-x86_64/
        ...
```

This PR updates things to look like this:

```
/path/to/opt/
    bin/
        sbang
    spack_padding_padding_padding_padding_padding/
        .spack-db/
            ...
        linux-rhel7-x86_64/
            ...
```

So padding is added at the start of all install prefixes *within* the unpadded root. The database and all installations still go under the padded root. This ensures that `sbang` is in the shortest possible path while also allowing us to make long paths for relocatable binaries.
2020-11-12  Testing: ensure that all packages can be pickled (#19890)  [Peter Scheibel, 1 file, -0/+24]
As of #18205, all packages must be pickle-able to be installed by Spack. This adds a test to check that each package can be pickled. If any package fails to pickle, the test keeps going and collects the names of all failed packages; it then takes the first one that failed and attempts to re-pickle it, generating the full stack trace for the failed pickle attempt.
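The shape of that test, in a stripped-down form: the real version iterates Spack's packages, while this sketch uses stand-in objects (a lambda is a classic unpicklable value).

```python
import pickle

def check_all_picklable(named_objects):
    """Pickle every object; collect the names that fail, then re-pickle the
    first failure so one full stack trace is produced for the report."""
    failed = []
    for name, obj in named_objects:
        try:
            pickle.dumps(obj)
        except Exception:
            failed.append((name, obj))

    if failed:
        print("failed to pickle:", [name for name, _ in failed])
        pickle.dumps(failed[0][1])  # raises again, surfacing the full traceback

check_all_picklable([("dict-package", {"version": "1.0"}), ("lambda-package", lambda: None)])
```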
2020-11-12  MavenPackage: allow additional build args (#19676)  [Adam J. Stewart, 2 files, -2/+24]
2020-11-12  macos: update build process to use spawn instead of fork (#18205)  [Peter Scheibel, 25 files, -557/+1041]
Spack creates a separate process to do package installation. Different operating systems and Python versions use different methods to create it, but up until Python 3.8 both Linux and Mac OS used "fork" (which duplicates process memory, file descriptor table, etc.). Python >= 3.8 on Mac OS prefers creating an entirely new process (referred to as the "spawn" start method) because "fork" was found to cause issues (in other words, "spawn" is the default start method used by multiprocessing.Process). Spack was dependent on the particular behavior of fork to replicate process memory and transmit file descriptors.

This PR refactors the Spack internals to support starting a child process with the "spawn" method. To achieve this, it makes the following changes:

- ensure that the package repository and other global state are transmitted to the child process
- ensure that file descriptors are transmitted to the child process in a way that works with multiprocessing and spawn
- make all the state needed for the build process and tests picklable (package, stage, etc.)
- move a number of locally-defined functions into global scope so that they can be pickled
- rework tests where needed to avoid using local functions

This PR also reworks sbang tests to work on macOS, where temporary directories are deeper than the Linux sbang limit. We make the limit platform-dependent (macOS supports 512-character shebangs). See: #14102
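A standard-library illustration of the constraint the "spawn" method imposes (this is not Spack code): the child starts from a fresh interpreter, so the target must be an importable module-level function and everything it needs must be picklable and passed explicitly, instead of arriving for free through fork's copied memory.

```python
import multiprocessing as mp

def build(pkg_name, settings):
    # Under "spawn" this runs in a brand-new interpreter: nothing from the
    # parent's memory is inherited; only pickled arguments make it across.
    print("building", pkg_name, "with", settings)

if __name__ == "__main__":
    ctx = mp.get_context("spawn")  # the default start method on macOS since Python 3.8
    p = ctx.Process(target=build, args=("hdf5", {"jobs": 4}))
    p.start()
    p.join()
```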
2020-11-12  Pipelines: Compare target family instead of architecture (#19884)  [Scott Wittenburg, 2 files, -1/+18]
In compiler bootstrapping pipelines, we add an artificial dependency between jobs for packages to be built with a bootstrapped compiler and the job building the compiler. To find the right bootstrapped compiler for each spec, we compared not only the compiler spec to that required by the package spec, but also the architectures of the compiler and package spec. But this prevented us from finding the bootstrapped compiler for a spec in cases where the architecture of the compiler wasn't exactly the same as the spec. For example, a gcc@4.8.5 might have bootstrapped a compiler with haswell as the architecture, while the spec had broadwell. By comparing the families instead of the architecture itself, we know that we can build the zlib for broadwell with the gcc for haswell.
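A hedged sketch of the family comparison using the archspec library (assuming its `archspec.cpu.TARGETS` dictionary and the `family` attribute on microarchitecture objects):

```python
import archspec.cpu

compiler_target = archspec.cpu.TARGETS["haswell"]    # what the bootstrapped gcc was built for
spec_target = archspec.cpu.TARGETS["broadwell"]      # what the package spec asks for

# Exact comparison is too strict: haswell != broadwell.
print(compiler_target == spec_target)
# Family-level comparison is what the pipeline needs: both are x86_64.
print(compiler_target.family == spec_target.family)
```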
2020-11-11  Keep output machine readable using `spack find --format` in an env (#19698)  [Greg Becker, 2 files, -2/+4]
Currently, full JSON output is the only machine readable option for `spack find` in an environment. `spack find --format` is also designed to be machine readable, but we print extra headers in environments.
- [x] don't print headers in `spack find` output when in an environment
2020-11-11  fix typo wrt target=graviton (#19865)  [Satish Balay, 2 files, -2/+2]
* fix typo wrt target=graviton; this fixes the spack build on aarch64 boxes
* update archspec hash
2020-11-11  spack env deactivate/spack unload: demote warning message to debug message (#19864)  [Massimiliano Culpo, 1 file, -4/+4]
2020-11-11  Restore `spack checksum` verbosity (#19480)  [Adam J. Stewart, 1 file, -6/+6]
2020-11-10  Binary caching: fix buildcache list (multiple invocations) (#19848)  [Peter Scheibel, 1 file, -12/+7]
When invoking "buildcache list" multiple times, the command was reporting no specs in the cache the second time around. The presence of an up-to-date index was causing the internal representation to be left uninitialized.