Weight microarchitectures and prefer more recent ones. Also disallow
nodes where the compiler does not support the selected target.
We should revisit this at some point, as it seems that if I play around
with the compiler support for different architectures, the solver runs
very slowly. See notes in comments -- the bad case was gcc supporting
broadwell and skylake with clang maxing out at haswell.
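A minimal sketch of the idea, using hypothetical fact and rule names rather than Spack's actual encoding: newer targets get lower weights, unsupported targets are ruled out, and the total chosen weight is minimized.
```python
# Hypothetical encoding (not Spack's exact one). Newer targets get lower
# weights; the integrity constraint removes targets the compiler can't emit.
targets = ["skylake", "broadwell", "haswell"]  # newest first

facts = "\n".join(
    'target_weight("%s", %d).' % (t, w) for w, t in enumerate(targets)
)

rules = """
node("hdf5").
node_compiler("hdf5", "clang").
compiler_supports_target("clang", "haswell").

% exactly one target per node
1 { node_target(P, T) : target_weight(T, _) } 1 :- node(P).

% disallow targets the node's compiler cannot generate code for
:- node_target(P, T), node_compiler(P, C), not compiler_supports_target(C, T).

% prefer more recent targets (lower weight)
#minimize { W,P : node_target(P, T), target_weight(T, W) }.
"""
```
In this toy instance, clang's ceiling of haswell forces the haswell target even though skylake has the better weight.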
|
We didn't have a cardinality constraint for multi-valued variants, so the
solver wasn't filling them in.
- [x] add a requirement for at least one value for multi-valued variants
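A minimal sketch of the added constraint, under an assumed encoding (fact names are illustrative): the lower bound of 1 forces the solver to choose at least one value for every variant with possible values.
```python
# Illustrative ASP fragment: a lower-bound-only choice rule requires at
# least one variant_value to be selected per (package, variant) pair.
asp = """
node("openblas").
variant("openblas", "cpu_target").
variant_possible_value("openblas", "cpu_target", "HASWELL").
variant_possible_value("openblas", "cpu_target", "SKYLAKEX").

1 { variant_value(P, V, X) : variant_possible_value(P, V, X) }
    :- node(P), variant(P, V).
"""
```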
|
Variants like `cpu_target` on `openblas` don't have defined values, but
they have a default. Ensure that the default is always a possible value
for the solver.
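A sketch of what the fact generation might look like; the function and fact names are placeholders, not Spack's actual code:
```python
# Always emit the default as a possible value, even when the package
# declares no explicit values for the variant.
def variant_facts(pkg, variant, values, default):
    for value in sorted(set(values) | {default}):
        yield 'variant_possible_value("%s", "%s", "%s").' % (pkg, variant, value)

print("\n".join(variant_facts("openblas", "cpu_target", [], "auto")))
# variant_possible_value("openblas", "cpu_target", "auto").
```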
|
Spack was generating the same dependency constraints twice in the output ASP:
```
declared_dependency("abinit", "hdf5", "link")
:- node("abinit"),
variant_value("abinit", "mpi", "True"),
variant_value("abinit", "mpi", "True").
```
This was because `AspFunction` was modifying itself when called.
- [x] fix `AspFunction` so that every call returns a new object
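A hedged reconstruction of the bug and the fix, simplified from the real class: if `__call__` mutates `self`, one template accumulates arguments across calls, which is how the duplicated body atoms above arose; returning a fresh object keeps every call independent.
```python
class AspFunction:
    def __init__(self, name, args=None):
        self.name = name
        self.args = tuple(args or ())

    def __call__(self, *args):
        # build a NEW function instead of appending to self.args
        return AspFunction(self.name, self.args + args)

    def __str__(self):
        return "%s(%s)" % (self.name, ", ".join('"%s"' % a for a in self.args))

node = AspFunction("node")
print(node("abinit"))  # node("abinit")
print(node("hdf5"))    # node("hdf5") -- unaffected by the previous call
```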
|
- [x] Add support for packages.yaml and command-line compiler preferences.
- [x] Rework compiler version propagation to use optimization rather than
hard logic constraints
|
Technically the ASP output order does not matter, but it's hard to diff
two different solve formulations unless the output is ordered.
- [x] make sure ASP output is emitted in a deterministic order (by
sorting all hash keys)
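The fix amounts to something like this sketch (names are illustrative): iterate keys in sorted order wherever facts are emitted.
```python
# Emit facts grouped by sorted key, so repeated runs of the same solve
# produce byte-identical output that diffs cleanly.
def emit_facts(out, facts_by_pkg):
    for pkg in sorted(facts_by_pkg):
        for fact in sorted(facts_by_pkg[pkg]):
            out.write(fact + "\n")
```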
|
This needs more thought, as I am pretty sure the weights are not correct.
Or, at least, I'm not convinced that they do what we want in all cases.
See note in concretize.lp.
|
The solver now prefers newer versions, like the old concretizer. Version
preferences are applied in order: packages.yaml settings, then
preferred=True flags, then the order in the package definition, and
finally the version numbers themselves.
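A sketch of how the layered preferences might be combined into weights (helper names are assumptions; lower weight means more preferred):
```python
def vkey(v):
    return tuple(int(p) for p in v.split("."))

def version_weights(declared, yaml_prefs, preferred):
    # earlier sources win; within declared versions, newer sorts first
    ordered = yaml_prefs + preferred + sorted(declared, key=vkey, reverse=True)
    weights = {}
    for v in ordered:
        weights.setdefault(v, len(weights))  # first occurrence wins
    return weights

print(version_weights(["1.8", "1.10", "1.12"], ["1.10"], []))
# {'1.10': 0, '1.12': 1, '1.8': 2}
```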
|
Competition output only prints out one model, so we do not have to
unnecessarily parse all the non-optimal models. We'll just look at the
best model and bring that in.
In practice, this saves a lot of JSON parsing and spec construction time.
|
Clingo actually has an option to output JSON -- use that instead of
parsing the raw output ourselves.
This also allows us to pick the best answer -- modify the parser to
*only* construct a spec for that one rather than building all of them
like we did before.
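A minimal sketch of the approach: `--outf=2` is clingo's JSON output mode, and with optimization the last witness of the last call is the best model found.
```python
import json
import subprocess

def best_model(*asp_files):
    # clingo exits nonzero for SAT/UNSAT results, so don't use check=True
    proc = subprocess.run(
        ["clingo", "--outf=2"] + list(asp_files),
        capture_output=True, text=True,
    )
    data = json.loads(proc.stdout)
    # only build a spec from the best model, not from every answer
    return data["Call"][-1]["Witnesses"][-1]["Value"]
```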
|
- Instead of using default logic, handle variant defaults by minimizing
the number of non-default variants in the solution.
- This actually seems to be pretty fast, and it fixes the long-standing
issue that writing this:
spack install hdf5 ^mpich
will fail if you don't specify hdf5+mpi. With optimization and
allowing enums to be enumerated, the solver seems to be able to quickly
discover that +mpi is the only way hdf5 can depend on mpich, and it
forces the switch to be thrown.
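A sketch of the optimization with assumed predicate names (this fragment presumes variant_value and variant_default_value facts are generated elsewhere): penalize each non-default variant and minimize the total, so defaults win unless something like `hdf5 ^mpich` forces a flip.
```python
# ASP fragment: count one penalty per variant not at its default value.
asp = """
variant_default_not_used(P, V)
    :- variant_value(P, V, X), not variant_default_value(P, V, X), node(P).

#minimize { 1,P,V : variant_default_not_used(P, V) }.
"""
```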
|
Add initial support for virtual dependencies. Solver now knows about all
virtuals and can choose one to resolve a dependency.
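Roughly, the encoding might look like this sketch (fact names are illustrative, not Spack's actual ones):
```python
# ASP fragment: choose exactly one provider for each needed virtual.
asp = """
virtual_needed("mpi").
provider("mpich", "mpi").
provider("openmpi", "mpi").

1 { provides(P, V) : provider(P, V) } 1 :- virtual_needed(V).
"""
```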
|
Use '1 { version(x); version(y); version(z) } 1.' instead of declaring
conflicts for non-matching versions. This keeps the sense of version
clauses positive, which will allow them to be used more easily in
conditionals later.
Also refactor `spec_clauses()` method to return clauses that can be used
in conditions, etc. instead of just printing out facts.
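A sketch of emitting the one-of rule (the helper is hypothetical):
```python
# Build a "choose exactly one version" rule instead of pairwise conflicts.
def one_of_versions(pkg, versions):
    atoms = "; ".join('version("%s", "%s")' % (pkg, v) for v in versions)
    return "1 { %s } 1." % atoms

print(one_of_versions("hdf5", ["1.12.0", "1.10.7", "1.8.22"]))
# 1 { version("hdf5", "1.12.0"); version("hdf5", "1.10.7"); version("hdf5", "1.8.22") } 1.
```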
|
- single_value_variant may not be defined by the generated program. Mark
it to avoid warnings.
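The suppression uses clingo's `#defined` directive, which marks an atom as possibly undefined; the arity shown here is an assumption:
```python
# Silences clingo's "atom does not occur in any rule head" info message.
asp = "#defined single_value_variant/2."
```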
|
- This handles setting the compiler and falling back to a default
compiler, as well as providing default values for compilers/compiler
versions.
- Versions still aren't quite right -- you can't properly override
versions on compiler specs.
|
- Model architecture default settings and propagation the same way as variants
- Leverage ASP default logic to set architecture to default if it's not
set otherwise.
- Move logic out of Python and into concretize.lp as first-order rules.
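A sketch of the default-logic pattern with assumed names; the auxiliary atom keeps the negated literal safe:
```python
# ASP fragment: use the explicit setting when present, otherwise fall back
# to the default target via negation as failure.
asp = """
node_target(P, T) :- node(P), node_target_set(P, T).

target_is_set(P)  :- node_target_set(P, _).
node_target(P, T) :- node(P), default_target(T), not target_is_set(P).
"""
```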
|
We are relying on default logic in the variant handling in that we set a
default value if we never see `variant_set(P, V, X)`.
- Move the logic for this into `concretize.lp` instead of generating it
for every package.
- For programs that don't have explicit variant settings, clingo warns
that variant_set(P, V, X) doesn't appear in any rule head, because a
setting is never generated.
- Specifically suppress this warning.
|
- Add `concretize.lp` and `display.lp` as independent files
- Dump them instead of embedded strings
|
- moving the dump logic into spack.solver.asp.solve() allows us to print
out useful debug info sooner
- the prior approach required a successful solve to print out anything.
|
- `spack solve` command outputs a really basic ASP program that handles
unconditional dependencies, architecture and versions
- doesn't yet handle conflicts, picking latest versions, preferred
versions, compilers, etc.
- doesn't handle variants
|
- We were able to get names and instances previously
- Add a convenience function to get package classes
|
Updated sha256sum of the downloaded .tgz
|
According to the documentation for spack and pkg-config,
`$view/share/pkgconfig` should also be a valid place to look
for package config files. This commit ensures that when
`spack env activate $dir` is called, the environment has this
directory in `PKG_CONFIG_PATH`.
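A sketch of the activation-time change (function name and path handling are illustrative):
```python
import os

def prepend_pkg_config_path(view):
    # pkg-config searches both lib/pkgconfig and share/pkgconfig
    dirs = [os.path.join(view, "lib", "pkgconfig"),
            os.path.join(view, "share", "pkgconfig")]
    existing = os.environ.get("PKG_CONFIG_PATH")
    if existing:
        dirs.append(existing)
    os.environ["PKG_CONFIG_PATH"] = os.pathsep.join(dirs)
```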
|
* clfft: Fix for aarch64
* clfft: Changed the patch application conditions.
* clfft: Changed patch file
|
* Metall: add version 0.2
* Add Metall v0.3
* Update Metall package to v0.4 and v0.5.
* Metall package: add v0.6
* Metall package: add v0.7
|
As of #13100, Spack installs the dependencies of a _single_ spec in parallel.
Environments, when installed, can only get parallelism from each individual
spec, as they're installed in order. This PR makes entire environments build
in parallel by extending Spack's package installer to accept multiple root
specs. The install command and Environment class have been updated to use
the new parallel install method.
The specs and kwargs for each *uninstalled* package (when not force-replacing
installations) of an environment are collected, passed to the `PackageInstaller`,
and processed using a single build queue.
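Roughly, the collection step looks like this sketch; apart from `PackageInstaller`, the names and the constructor signature are assumptions, not Spack's exact API:
```python
from spack.installer import PackageInstaller  # module path assumed

def install_all(env, **kwargs):
    # one build queue for every uninstalled package across all roots,
    # so independent packages from different roots build concurrently
    specs_and_kwargs = [
        (spec, kwargs) for spec in env.concrete_roots() if not spec.installed
    ]
    PackageInstaller(specs_and_kwargs).install()
```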
This introduces a `BuildRequest` class to track install arguments, and it
significantly cleans up the code used to track package ids during installation.
Package ids in the build queue are now just DAG hashes, as you would expect.
Other tasks:
- [x] Finish updating the unit tests based on `PackageInstaller`'s use of
`BuildRequest` and the associated changes
- [x] Change `environment.py`'s `install_all` to use the `PackageInstaller` directly
- [x] Change the `install` command to leverage the new installation process for multiple specs
- [x] Change install output messages for external packages, e.g.:
`[+] /usr` -> `[+] /usr (external bzip2-1.0.8-<dag-hash>)`
- [x] Fix incomplete environment install's view setup/update and not confirming all
packages are installed (?)
- [x] Ensure externally installed package dependencies are properly accounted for in
remaining build tasks
- [x] Add tests for coverage (if coverage is insufficient and the appropriate uncovered non-comment lines can be identified)
- [x] Add documentation
- [x] Resolve multi-compiler environment install issues
- [x] Fix issue with environment installation reporting (restore CDash/JUnit reports)
|
This change makes improvements to the `spack ci rebuild` command
which supports running GitLab pipelines on PRs from forks. Much
of this has to do with making sure we can run without the secrets
previously required for running GitLab pipelines (e.g. signing key,
AWS credentials, etc.). Specific improvements in this PR:
Check if spack has precisely one signing key, and use that information
as an additional constraint on whether or not we should attempt to sign
the binary package we create.
Also, if spack does not have at least one public key, add the install
option "--no-check-signature"
If we are running a pipeline without any profile or environment
variables allowing us to push to S3, the pipeline could still
successfully create a buildcache in the artifacts and move on. So
just print a message and move on if pushing either the buildcache
entry or cdash id file to the remote mirror fails.
When we attempt to generate a package or gpg key index on an S3
mirror, and there is nothing to index, just print a warning and
exit gracefully rather than throw an exception.
Support the use of PR-specific mirrors for temporary binary pkg
storage. This is a quality-of-life improvement for developers,
providing a place to store binaries over the lifetime of a PR, so
that they only need to wait for packages to rebuild from source
when a new commit makes it necessary.
Replace two-pass install with a single pass and the new option:
--require-full-hash-match. Doing this also removes the need to
save a copy of the spack.yaml to be copied over the one spack
rewrites in between the two spack install passes.
Work around a mirror configuration issue caused by using
spack.util.executable to do the package installation.
* Update pipeline trigger jobs for PRs from forks
Moving to PRs from forks relies on external synchronization script
pushing special branch names. Also secrets will only live on the
spack mirror project, and must be propagated to the E4S project via
variables on the trigger jobs.
When this change is merged, pipelines will not run until we update
the "Custom CI configuration path" in the Gitlab CI Settings, as the
name of the file has changed to better reflect its purpose.
* Arg to MirrorCollection is used exclusively, so add main remote mirror to it
* Compute full hash less frequently
* Add tests covering index generation error handling code
|
* [xerces-c] add netaccessor variant, new version
* [geant4] add xerces-c netaccessor requirement
* [xerces-c] format