We can speed up the computation by avoiding a
double loop in a cardinality constraint and
enforcing the rule instead as an integrity
constraint.
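As a hedged, generic illustration (not the actual rule from `concretize.lp`), an "at most one" requirement can be stated either as a cardinality constraint, which the grounder expands over the whole candidate set, or as a plain integrity constraint over pairs:

```python
# Generic sketch only -- the predicate names are made up, and the real rule
# in concretize.lp differs.  Loaded through the clingo Python interface:
import clingo

program = """
possible(a). possible(b). possible(c).
{ chosen(X) : possible(X) }.

% cardinality form would be:  :- not { chosen(X) } 1.
% integrity-constraint form, forbidding any two distinct picks:
:- chosen(X), chosen(Y), X < Y.
"""

ctl = clingo.Control(["0"])   # "0" enumerates all answer sets
ctl.add("base", [], program)
ctl.ground([("base", [])])
ctl.solve(on_model=print)     # every model has at most one chosen/1 atom
```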
|
|
Continuing the conversion of everything in `asp.py` into facts, this
change makes the generation of ground rules for conditional dependencies
use facts, and moves the semantics into `concretize.lp`.
This is probably the most complex logic in Spack, as dependencies can be
conditional on anything, and we need conditional ASP rules to accumulate
and map all the dependency conditions to spec attributes.
The logic looks complicated, but essentially it accumulates any
constraints associated with particular conditions into a fact associated
with the condition by id. Then, if *any* condition id's fact is True, we
trigger the dependency.
This simplifies the way `declared_dependency()` works -- the dependency
is now declared regardless of whether it is conditional, and the
conditions are handled by `dependency_condition()` facts.
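Here is a hedged sketch of that accumulation pattern; the predicate names
are illustrative stand-ins, not the exact ones in `concretize.lp`:

```python
# Illustrative only: condition ids group requirements, and any holding
# condition id triggers the declared dependency.
import clingo

program = """
% condition 1 makes "mpileaks" depend on "mpi"; its requirement is
% recorded against the same id
dependency_condition(1, "mpileaks", "mpi").
required(1, variant_set("mpileaks", "mpi")).

% a condition holds when every requirement recorded for its id holds
condition_holds(ID) :- dependency_condition(ID, _, _); holds(R) : required(ID, R).

% if *any* condition id for the declared dependency holds, trigger it
depends_on(P, D) :- dependency_condition(ID, P, D), condition_holds(ID).

% toy input that makes the condition true
holds(variant_set("mpileaks", "mpi")).
"""

ctl = clingo.Control()
ctl.add("base", [], program)
ctl.ground([("base", [])])
ctl.solve(on_model=print)  # model contains depends_on("mpileaks","mpi")
```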
|
|
There are currently no places where we do not want to traverse
dependencies in `spec_clauses()`, so simplify the logic by consolidating
`spec_traverse_clauses()` into `spec_clauses()`.
|
|
`version_satisfies/2` and `node_compiler_version_satisfies/3` are
generated but need `#defined` directives (e.g. `#defined version_satisfies/2.`)
to avoid "info: atom does not occur in any rule head" warnings.
|
|
* asp: memoize the list of all target_specs to speed up the setup phase
* asp: memoize using a cache per solver object
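A minimal sketch of the per-solver memoization pattern; the class and
helper names here are illustrative stand-ins, not the actual `asp.py` code:

```python
def _compute_target_specs():
    # stands in for the expensive enumeration of all known targets
    return ["x86_64", "skylake", "aarch64"]

class Solver:
    def __init__(self):
        # the cache lives on the solver instance, so each solver object
        # computes the list at most once and stays independent of others
        self._target_specs_cache = None

    def target_specs(self):
        if self._target_specs_cache is None:
            self._target_specs_cache = _compute_target_specs()
        return self._target_specs_cache
```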
|
|
This PR addresses a number of issues related to compiler bootstrapping.
Specifically:
1. Collect compilers to be bootstrapped while queueing in installer
Compiler tasks currently have an incomplete list in their task.dependents,
making those packages fail to install because they think not all of their
dependencies are installed. This PR collects the dependents and sets them on
compiler tasks.
2. Allow bootstrapped compilers to back off target
Bootstrapped compilers may be built with a compiler that doesn't support
the target used by the rest of the spec. Allow them to build with less
aggressive target optimization settings.
3. Support for target ranges
Backing off the target necessitates computing target ranges, so make Spack
handle those properly. Notably, this adds an intersection method for target
ranges (see the sketch after this list) and fixes the way ranges are
satisfied and constrained on Spec objects.
This PR also:
- adds testing
- improves concretizer handling of target ranges
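A toy sketch of the kind of range intersection involved, using plain
ordered endpoints where `None` means an open bound; Spack's actual target
ranges compare microarchitectures and are more involved:

```python
def intersect(a, b):
    """a and b are (lo, hi) pairs; None means unbounded on that side."""
    lo = a[0] if b[0] is None else b[0] if a[0] is None else max(a[0], b[0])
    hi = a[1] if b[1] is None else b[1] if a[1] is None else min(a[1], b[1])
    if lo is not None and hi is not None and lo > hi:
        return None  # empty intersection
    return (lo, hi)

assert intersect((2, None), (None, 5)) == (2, 5)
assert intersect((6, None), (None, 5)) is None
```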
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
|
|
refers #20292
Added a unit test that ensures we can reuse installed
packages even if variants have been removed from or
added to the package in the repository.
|
|
bug only relevant for python3
|
|
Currently, version range constraints, compiler version range constraints,
and target range constraints are implemented by generating ground rules
from `asp.py`, via `one_of_iff()`. The rules look like this:
```
version_satisfies("python", "2.6:") :- 1 { version("python", "2.4"); ... } 1.
1 { version("python", "2.4"); ... } 1 :- version_satisfies("python", "2.6:").
```
So, `version_satisfies(Package, Constraint)` is true if and only if the
package is assigned a version that satisfies the constraint. We
precompute the set of known versions that satisfy the constraint, and
generate the rule in `SpackSolverSetup`.
We shouldn't need to generate already-ground rules for this. Rather, we
should leave it to the grounder to do the grounding, and generate facts
so that the constraint semantics can be defined in `concretize.lp`.
We can replace rules like the ones above with facts like this:
```
version_satisfies("python", "2.6:", "2.4").
```
And ground them in `concretize.lp` with rules like this:
```
1 { version(Package, Version) : version_satisfies(Package, Constraint, Version) } 1
:- version_satisfies(Package, Constraint).
version_satisfies(Package, Constraint)
:- version(Package, Version), version_satisfies(Package, Constraint, Version).
```
The top rule is the same as before. It makes conditional dependencies and
other places where version constraints are used work properly. Note that
we do not need the cardinality constraint for the second rule -- we
already have rules saying there can be only one version assigned to a
package, so we can just infer `version_satisfies/2` from `version/2` and
`version_satisfies/3`.
This form is also safe for grounding -- if we used the original form we'd
have unsafe variables like `Constraint` and `Package` -- the original
form only really worked when specified as ground to begin with.
- [x] use facts instead of generating rules for package version constraints
- [x] use facts instead of generating rules for compiler version constraints
- [x] use facts instead of generating rules for target range constraints
- [x] remove `one_of_iff()` and `iff()` as they're no longer needed
|
|
I was keeping the old `clingo` driver code around in case we had to run
using the command line tool instead of through the Python interface.
So far, the command line is faster than running through Python, but I'm
working on fixing that. I found that if I do this:
```python
control = clingo.Control()
control.load("concretize.lp")
control.load("hdf5.lp") # code from spack solve --show asp hdf5
control.load("display.lp")
control.ground([("base", [])])
control.solve(...)
```
It's just as fast as the command line tool. So we can always generate the
code and load it manually if we need to -- we don't need two drivers for
clingo. Given that the python interface is also the only way to get unsat
cores, I think we pretty much have to use it.
So, I'm removing the old command line driver and other unused code. We
can dig it up again from the history if it is needed.
|
|
Track all the variant values mentioned when emitting constraints, validate
them, and emit a fact that allows them as possible values.
This modification ensures that open-ended variants (variants accepting any
string or any integer) are projected onto the finite set of values that are
relevant for this concretization.
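A hedged sketch of the projection idea; the predicate names follow the
`variant_possible_value`/`variant_value` facts mentioned elsewhere in this
log, but the values and rule shape here are made up:

```python
import clingo

program = """
% one fact per value mentioned in some constraint
variant_possible_value("pkg", "cxxstd", "11").
variant_possible_value("pkg", "cxxstd", "14").

% the solver may only pick among the values that were mentioned
1 { variant_value("pkg", "cxxstd", V) : variant_possible_value("pkg", "cxxstd", V) } 1.
"""

ctl = clingo.Control(["0"])
ctl.add("base", [], program)
ctl.ground([("base", [])])
ctl.solve(on_model=print)  # one model per allowed value
```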
|
|
Other parts of the concretizer code build up lists of things we can't
know without traversing all specs and packages, and they output these
lists at the very end.
The code that does this for variant values from spec literals was
intertwined with the code for traversing the input specs, so it only
covered the input specs and missed variant values that might come from
directives in packages.
- [x] move ad-hoc value handling code into spec_clauses so we do it in
one place for CLI and packages
- [x] move handling of `variant_possible_value`, etc. into
`concretize.lp`, where we can automatically infer variant existence
more concisely.
- [x] simplify/clarify some of the code for variants in `spec_clauses()`
|
|
Registering external versions among the lists of allowed ones
generates the correct rules for `version_satisfies`.
|
|
PR #15702 changed the invocation of the report context when installing
specs; do the same when building environments.
|
|
As was done in the old concretizer. Fixes an issue where conditionally
patched dependencies did not show up in the spec (e.g. gdal+jasper).
|
|
fixes #20055
Compilers with custom versions like gcc@foo are not currently
matched to the appropriate targets. This is because the
version of the spec doesn't match the "real" version of the
compiler.
This PR replicates the strategy used in the original
concretizer to deal with that, and tries to detect the real
version of compilers if the version in the spec returns no
results.
|
|
When all versions were allowed, a `version_satisfies` rule was not
emitted, which caused conditional directives to fail.
|
|
fixes #20040
Matching compilers among nodes has been prioritized
in #20020. Selection of default variants has been
tuned in #20182. With this setup there is no need
for an ad-hoc rule for external packages; on the
contrary, it should be removed so that default
variant values are preferred over adding more
external nodes to the DAG.
|
|
refers #20040
Before this PR, optimization rules selected default
providers at a higher priority than default variants. Here we
swap this priority and consider variants that are forced by
any means (root spec or spec in a depends_on clause) the same
as if they had a default value.
This prevents the solver from avoiding expected configurations
just because they contain directives like:
depends_on('pkg+foo')
where `+foo` is not the default variant value for pkg.
|
|
fixes #19981
This commit adds support for target ranges in directives,
for instance:
conflicts('+foo', when='target=x86_64:,aarch64:')
If any target in a spec body is not a known target, the
following clause will be emitted when traversing the spec:
node_target_satisfies(Package, TargetConstraint)
A definition of the clause will then be printed at the end,
similarly to what is done for package and compiler versions.
|
|
fixes #20019
Before this modification, having a newer version of a node came
at a higher priority in the optimization than having matching
compilers. This could result in unexpected configurations for
packages with conflict directives on compilers of the type:
conflicts('%gcc@X.Y:', when='@:A.B')
where changing the compiler for just that node is preferred to
lowering the node version to less than 'A.B'. Now the priority has
been switched so the solver will try to lower the version of the
nodes in question before changing their compiler.
|
|
(#20082)
refers #20079
Added docstrings to 'concretize' and 'concretized' to
document the format for tests.
Added tests for the activation of test dependencies.
|
|
refers #20040
This modification emits rules like:
provides_virtual("netlib-lapack","blas") :- variant_value("netlib-lapack","external-blas","False").
for packages that provide virtual dependencies conditionally,
instead of a fact that doesn't account for the condition.
|
|
Big Sur versions go 11.0, 11.0.1, 11.1 (vs. prior versions that
only used the minor component)
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
|
|
* create HipPackage base class and do some refactoring
* add comments and a conflict to raja for openmp with hip
|
|
Follow-up to #17110
### Before
```bash
CC=/Users/Adam/spack/lib/spack/env/clang/clang; export CC
SPACK_CC=/usr/bin/clang; export SPACK_CC
PATH=...:/Users/Adam/spack/lib/spack/env/apple-clang:/Users/Adam/spack/lib/spack/env/case-insensitive:/Users/Adam/spack/lib/spack/env:...; export PATH
```
### After
```bash
CC=/Users/Adam/spack/lib/spack/env/clang/clang; export CC
SPACK_CC=/usr/bin/clang; export SPACK_CC
PATH=...:/Users/Adam/spack/lib/spack/env/clang:/Users/Adam/spack/lib/spack/env/case-insensitive:/Users/Adam/spack/lib/spack/env:...; export PATH
```
`CC` and `SPACK_CC` were being set correctly, but `PATH` was using the name of the compiler `apple-clang` instead of `clang`. For most packages nothing broke, since `CC` was set correctly. But packages whose Makefiles set `CC` based on `which clang` were using the system compilers instead of the compiler wrappers. Discovered when working on `py-xgboost@0.90`.
An alternative fix would be to copy the symlinks in `env/clang` to `env/apple-clang`. Let me know if you think there's a better way to do this, or to test this.
|
|
The fixture was introduced in #19690, maybe accidentally.
It's not used in unit tests, and though it should be
mutable it seems an exact copy of its immutable version.
|
|
Before this change, in pipeline environments where runners do not have access
to persistent shared file-system storage, the only way to pass buildcaches to
dependents in later stages was by using the "enable-artifacts-buildcache" flag
in the gitlab-ci section of the spack.yaml. This change supports a second
mechanism, named "temporary-storage-url-prefix", which can be provided instead
of the "enable-artifacts-buildcache" feature, but the two cannot be used at the
same time. If this prefix is provided (only "file://" and "s3://" urls are
supported), the gitlab "CI_PIPELINE_ID" will be appended to it to create a url
for a mirror where pipeline jobs will write buildcache entries for use by jobs
in subsequent stages. If this prefix is provided, a cleanup job will be
generated to run after all the rebuild jobs have finished; it will delete the
contents of the temporary mirror. To support this behavior a new mirror
sub-command has been added: "spack mirror destroy", which can take either a
mirror name or url.
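For illustration, the per-pipeline mirror url would be derived roughly like
this (variable names are made up; the real logic lives in Spack's CI
support code):

```python
import os

prefix = "s3://my-bucket/tmp-storage"  # the temporary-storage-url-prefix value
pipeline_id = os.environ.get("CI_PIPELINE_ID", "0")  # set by gitlab
mirror_url = "{0}/{1}".format(prefix, pipeline_id)
print(mirror_url)  # e.g. s3://my-bucket/tmp-storage/12345
```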
This change also fixes a bug in the generation of the "needs" list for each
job. Each job's "needs" list is supposed to contain only direct dependencies
for scheduling purposes, unless "enable-artifacts-buildcache" is specified;
only in that case are the needs lists supposed to contain all transitive
dependencies. This change fixes a bug that caused the needs lists to always
contain all transitive dependencies, regardless of whether or not
"enable-artifacts-buildcache" was specified.
|
|
Pipelines: DAG pruning
During the pipeline generation staging process we check each spec against all configured mirrors to determine whether it is up to date on any of the mirrors. By default, and with the --prune-dag argument to "spack ci generate", any spec already up to date on at least one remote mirror is omitted from the generated pipeline. To generate jobs for up-to-date specs instead of omitting them, use the --no-prune-dag argument.
To speed up the pipeline generation process, pass the --check-index-only argument. This will cause spack to check only remote buildcache indices and avoid directly fetching any spec.yaml files from mirrors. The drawback is that if the remote buildcache index is out of date, spec rebuild jobs may be scheduled unnecessarily.
This change removes the final-stage-rebuild-index block from the gitlab-ci section of spack.yaml. Rebuilding the buildcache index of the mirror specified in the spack.yaml is now the default, unless "rebuild-index: False" is set.
Spack assigns the generated rebuild-index job runner attributes from an optional new "service-job-attributes" block, which is also used as the source of runner attributes for another generated non-build job: a no-op job, which spack generates to avoid gitlab errors when DAG pruning results in empty pipelines.
|
|
The SPACK_PYTHON environment variable can be set to a python interpreter to be
used by the spack command. This allows the spack command itself to use a
consistent and separate interpreter from whatever python might be used for package
building.
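As a hedged sketch (the actual spack launcher handles this in shell, not
Python), an environment-variable-driven interpreter switch could look like:

```python
import os
import sys

# if SPACK_PYTHON names a different interpreter, re-exec under it
desired = os.environ.get("SPACK_PYTHON")
if desired and os.path.realpath(desired) != os.path.realpath(sys.executable):
    os.execv(desired, [desired] + sys.argv)
```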
|
|
Modifications:
- Make use of SpackCommand objects wherever possible
- Deduplicate code when possible
- Move cleaning of mirrors to fixtures
- Ensure mock configuration has a clear initialization order
|
|
`query()` calls `datetime.datetime.fromtimestamp` regardless of whether a
date query is being done. Guard this with an if statement to avoid the
unnecessary work.
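A minimal sketch of the guard, using an illustrative record layout rather
than Spack's actual database code:

```python
import datetime

def query(records, start_date=None, end_date=None):
    """Sketch: records are (name, unix_timestamp) pairs."""
    if start_date is None and end_date is None:
        # no date query: skip datetime.datetime.fromtimestamp entirely
        return [name for name, _ in records]
    results = []
    for name, timestamp in records:
        installed = datetime.datetime.fromtimestamp(timestamp)
        if start_date and installed < start_date:
            continue
        if end_date and installed > end_date:
            continue
        results.append(name)
    return results
```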