Age | Commit message | Author | Files | Lines |
|
fixes #22565
This change enforces the uniqueness of the version_weight
atom per node(Package) in the DAG. It does so by applying
FTSE and adding an extra layer of indirection with the
possible_version_weight/2 atom.
Before this change it could happen that two different version_weight/2
atoms for the same node were in the answer set, each referring to a
different spec with the same version, and their weights would sum up.
This led to unexpected results, like preferring to build a new version
of an external if the external version was older.
|
|
* Make stage use concrete specs from environment
Same as in https://github.com/spack/spack/pull/21642, the idea is that
we want to easily stage a package that fails to build in a complex
environment. Instead of making the user create a spec by hand (basically
transforming all the rules in the environment manifest into a spec,
defying the purpose of the environment...), use the provided spec as a
filter for the already concretized specs. This also speeds things up,
since we don't have to reconcretize.
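As a rough illustration of the filtering idea (`all_specs()` and
`satisfies()` are assumed names here; the actual implementation may differ):
```python
# a minimal sketch, not the actual change: treat the user-provided abstract
# spec as a filter over the environment's already-concretized specs
def matching_env_specs(env, abstract_spec):
    # env.all_specs() is assumed to yield the concrete specs in the
    # environment; satisfies() matches each against the abstract query
    return [s for s in env.all_specs() if s.satisfies(abstract_spec)]
```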
|
|
* clingo: modify recipe for bootstrapping
Modifications:
- clingo builds with shared Python only if ^python+shared
- avoid building the clingo app for bootstrapping
- don't link to libpython when bootstrapping
* Remove option that breaks on linux
* Give more hints for the current Python
* Disable CLINGO_BUILD_PY_SHARED for bootstrapping
* bootstrapping: try to detect the current Python from the standard library
This is much faster than calling external executables (see the sketch
after this list)
* Fix compatibility with Python 2.6
* Give hints on which compiler and OS to use when bootstrapping
This change hints which compiler to use for bootstrapping clingo
(either GCC or Apple Clang on MacOS). On Cray platforms it also
hints to build for the frontend system, where software is meant
to be installed.
* Use spec_for_current_python to constrain module requirement
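A sketch of what the standard-library detection could look like (the real
`spec_for_current_python` may differ in detail):
```python
import sys

def spec_for_current_python():
    # sys.version_info is available without spawning a subprocess, which
    # is why this is much faster than probing external executables
    version = ".".join(str(v) for v in sys.version_info[:2])
    return "python@{0}".format(version)  # e.g. "python@3.8"
```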
|
|
* ASP-based solver: avoid adding values to variants when they're set
fixes #22533
fixes #21911
Added a rule that prevents any value from slipping into a variant when
the variant is set explicitly. This is relevant for multi-valued variants,
in particular for those that have disjoint sets of values.
* Ensure disjoint sets have a clear semantics for external packages
|
|
fixes #22547
SingleFileScope was not able to repopulate its cache before this
change. This was affecting the configuration seen by environments
using clingo bootstrapped from sources, since the bootstrapping
operation involved a few cache invalidations for config files.
|
|
This change accounts for platform-specific configuration scopes,
like ~/.spack/linux, during bootstrapping. These scopes were
previously not accounted for, which caused issues, e.g.,
when searching for compilers.
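A hypothetical sketch of the idea (paths and names are illustrative only):
```python
import os

def user_config_scopes(platform_name):
    # both ~/.spack and the platform-specific subdirectory, e.g.
    # ~/.spack/linux, must be considered during bootstrapping
    base = os.path.expanduser(os.path.join("~", ".spack"))
    return [base, os.path.join(base, platform_name)]

# user_config_scopes("linux") -> ['/home/me/.spack', '/home/me/.spack/linux']
```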
|
|
* Replace URL computation in base IntelOneApiPackage class with
defining URLs in component packages (this is expected to be
simpler for now)
* Add component_dir property that all oneAPI component packages must
define. This property names a directory that should exist after
installation completes (useful for making sure the install was
successful) and also defines the search location for the
component's environment update script (see the sketch after this list).
* Add needed dependencies for components (e.g. intel-oneapi-dnn
requires intel-oneapi-tbb). The compilers provided by
intel-oneapi-compilers need some components under certain
circumstances (e.g. when enabling SYCL support) but these were
omitted since the libraries should only be linked when a
dependent package requests that feature
* Remove individual setup_run_environment implementations and use
IntelOneApiPackage superclass method which sources vars.sh
(located in a subdirectory of component_dir)
* Add documentation for IntelOneApiPackage build system
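A hypothetical component recipe following these conventions (the package
name, URL, and checksum are illustrative, not a real recipe):
```python
from spack import *  # the usual packaging import

class IntelOneapiExample(IntelOneApiPackage):
    """Illustrative oneAPI component package."""

    # each component package now defines its own download URL
    url = "https://example.com/l_example_p_2021.1.0_offline.sh"

    # needed component dependencies are declared explicitly
    depends_on("intel-oneapi-tbb")

    version("2021.1.0", sha256="0" * 64, expand=False)  # placeholder checksum

    @property
    def component_dir(self):
        # directory expected to exist after installation; the superclass
        # sources vars.sh from a subdirectory of this path
        return "example"
```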
Co-authored-by: Vasily Danilin <vasily.danilin@yandex.ru>
|
|
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
|
|
A mitigation is added for a known issue that affects v3.0; see
https://developer.amd.com/wp-content/resources/AOCC-3.0-Install-Guide.pdf
|
|
Before this fix, `spack containerize` complains that `centos/7` is invalid
(should have been `centos:7`)
|
|
* unit tests: mark slow tests as "maybeslow"
This commit also removes the "network" marker and
marks every "network" test as "maybeslow". Tests
marked as db are maintained, but they're not slow
anymore.
* GA: require style tests to pass before running unit-tests
* GA: make MacOS unit tests fail fast
* GA: move all unit tests into the same workflow, run style tests as a prerequisite
All the unit tests have been moved into the same workflow so that a single
run of the dorny/paths-filter action can be used to ask for coverage based
on the files that have been changed in a PR. The basic idea is that coverage
is not necessary for PRs that only introduce changes to packages, which
results in faster execution of the tests.
Also, slow unit tests are skipped for package-only PRs.
Finally, MacOS and Linux unit tests are now conditional on style tests passing,
meaning that e.g. we won't waste a MacOS worker if we know that the PR has
flake8 issues.
* Addressed review comments
* Skipping slow tests on MacOS for package-only recipes
* QA: make tests on changes correct before merging
|
|
In most cases, we want condition_holds(ID) to imply any imposed
constraints associated with the ID. However, the dependency relationship
in Spack is special because it's "extra" conditional -- a dependency
*condition* may hold, but we have decided that externals will not have
dependencies, so we need a way to avoid having imposed constraints appear
for nodes that don't exist.
This introduces a new rule that says that constraints are imposed
*unless* we define `do_not_impose(ID)`. This allows rules like
dependencies, which rely on more than just spec conditions, to cancel
imposed constraints.
We add one special case for this: dependencies of externals.
|
|
We only consider test dependencies some of the time. Some packages are
*only* test dependencies. Spack's algorithm was previously generating
dependency conditions that could hold, *even* if there was no potential
dependency type.
- [x] change asp.py so that this can't happen -- we now only generate
dependency types for possible dependencies.
|
|
This builds on #20638 by unifying all the places in the concretizer where
things are conditional on specs. Previously, we duplicated a common spec
conditional pattern for dependencies, virtual providers, conflicts, and
externals. That was introduced in #20423 and refined in #20507, and
roughly looked as follows.
Given some directives in a package like:
```python
depends_on("foo@1.0+bar", when="@2.0+variant")
provides("mpi@2:", when="@1.9:")
```
We handled the `@2.0+variant` and `@1.9:` parts by generating
`dependency_condition()`, `required_dependency_condition()`, and
`imposed_dependency_condition()` facts to trigger rules like this:
```prolog
dependency_conditions_hold(ID, Parent, Dependency) :-
attr(Name, Arg1) : required_dependency_condition(ID, Name, Arg1);
attr(Name, Arg1, Arg2) : required_dependency_condition(ID, Name, Arg1, Arg2);
attr(Name, Arg1, Arg2, Arg3) : required_dependency_condition(ID, Name, Arg1, Arg2, Arg3);
dependency_condition(ID, Parent, Dependency);
node(Parent).
```
And we handled `foo@1.0+bar` and `mpi@2:` parts ("imposed constraints")
like this:
```prolog
attr(Name, Arg1, Arg2) :-
dependency_conditions_hold(ID, Package, Dependency),
imposed_dependency_condition(ID, Name, Arg1, Arg2).
attr(Name, Arg1, Arg2, Arg3) :-
dependency_conditions_hold(ID, Package, Dependency),
imposed_dependency_condition(ID, Name, Arg1, Arg2, Arg3).
```
These rules were repeated with different input predicates for
requirements (e.g., `required_dependency_condition`) and imposed
constraints (e.g., `imposed_dependency_condition`) throughout
`concretize.lp`. In #20638 it got to be a bit confusing, because we used
the same `dependency_condition_holds` predicate to impose constraints on
conditional dependencies and virtual providers. So, even though the
pattern was repeated, some of the conditional rules were conjoined in a
weird way.
Instead of repeating this pattern everywhere, we now have *one* set of
consolidated rules for conditions:
```prolog
condition_holds(ID) :-
condition(ID);
attr(Name, A1) : condition_requirement(ID, Name, A1);
attr(Name, A1, A2) : condition_requirement(ID, Name, A1, A2);
attr(Name, A1, A2, A3) : condition_requirement(ID, Name, A1, A2, A3).
attr(Name, A1) :- condition_holds(ID), imposed_constraint(ID, Name, A1).
attr(Name, A1, A2) :- condition_holds(ID), imposed_constraint(ID, Name, A1, A2).
attr(Name, A1, A2, A3) :- condition_holds(ID), imposed_constraint(ID, Name, A1, A2, A3).
```
This allows us to use `condition(ID)` and `condition_holds(ID)` to
encapsulate the conditional logic on specs in all the scenarios where we
need it. Instead of defining predicates for the requirements and imposed
constraints, we generate the condition inputs with generic facts, and
define predicates to associate the condition ID with a particular
scenario. So, now, the generated facts for a condition look like this:
```prolog
condition(121).
condition_requirement(121,"node","cairo").
condition_requirement(121,"variant_value","cairo","fc","True").
imposed_constraint(121,"version_satisfies","fontconfig","2.10.91:").
dependency_condition(121,"cairo","fontconfig").
dependency_type(121,"build").
dependency_type(121,"link").
```
The requirements and imposed constraints are generic, and we associate
them with their meaning via the id. Here, `dependency_condition(121,
"cairo", "fontconfig")` tells us that condition 121 has to do with the
dependency of `cairo` on `fontconfig`, and the conditional dependency
rules just become:
```prolog
dependency_holds(Package, Dependency, Type) :-
dependency_condition(ID, Package, Dependency),
dependency_type(ID, Type),
condition_holds(ID).
```
Dependencies, virtuals, conflicts, and externals all now use similar
patterns, and the logic for generating condition facts is common to all
of them on the python side, as well. The more specific routines like
`package_dependencies_rules` just call `self.condition(...)` to get an id
and generate requirements and imposed constraints, then they generate
their extra facts with the returned id, like this:
```python
def package_dependencies_rules(self, pkg, tests):
"""Translate 'depends_on' directives into ASP logic."""
for _, conditions in sorted(pkg.dependencies.items()):
for cond, dep in sorted(conditions.items()):
condition_id = self.condition(cond, dep.spec, pkg.name) # create a condition and get its id
self.gen.fact(fn.dependency_condition( # associate specifics about the dependency w/the id
condition_id, pkg.name, dep.spec.name
))
# etc.
```
- [x] unify generation and logic for conditions
- [x] use unified logic for dependencies
- [x] use unified logic for virtuals
- [x] use unified logic for conflicts
- [x] use unified logic for externals
|
|
* Rewrite relative dev_spec paths internally to absolute paths in case the environment file is relocated
* Test relative paths for dev_path in environments
* Add a --keep-relative flag to spack env create
This ensures that relative paths of develop paths are not expanded to
absolute paths when initializing the environment in a different location
from the spack.yaml init file.
|
|
* Propagate --test= for environments
* Improve help comment for spack concretize --test flag
* Add tests for --test with environments
|
|
|
Currently, we validate a spec's variants in `spec_clauses` (part of `SpackSolverSetup`) regardless of whether the spec is concrete.
This PR skips the check if the spec is concrete.
The reason we want to do this is so that the solver setup class (really, `spec_clauses`) can be used for cases when we just want the logic statements / facts (is that what they are called?) and we don't need to re-validate an already concrete spec. We can't change existing concrete specs, and we have to be able to handle them *even if they violate constraints in the current spack*. This happens in practice if we are doing the validation for a spec produced by a different spack install.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
|
|
This pull request will add the ability for a user to add a configuration argument on the fly, on the command line, e.g.,:
```bash
$ spack -c config:install_tree:root:/path/to/config.yaml -c packages:all:compiler:[gcc] list --help
```
The above command doesn't do anything (I'm just getting help for list) but you can imagine having another root of packages, and updating it on the fly for a command (something I'd like to do in the near future!)
I've moved the logic for config_add that used to be in spack/cmd/config.py into spack/config.py proper, and now both main.py (where spack commands live) and spack/cmd/config.py use these functions. I only needed spack config add, so I didn't move the others. We can move the others if they are also needed in multiple places.
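A self-contained sketch of how a colon-separated override could be turned
into a nested configuration update (a hypothetical helper, not the code
that was moved):
```python
def parse_config_override(arg):
    # "config:install_tree:root:/opt/spack"
    #   -> {'config': {'install_tree': {'root': '/opt/spack'}}}
    parts = arg.split(":")
    path, update = parts[:-1], parts[-1]
    for key in reversed(path):
        update = {key: update}
    return update
```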
|
|
Was getting the following error:
```
$ spack test list
==> Error: issubclass() arg 1 must be a class
```
This PR adds a check in `has_test_method` (in case it is re-used elsewhere such as #22097) and ensures a class is passed to the method from `spack test list`.
|
|
This is a workaround for an issue with how "spack install" is invoked from within "spack ci rebuild". Because we don't get an exception, or even the actual returncode, when using the object returned by spack.util.executable.which('spack') to install the target spec, we get no indication of failures from the install command itself. Instead we rely on the subsequent buildcache creation failure to fail the job.
|
|
Unlike the other commands of the `R CMD` interface, the `INSTALL` command
will read `Renviron` files. This can potentially break builds of r-
packages, depending on what is set in the `Renviron` file. This PR adds
the `--vanilla` flag to ensure that neither `Rprofile` nor `Renviron` files
are read during Spack builds of r- packages.
|
|
This adds a `--path` option to `spack python` that shows the `python`
interpreter that Spack is using.
e.g.:
```console
$ spack python --path
/Users/gamblin2/src/spack/var/spack/environments/default/.spack-env/view/bin/python
```
This is useful for debugging, and we can ask users to run it to
understand what python Spack is picking up via preferences in `bin/spack`
and via the `SPACK_PYTHON` environment variable introduced in #21222.
|
|
`spack test list` will show you which *installed* packages can be tested
but it won't show you which packages have tests.
- [x] add `spack test list --all` to show which packages have test methods
- [x] update `has_test_method()` to handle package instances *and*
package classes.
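A sketch of how the updated helper might accept both (illustrative only;
the real check is more precise about inherited methods):
```python
import inspect

def has_test_method(pkg):
    # normalize: accept a package class or a package instance
    pkg_class = pkg if inspect.isclass(pkg) else type(pkg)
    return callable(getattr(pkg_class, "test", None))
```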
|
|
* Improve R package creation
This PR adds the `list_url` attribute to CRAN R packages when using
`spack create`. It also adds the `git` attribute to R Bioconductor
packages upon creation.
* Switch over to using cran/bioc attributes
The cran/bioc entries are aligned so that the '=' lines up with the
homepage entry, but homepage does not need to exist in the package file;
if it is absent, that could affect the alignment.
* Do not have to split bioc
* Edit R package documentation
Explain Bioconductor packages and add `cran` and `bioc` attributes.
* Update lib/spack/docs/build_systems/rpackage.rst
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update lib/spack/docs/build_systems/rpackage.rst
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Simplify the cran attribute
The version can be faked so that the cran attribute is simply equal to
the CRAN package name (see the sketch after this list).
* Edit the docs to reflect new `cran` attribute format
* Use the first element of self.versions() for url
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
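For illustration, a hypothetical recipe using the simplified attribute
(the package name and checksum are placeholders):
```python
from spack import *

class RFoo(RPackage):
    """Hypothetical CRAN package showing the `cran` attribute."""

    # in the simplified format, `cran` is just the CRAN package name;
    # url and list_url can be derived from it
    cran = "foo"

    version("1.0.0", sha256="0" * 64)  # placeholder checksum
```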
|
|
This allows users to use relative paths for mirrors and repos and other things that may be part of a Spack environment. There are two ways to do it.
1. Relative to the file
```yaml
spack:
repos:
- local_dir/my_repository
```
Which will refer to a repository like this in the directory where `spack.yaml` lives:
```
env/
spack.yaml <-- the config file above
local_dir/
my_repository/ <-- this repository
repo.yaml
packages/
```
2. Relative to the environment
```yaml
spack:
repos:
- $env/local_dir/my_repository
```
Both of these would refer to the same directory, but they differ for included files. For example, if you had this layout:
```
env/
spack.yaml
repository/
includes/
repos.yaml
repository/
```
And this `spack.yaml`:
```yaml
spack:
include: includes/repos.yaml
```
Then, these two `repos.yaml` files are functionally different:
```yaml
repos:
- $env/repository # refers to env/repository/ above
repos:
- repository # refers to env/includes/repository/ above
```
The $env variable will not be evaluated if there is no active environment. This generally means that it should not be used outside of an environment's spack.yaml file. However, if other aspects of your workflow guarantee that there is always an active environment, it may be used in other config scopes.
|
|
* Allow the bootstrapping of clingo from sources
Allow python builds with system python as external
for MacOS
* Ensure consistent configuration when bootstrapping clingo
This commit uses context managers to ensure we can
bootstrap clingo using a consistent configuration
regardless of the use case being managed (a minimal
sketch follows after this list).
* Github actions: test clingo with bootstrapping from sources
* Add command to inspect and clean the bootstrap store
Prevent users from setting the install tree root to the bootstrap store
* clingo: documented how to bootstrap from sources
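A minimal, self-contained sketch of the context-manager idea (the actual
helper, and where the bootstrap store lives, may differ):
```python
import contextlib

@contextlib.contextmanager
def isolated_bootstrap_config(config):
    """Swap in known-good settings while bootstrapping, then restore."""
    saved = dict(config)
    config.clear()
    config.update({"install_tree": {"root": "~/.spack/bootstrap"}})  # assumed path
    try:
        yield config
    finally:
        config.clear()
        config.update(saved)
```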
Co-authored-by: Gregory Becker <becker33@llnl.gov>
|
|
(#21524)
|
|
If a user creates a wrapper for the ifx binary called ifx_orig,
this causes the ifx --version command to produce:
```console
$ ifx --version
ifx_orig (IFORT) 2021.1 Beta 20201113
Copyright (C) 1985-2020 Intel Corporation. All rights reserved.
```
The regex for ifx currently expects the output to begin with
"ifx (IFORT)..." so the wrapper would not be detected as ifx. This
PR removes the need for the static "ifx" string which allows wrappers
to be detected as ifx.
In general, the Intel compiler regexes do not include the invoked
executable name (i.e., ifort, icc, icx, etc.), so this is not
expected to cause any issues.
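A hedged sketch of the resulting behavior (the exact pattern in the PR
may differ):
```python
import re

# anchor on the parenthesized product name instead of the executable name,
# so a wrapper like ifx_orig is still detected as ifx
version_regex = re.compile(r"\(IFORT\)\s+(\S+)")

match = version_regex.search("ifx_orig (IFORT) 2021.1 Beta 20201113")
print(match.group(1))  # -> 2021.1
```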
|
|
* make `spack fetch` work with environments
* previously: `spack fetch` required the specs to be fetched to be
stated explicitly, even when in an environment
* now: if no specs are provided to `spack fetch`, we check whether
an environment is active, and if so we fetch all uninstalled specs
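A sketch of the fallback logic (`all_specs()` and the `installed`
property are assumed names; the real ones may differ):
```python
def specs_to_fetch(user_specs, active_env):
    if user_specs:
        return user_specs
    if active_env is None:
        raise ValueError("no specs given and no active environment")
    # fetch everything the environment still needs to install
    return [s for s in active_env.all_specs() if not s.installed]
```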
|
|
When using an external package with the old concretizer, all
dependencies of that external package were severed. This was not
performed bidirectionally though, so for an external package W with
a dependency on Z, if some other package Y depended on Z, Z could
still pull properties (e.g. compiler) from W since it was not
severed as a parent dependency.
This performs the severing bidirectionally, and adds tests to
confirm expected behavior when using config from DAG-adjacent
packages during concretization.
|
|
This allows for quickly configuring a spack install/env to use upstream packages by default. This is particularly important when upstreaming from a set of officially supported spack installs on a production cluster. By configuring such that package preferences match the upstream, you ensure maximal reuse of existing package installations.
|
|
Fixes for gitlab pipelines
* Remove accidentally retained testing branch name
* Generate pipeline w/out debug mode
* Make jobs interruptible for auto-cancel pending
* Work around concretization conflicts
|
|
* Support clingo when used with cffi
Clingo recently merged in a new Python module option based on cffi.
Compatibility with this module requires a few changes to Spack: it does not
automatically convert strings/ints/etc. to Symbol, and clingo.Symbol.string
throws on failure.
* manually convert str/int to clingo.Symbol types
* catch stringify exceptions
* add job for clingo-cffi to Spack CI
* switch to potassco-vendored wheel for clingo-cffi CI
* `on_unsat` argument when cffi
|
|
* Spec.splice feature
Construct a new spec with a dependency swapped out. Currently this can only swap dependencies of the same name, and can only apply to concrete specs.
This feature is not yet attached to any install functionality, but will eventually allow us to "rewire" a package to depend on a different set of dependencies.
Docstring is reformatted for git below
Splices dependency "other" into this ("target") Spec, and returns the result as a concrete Spec.
If transitive, then other and its dependencies will be extrapolated to a list of Specs and spliced in accordingly.
For example, let there exist a dependency graph as follows:
```
T
| \
Z<-H
```
In this example, Spec T depends on H and Z, and H also depends on Z.
Suppose, however, that we wish to use a differently-built H, known as H'. This function will splice in the new H' in one of two ways:
1. transitively, where H' depends on the Z' it was built with, and the new T* also directly depends on this new Z', or
2. intransitively, where the new T* and H' both depend on the original Z.
Since the Spec returned by this splicing function is no longer deployed the same way it was built, any such changes are tracked by setting the build_spec to point to the corresponding dependency from the original Spec.
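A hedged usage sketch based on the description above (the package names
are hypothetical, so this would only run against a repo that defines them):
```python
import spack.spec

t = spack.spec.Spec("t-package").concretized()
h_prime = spack.spec.Spec("h-package+newvariant").concretized()

# transitive=True splices in H' together with the Z' it was built with;
# transitive=False would keep the original Z
new_t = t.splice(h_prime, transitive=True)

print(new_t.build_spec)  # records how the spliced node was originally built
```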
Co-authored-by: Nathan Hanford <hanford1@llnl.gov>
|
|
If you install packages using `spack install` in an environment with
complex spec constraints, and the install fails, you may want to
test out the build using `spack build-env`; one issue (particularly
if you use `concretize: together`) is that it may be hard to pass
the appropriate spec that matches what the environment is
attempting to install.
This updates the build-env command to default to pulling a matching
spec from the environment rather than concretizing what the user
provides on the command line independently.
This makes a similar change to spack cd.
If the user-provided spec matches multiple specs in the environment,
then these commands will now report an error and display all
matching specs (to help the user specify).
Co-authored-by: Gregory Becker <becker33@llnl.gov>
|
|