This was EOL November 30th, 2020. I believe the "builds" are failing on
develop because of it.
|
|
|
|
This change makes yum usable again on CentOS 6
|
|
|
|
This adds lockfile tracking to Spack's lock mechanism, so that we ensure that there
is only one open file descriptor per inode.
The `fcntl` locks that Spack uses are associated with an inode and a process.
This is convenient, because if a process exits, it releases its locks.
Unfortunately, this also means that if you close a file, *all* locks associated
with that file's inode are released, regardless of whether the process has any
other open file descriptors on it.
Because of this, we need to track open lock files so that we only close them when
a process no longer needs them. We do this by tracking each lockfile by its
inode and process id. This has several nice properties:
1. Tracking by pid ensures that, if we fork, we don't inadvertently track the parent
process's lockfiles. `fcntl` locks are not inherited across forks, so we'll
just track new lockfiles in the child.
2. Tracking by inode ensures that references are counted per inode, and that we don't
inadvertently close a file whose inode still has open locks.
3. Tracking by both pid and inode ensures that we only open lockfiles the minimum
number of times necessary for the locks we have.
Note: as mentioned elsewhere, these locks aren't thread safe -- they're designed to
work in Python and assume the GIL.
Tasks:
- [x] Introduce an `OpenFileTracker` class to track open file descriptors by inode.
- [x] Reference-count open file descriptors and only close them if they're no longer
needed (this avoids inadvertently releasing locks that should not be released).
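A minimal sketch of the reference-counting idea (names and shapes here are hypothetical; Spack's actual `OpenFileTracker` handles more edge cases):
```python
import os

class OpenFileTracker:
    """Reference-count open lock files by (pid, inode), closing a file
    only when no lock in this process still needs its descriptor."""

    def __init__(self):
        self._by_key = {}  # (pid, inode) -> [file_object, refcount]

    def get_fh(self, path):
        fh = open(path, "a+")  # open (or create) the lock file
        key = (os.getpid(), os.fstat(fh.fileno()).st_ino)
        entry = self._by_key.get(key)
        if entry is None:
            entry = self._by_key[key] = [fh, 0]
        else:
            fh.close()  # this inode is already tracked: reuse its descriptor
        entry[1] += 1   # one more lock depends on this descriptor
        return entry[0]

    def release_fh(self, path):
        key = (os.getpid(), os.stat(path).st_ino)
        entry = self._by_key.get(key)
        if entry is not None:
            entry[1] -= 1
            if entry[1] == 0:  # last user: closing can't release live locks
                entry[0].close()
                del self._by_key[key]
```
Keying on `os.getpid()` is what gives property 1 above: after a fork, the child's pid differs, so it starts tracking its own descriptors instead of inheriting the parent's entries.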
|
|
Spack's source mirror was previously in a plain old S3 bucket. That will still
work, but we can do better. This switches to AWS's CloudFront CDN for hosting
the mirror.
CloudFront is 16x faster (or more) than the old bucket.
- [x] change mirror to https://mirror.spack.io
|
|
Co-authored-by: Tiziano Müller <tm@dev-zero.ch>
|
|
|
|
This PR fixes two problems with clang/llvm's version detection. clang's
version output looks like this:
```
clang version 11.0.0
Target: x86_64-unknown-linux-gnu
```
This caused clang's version to be misdetected as:
```
clang@11.0.0
Target:
```
This resulted in errors when trying to actually use it as a compiler.
When using `spack external find`, we couldn't determine the compiler
version, resulting in errors like this:
```
==> Warning: "llvm@11.0.0+clang+lld+lldb" has been detected on the system but will not be added to packages.yaml [reason=c compiler not found for llvm@11.0.0+clang+lld+lldb]
```
Changing the regex to only match until the end of the line fixes these
problems.
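A hypothetical before/after of the pattern (the actual regex lives in Spack's compiler detection code and may differ):
```python
import re

output = "clang version 11.0.0\nTarget: x86_64-unknown-linux-gnu\n"

# A character class that doesn't exclude newlines runs onto the next line:
re.search(r"clang version ([^ )]+)", output).group(1)    # '11.0.0\nTarget:'

# Excluding the newline stops the match at the end of the line:
re.search(r"clang version ([^ \n)]+)", output).group(1)  # '11.0.0'
```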
Fixes: #19473
|
|
When using Python 3.9.6, Spack is no longer able to fetch anything. Commands like `spack fetch` and `spack install` all break.
Python 3.9.6 includes a [new change](https://github.com/python/cpython/pull/25853/files#diff-b3712475a413ec972134c0260c8f1eb1deefb66184f740ef00c37b4487ef873eR462) that means `scheme` must be a string; it cannot be None. The solution is to use an empty string, like the method's default.
Fixes #24644. Also see https://github.com/Homebrew/homebrew-core/pull/80175 where this issue was discovered by CI. Thanks @branchvincent for reporting such a serious issue before any actual users encountered it!
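Roughly, the failure looks like this (an illustrative call, not Spack's exact call site):
```python
from urllib.parse import urlsplit

# Python 3.9.6 sanitizes both the url and the scheme, so a None scheme
# now raises instead of being tolerated as it was on older versions:
urlsplit("mirror.spack.io/foo", scheme=None)  # breaks on 3.9.6

# Using the documented default, an empty string, works everywhere:
urlsplit("mirror.spack.io/foo", scheme="")
```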
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
|
|
|
|
|
|
|
|
Spack doesn't require users to manually index their repos; it reindexes automatically when things change. To determine when to do this, it has to `stat()` all package files in each repository to make sure that indexes are up to date with packages. We currently index virtual providers, patches by sha256, and tags on packages.
When this was originally implemented, we ran the checker all the time, at startup, but that was slow (see #7587). But we didn't go far enough -- it still consults the checker and does all the stat operations just to see if a package exists (`Repo.exists()`). That might've been a wash in 2018, but as the number of packages has grown, it's gotten slower -- checking 5k packages is expensive and users see this for small operations. It's a win now to make `Repo.exists()` check files directly.
**Fix:**
This PR does a number of things to speed up `spack load`, `spack info`, and other commands:
- [x] Make `Repo.exists()` check files directly again with `os.path.exists()` (this is the big one; see the sketch after this list)
- [x] Refactor `Spec.satisfies()` so that checking for virtual packages only happens when needed
(avoids some calls to `exists()`)
- [x] Avoid calling `Repo.exists(spec)` in `Repo.get()`. `Repo.get()` will ultimately try to load
a `package.py` file anyway; we can let the failure to load it indicate that the package doesn't
exist, and avoid another call to exists().
- [x] Fix up some comments in spec parsing
- [x] Call `UnknownPackageError` more consistently in `repo.py`
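A sketch of the direct check, assuming the standard `packages/<name>/package.py` repo layout and a `packages_path` attribute (the real method also deals with caching and namespaces):
```python
import os

def exists(self, pkg_name):
    """Return True if pkg_name has a package.py file in this repo --
    one filesystem check instead of consulting the stat-heavy index."""
    filename = os.path.join(self.packages_path, pkg_name, "package.py")
    return os.path.exists(filename)
```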
|
|
fixes #22718
Instead of trying to maximize the number of
matches (preferred behavior), try to minimize
the number of mismatches (unwanted behavior).
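In ASP terms the flip looks roughly like this (atom names are illustrative, not the ones in `concretize.lp`):
```prolog
% before: reward every preferred choice the solver hits
% #maximize { 1,Package,Variant : variant_match(Package, Variant) }.

% after: count the unwanted cases and drive them toward zero
#minimize { 1,Package,Variant : variant_mismatch(Package, Variant) }.
```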
|
|
The ASP-based solver can natively manage cases where more than one root spec is given, and is able to concretize all the roots together (ensuring at most one spec per package).
Modifications:
- [x] When concretizing an environment together, the ASP-based solver calls its `solve` method directly rather than constructing a temporary fake root package.
|
|
The function we coded in Spack to load Python modules with arbitrary
names from a file seems to have issues with local imports. For
loading hooks, though, it is unnecessary to use such functions, since
we don't care to bind a custom name to a module, nor do we have to load
it from an unknown location.
This PR thus modifies spack.hooks in the following ways:
- Use `__import__` instead of `spack.util.imp.load_source` (this
addresses #20005)
- Sync the module docstring with all the hooks we have
- Avoid using memoization in a module function
- Mark with a leading underscore all the names that are supposed
to stay local
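A minimal sketch of the simpler loading scheme (a hypothetical helper, not the exact code):
```python
import sys

def _load_hook(name):
    """Import a hook by its dotted module name rather than loading it
    from a file path with a custom loader."""
    module_name = "spack.hooks." + name
    __import__(module_name)          # registers the module in sys.modules
    return sys.modules[module_name]
```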
|
|
|
|
|
|
|
|
fixes #22786
Trying to get optimization flags for a specific target from
a compiler may trigger warnings. In the context of constructing
facts for the ASP-based solver we don't want to show these
warnings to the user, so here we simply ignore them.
|
|
We remove system paths from search variables like PATH and
from -L options because they may contain many packages and
could interfere with Spack-built packages. External packages
may be installed to prefixes that are not actually system paths
but are still "merged" in the sense that many other packages are
installed there. To avoid conflicts, this PR places all external
packages at the end of search paths.
|
|
fixes #22871
When multiple choices for the operating system were present, we
lacked a rule to derive the node OS when it was inherited.
|
|
If you install packages using `spack install` in an environment with
complex spec constraints, and the install fails, you may want to
test out the build using `spack build-env`; one issue (particularly
if you use `concretize: together`) is that it may be hard to pass
the appropriate spec that matches what the environment is
attempting to install.
This updates the `build-env` command to default to pulling a matching
spec from the environment rather than concretizing what the user
provides on the command line independently.
This makes a similar change to `spack cd`.
If the user-provided spec matches multiple specs in the environment,
then these commands will now report an error and display all
matching specs (to help the user disambiguate).
Co-authored-by: Gregory Becker <becker33@llnl.gov>
|
|
|
|
fixes #22294
A combination of the swapping order for global variables and
the fact that most of them are lazily evaluated resulted in
the custom install tree not being taken into account if clingo
had to be bootstrapped.
This commit fixes that particular issue, but a broader refactor
may be needed to ensure that similar situations won't affect us
in the future.
|
|
(cherry picked from commit a37c916dff5a5c6e72f939433931ab69dfd731bd)
|
|
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
|
|
* bugfix for the error raised when activating a package that is already active
Co-authored-by: Greg Becker <becker33@llnl.gov>
|
|
fixes #22565
This change enforces the uniqueness of the version_weight
atom per node(Package) in the DAG. It does so by applying
FTSE (the "fundamental theorem of software engineering") and
adding an extra layer of indirection with the
possible_version_weight/2 atom.
Before this change it could happen that, for the same node,
two different version_weight/2 atoms were in the answer set,
each referring to a different spec with the same version, and
their weights would sum up.
This led to unexpected results, like preferring to build a
new version of an external if the external's version was
older.
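A sketch of the indirection in ASP (input atom names are hypothetical; the real rules in `concretize.lp` differ):
```prolog
% funnel every candidate weight through possible_version_weight/2 ...
possible_version_weight(Package, Weight)
  :- version_weight_candidate(Package, Weight).  % hypothetical input atom

% ... then force exactly one version_weight/2 per node, so weights for
% duplicate versions can no longer sum up
1 { version_weight(Package, Weight) : possible_version_weight(Package, Weight) } 1
  :- node(Package).
```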
|
|
fixes #22596
Variants which are specified in an external spec are not
scored negatively if they encode a non-default value.
|
|
* clingo: modify recipe for bootstrapping
Modifications:
- clingo builds with shared Python only if ^python+shared
- avoid building the clingo app for bootstrapping
- don't link to libpython when bootstrapping
* Remove option that breaks on linux
* Give more hints for the current Python
* Disable CLINGO_BUILD_PY_SHARED for bootstrapping
* bootstrapping: try to detect the current python from std library
This is much faster than calling external executables
* Fix compatibility with Python 2.6
* Give hints on which compiler and OS to use when bootstrapping
This change hints which compiler to use for bootstrapping clingo
(either GCC or Apple Clang on MacOS). On Cray platforms it also
hints to build for the frontend system, where software is meant
to be installed.
* Use spec_for_current_python to constrain module requirement
(cherry picked from commit d5fa509b072f0e58f00eaf81c60f32958a9f1e1d)
|
|
* ASP-based solver: avoid adding values to variants when they're set
fixes #22533
fixes #21911
Added a rule that prevents any value from slipping into a variant when the
variant is set explicitly. This is relevant for multi-valued variants,
in particular for those that have disjoint sets of values.
* Ensure disjoint sets have a clear semantics for external packages
|
|
fixes #22547
SingleFileScope was not able to repopulate its cache before this
change. This was affecting the configuration seen by environments
using clingo bootstrapped from sources, since the bootstrapping
operation involved a few cache invalidations for config files.
|
|
A search and replace went wrong in 2264e30d99d8b9fbdec8fa69b594e53d8ced15a1.
Thanks to @wadudmiah who reported this issue.
|
|
|
|
In most cases, we want condition_holds(ID) to imply any imposed
constraints associated with the ID. However, the dependency relationship
in Spack is special because it's "extra" conditional -- a dependency
*condition* may hold, but we have decided that externals will not have
dependencies, so we need a way to avoid having imposed constraints appear
for nodes that don't exist.
This introduces a new rule that says that constraints are imposed
*unless* we define `do_not_impose(ID)`. This allows rules like
dependencies, which rely on more than just spec conditions, to cancel
imposed constraints.
We add one special case for this: dependencies of externals.
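Sketched in ASP (simplified; the real rules cover more argument arities):
```prolog
% constraints fire only if nothing has cancelled them
attr(Name, A1)
  :- condition_holds(ID), not do_not_impose(ID),
     imposed_constraint(ID, Name, A1).

% the special case: cancel imposed constraints for dependencies of externals
do_not_impose(ID)
  :- external(Package),
     dependency_condition(ID, Package, _).
```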
|
|
We only consider test dependencies some of the time. Some packages are
*only* test dependencies. Spack's algorithm was previously generating
dependency conditions that could hold, *even* if there was no potential
dependency type.
- [x] change asp.py so that this can't happen -- we now only generate
dependency types for possible dependencies.
|
|
This builds on #20638 by unifying all the places in the concretizer where
things are conditional on specs. Previously, we duplicated a common spec
conditional pattern for dependencies, virtual providers, conflicts, and
externals. That was introduced in #20423 and refined in #20507, and
roughly looked as follows.
Given some directives in a package like:
```python
depends_on("foo@1.0+bar", when="@2.0+variant")
provides("mpi@2:", when="@1.9:")
```
We handled the `@2.0+variant` and `@1.9:` parts by generating
`dependency_condition()`, `required_dependency_condition()`, and
`imposed_dependency_condition()` facts to trigger rules like this:
```prolog
dependency_conditions_hold(ID, Parent, Dependency) :-
attr(Name, Arg1) : required_dependency_condition(ID, Name, Arg1);
attr(Name, Arg1, Arg2) : required_dependency_condition(ID, Name, Arg1, Arg2);
attr(Name, Arg1, Arg2, Arg3) : required_dependency_condition(ID, Name, Arg1, Arg2, Arg3);
dependency_condition(ID, Parent, Dependency);
node(Parent).
```
And we handled `foo@1.0+bar` and `mpi@2:` parts ("imposed constraints")
like this:
```prolog
attr(Name, Arg1, Arg2) :-
dependency_conditions_hold(ID, Package, Dependency),
imposed_dependency_condition(ID, Name, Arg1, Arg2).
attr(Name, Arg1, Arg2, Arg3) :-
dependency_conditions_hold(ID, Package, Dependency),
imposed_dependency_condition(ID, Name, Arg1, Arg2, Arg3).
```
These rules were repeated with different input predicates for
requirements (e.g., `required_dependency_condition`) and imposed
constraints (e.g., `imposed_dependency_condition`) throughout
`concretize.lp`. In #20638 it got to be a bit confusing, because we used
the same `dependency_condition_holds` predicate to impose constraints on
conditional dependencies and virtual providers. So, even though the
pattern was repeated, some of the conditional rules were conjoined in a
weird way.
Instead of repeating this pattern everywhere, we now have *one* set of
consolidated rules for conditions:
```prolog
condition_holds(ID) :-
condition(ID);
attr(Name, A1) : condition_requirement(ID, Name, A1);
attr(Name, A1, A2) : condition_requirement(ID, Name, A1, A2);
attr(Name, A1, A2, A3) : condition_requirement(ID, Name, A1, A2, A3).
attr(Name, A1) :- condition_holds(ID), imposed_constraint(ID, Name, A1).
attr(Name, A1, A2) :- condition_holds(ID), imposed_constraint(ID, Name, A1, A2).
attr(Name, A1, A2, A3) :- condition_holds(ID), imposed_constraint(ID, Name, A1, A2, A3).
```
This allows us to use `condition(ID)` and `condition_holds(ID)` to
encapsulate the conditional logic on specs in all the scenarios where we
need it. Instead of defining predicates for the requirements and imposed
constraints, we generate the condition inputs with generic facts, and
define predicates to associate the condition ID with a particular
scenario. So, now, the generated facts for a condition look like this:
```prolog
condition(121).
condition_requirement(121,"node","cairo").
condition_requirement(121,"variant_value","cairo","fc","True").
imposed_constraint(121,"version_satisfies","fontconfig","2.10.91:").
dependency_condition(121,"cairo","fontconfig").
dependency_type(121,"build").
dependency_type(121,"link").
```
The requirements and imposed constraints are generic, and we associate
them with their meaning via the id. Here, `dependency_condition(121,
"cairo", "fontconfig")` tells us that condition 121 has to do with the
dependency of `cairo` on `fontconfig`, and the conditional dependency
rules just become:
```prolog
dependency_holds(Package, Dependency, Type) :-
dependency_condition(ID, Package, Dependency),
dependency_type(ID, Type),
condition_holds(ID).
```
Dependencies, virtuals, conflicts, and externals all now use similar
patterns, and the logic for generating condition facts is common to all
of them on the python side, as well. The more specific routines like
`package_dependencies_rules` just call `self.condition(...)` to get an id
and generate requirements and imposed constraints, then they generate
their extra facts with the returned id, like this:
```python
def package_dependencies_rules(self, pkg, tests):
"""Translate 'depends_on' directives into ASP logic."""
for _, conditions in sorted(pkg.dependencies.items()):
for cond, dep in sorted(conditions.items()):
condition_id = self.condition(cond, dep.spec, pkg.name) # create a condition and get its id
self.gen.fact(fn.dependency_condition( # associate specifics about the dependency w/the id
condition_id, pkg.name, dep.spec.name
))
# etc.
```
- [x] unify generation and logic for conditions
- [x] use unified logic for dependencies
- [x] use unified logic for virtuals
- [x] use unified logic for conflicts
- [x] use unified logic for externals
|
|
This change accounts for platform-specific configuration scopes,
like `~/.spack/linux`, during bootstrapping. These scopes were
previously not accounted for, which caused issues, e.g., when
searching for compilers.
(cherry picked from commit 413c422e53843a9e33d7b77a8c44dcfd4bf701be)
|
|
* Allow the bootstrapping of clingo from sources
Allow python builds with system python as external
for MacOS
* Ensure consistent configuration when bootstrapping clingo
This commit uses context managers to ensure we can
bootstrap clingo using a consistent configuration
regardless of the use case being managed.
* Github actions: test clingo with bootstrapping from sources
* Add command to inspect and clean the bootstrap store
Prevent users from setting the install tree root to the bootstrap store
* clingo: document how to bootstrap from sources
Co-authored-by: Gregory Becker <becker33@llnl.gov>
(cherry picked from commit 10e9e142b75c6ca8bc61f688260c002201cc1b22)
|
|
(cherry picked from commit 138312efabd534fa42d1a16e172e859f0d2b5842)
|
|
|
|
* clingo/clingo-bootstrap: added a package with an option for bootstrapping clingo;
the package builds in Release mode and uses GCC options to link libstdc++
and libgcc statically
* clingo-bootstrap: apple-clang options to bootstrap statically on darwin
* clingo: fix the path of the Python interpreter
In case multiple Python versions are in the same prefix
(e.g. when clingo is built against an external Python),
it may happen that the Python used by CMake does not
match the corresponding node in the current spec.
This is fixed here by defining "Python_EXECUTABLE"
properly as a hint to CMake.
* clingo: the commit for "spack" version has been updated.
|
|
|
|
Most people installing `clingo` with Spack are going to be doing it to
use the new concretizer, and that requires the `master` branch.
- [x] make `master` the default so we don't have to keep telling people
to install `clingo@master`. We'll update the preferred version when
there's a new release.
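In the recipe this amounts to marking the branch preferred, roughly (a sketch of the directive usage, with a placeholder checksum):
```python
from spack import *

class Clingo(CMakePackage):
    """Sketch: prefer master so plain `spack install clingo` picks it."""
    version("master", branch="master", preferred=True)
    version("5.4.0", sha256="...")  # placeholder, not a real checksum
```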
|
|
* make `spack fetch` work with environments
* previously: `spack fetch` required the explicit statement of
the specs to be fetched, even when in an environment
* now: if no specs are provided to `spack fetch`, we check
whether an environment is active, and if so we fetch all
uninstalled specs.
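The fallback, sketched with simplified control flow (helper names here are assumptions, not the exact API):
```python
import spack.cmd
import spack.environment as ev
from llnl.util import tty

def fetch(parser, args):
    specs = spack.cmd.parse_specs(args.specs, concretize=True)
    if not specs:
        env = ev.active_environment()  # assumed accessor for the active env
        if not env:
            tty.die("fetch requires at least one spec or an active environment")
        specs = env.uninstalled_specs()  # everything left to install
    for spec in specs:
        spec.package.do_fetch()
```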
|
|
This solves a few FIXMEs in conftest.py, where
we were manipulating globals and seeing side
effects prior to registering fixtures.
The change introduces a performance regression
on tests that may need to be investigated.
(cherry picked from commit 4558dc06e21e01ab07a43737b8cb99d1d69abb5d)
|
|
(cherry picked from commit 61c1b71d38e62a5af81b3b7b8a8d12b954d99f0a)
|
|
The context manager can be used to swap the current
configuration temporarily, for any use case that may need it.
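A minimal sketch of such a context manager, assuming a module-global `config` object (the real implementation also rebuilds derived state):
```python
import contextlib

config = None  # stand-in for the module-global Configuration instance

@contextlib.contextmanager
def use_configuration(new_config):
    """Temporarily swap the global configuration, restoring it on exit."""
    global config
    saved, config = config, new_config
    try:
        yield new_config
    finally:
        config = saved
```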
(cherry picked from commit 553d37a6d62b05f15986a702394f67486fa44e0e)
|