Age | Commit message | Author | Files | Lines |
|
|
|
The function we coded in Spack to load Python modules with arbitrary
names from a file seems to have issues with local imports. For
loading hooks, though, such a function is unnecessary: we don't need
to bind a custom name to the module, nor do we have to load it from
an unknown location.
This PR thus modifies spack.hooks in the following ways:
- Use __import__ instead of spack.util.imp.load_source (this
addresses #20005); see the sketch below
- Sync the module docstring with all the hooks we have
- Avoid using memoization in a module function
- Mark all the names that are supposed to stay local with a leading
underscore
|
|
Convert configure errors detected by our log scraper into warnings when
the package being installed reports that it was successful.
|
|
Also update the mpileaks unit test to avoid a failure on CentOS 6,
where Dyninst >=11.0.0 no longer builds due to a compiler version
conflict.
|
|
This is as much a question as it is a minor fine-tuning of the docs. I've been known to add things to an environment by editing the `spack.yaml` file directly. When I read the previous version of this sentence, I was afraid that `spack add` was actually doing *two* things, modifying the `spack.yaml` and updating something else that defined the roots of the Environment. A bit of experimentation suggests that editing the `spack.yaml` file is sufficient to change the roots.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
|
|
fixes #22718
Instead of trying to maximize the number of
matches (preferred behavior), try to minimize
the number of mismatches (unwanted behavior).
|
|
fixes #22786
Trying to get optimization flags for a specific target from
a compiler may trigger warnings. In the context of constructing
facts for the ASP-based solver we don't want to show these
warnings to the user, so here we simply ignore them.
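A minimal, self-contained sketch of the approach (the real compiler query lives elsewhere in Spack; the function here is a stand-in):
```python
import warnings

def optimization_flags(target):
    """Stand-in for the real query, which may emit warnings."""
    warnings.warn("unknown microarchitecture '%s'" % target)
    return "-march=native"

# Suppress the warnings while constructing facts for the solver.
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    flags = optimization_flags("skylake_avx512")
```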
|
|
This isn't a significant issue, but I noticed that the docstring incorrectly references `tty.fail`, and I wanted to quickly fix it to reference the correct command, `tty.die`. I also wanted to fix the docstrings so they aren't large clumps, per what @tgamblin suggested after I wrote this: one line at the top that is a quick summary, with a more verbose description after that.
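A sketch of the suggested docstring style, on a hypothetical function:
```python
import sys

def die(message):
    """Print an error message and exit with a nonzero status.

    A quick one-line summary goes at the top, followed by a blank line
    and a more verbose description, per the style described above.
    """
    print("Error: {0}".format(message), file=sys.stderr)
    sys.exit(1)
```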
|
|
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
|
|
|
|
This provides initial support for [spack monitor](https://github.com/spack/spack-monitor), a web application that stores information and analysis about Spack installations. Spack can now contact a monitor server and upload analysis -- even after a build is already done.
Specifically, this adds:
- [x] monitor options for `spack install`
- [x] `spack analyze` command
- [x] hook architecture for analyzers
- [x] separate build logs (in addition to the existing combined log)
- [x] docs for spack analyze
- [x] reworked developer docs, with hook docs
- [x] analyzers for:
- [x] config args
- [x] environment variables
- [x] installed files
- [x] libabigail
There is a lot more information in the docs contained in this PR, so consult those for full details on this feature.
Additional tests will be added in a future PR.
|
|
In debug mode, processes taking an exclusive lock write out their node name to
the lock file. We were using `getfqdn()` for this, but it seems to produce
inconsistent results when used from within some GitHub Actions containers.
We get this error because getfqdn() seems to return a short name in one place
and a fully qualified name in another:
```
File "/home/runner/work/spack/spack/lib/spack/spack/test/llnl/util/lock.py", line 1211, in p1
assert lock.host == self.host
AssertionError: assert 'fv-az290-764....cloudapp.net' == 'fv-az290-764'
- fv-az290-764.internal.cloudapp.net
+ fv-az290-764
!!!!!!!!!!!!!!!!!!!! Interrupted: stopping after 1 failures !!!!!!!!!!!!!!!!!!!!
== 1 failed, 2547 passed, 7 skipped, 22 xfailed, 2 xpassed in 1238.67 seconds ==
```
This seems to stem from https://bugs.python.org/issue5004.
We don't really need to get a fully qualified hostname for debugging, so use
`gethostname()` because its results are more consistent. This seems to fix the
issue.
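For reference, a minimal sketch of the switch (both functions live in the standard library's `socket` module):
```python
import socket

# getfqdn() may or may not append the domain depending on resolver
# configuration; gethostname() returns the bare node name, which is all
# we need to identify lock holders in debug output.
host = socket.gethostname()
```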
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
|
|
Update S3 bucket
Update tutorial branch
|
|
* Clarify stub compiler definition in compilers.yaml
* Update explanation of why stub compiler definition is needed
* Add note about required module definition when using Spack-installed
intel-parallel-studio as intel-compiler
* Add suggestion about updating package config preferences based on
choice of variants when installing intel-parallel-studio to avoid
reinstallation
|
|
vtune_amplifier got renamed to vtune_profiler for the 2020+ suite
|
|
We remove system paths from search variables like PATH and
from -L options because they may contain many packages and
could interfere with Spack-built packages. External packages
may be installed to prefixes that are not actually system paths
but are still "merged" in the sense that many other packages are
installed there. To avoid conflicts, this PR places all external
packages at the end of search paths.
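A minimal sketch of the resulting ordering, with hypothetical helper and path names:
```python
def ordered_search_paths(paths, external_prefixes):
    """Keep Spack-built prefixes in front and push external prefixes to
    the end, so their merged contents cannot shadow Spack's packages."""
    externals = set(external_prefixes)
    internal = [p for p in paths if p not in externals]
    external = [p for p in paths if p in externals]
    return internal + external

# Hypothetical usage: /opt/local hosts many externally installed packages.
print(ordered_search_paths(
    ["/opt/local/lib", "/spack/opt/zlib/lib"], ["/opt/local/lib"]))
```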
|
|
fixes #22871
In the presence of multiple choices for the operating system,
we were lacking a rule to derive the node OS when it was
inherited.
|
|
We set LC_ALL=C to encourage a build process to generate ASCII
output (so our logger daemon can decode it). Most packages
respect this but it appears that intel-oneapi-compilers does
not in some cases (see #22813). This reads the output of the build
process as UTF-8, which still works if the build process respects
LC_ALL=C but also works if the process generates UTF-8 output.
For Python >= 3.7 all files are opened with UTF-8 encoding by
default. Python 2 does not support the encoding argument on
'open', so to support Python 2 the files would have to be
opened in byte mode and explicitly decoded (as a side note,
this would be the only way to handle other encodings without
being informed of them in advance).
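A minimal sketch of the Python-2-compatible route described above (the file name is hypothetical):
```python
# Open the build log in byte mode and decode each line explicitly; this
# works on both Python 2 and 3 and copes with stray non-UTF-8 bytes.
with open("spack-build-out.txt", "rb") as f:
    for raw_line in f:
        line = raw_line.decode("utf-8", errors="replace")
        # ... hand the decoded line to the logger daemon ...
```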
|
|
* bugfix: fix representation of null in spack_yaml output
Nulls were previously printed differently by `spack config blame config`
and `spack config get config`. Fix this in the `spack_yaml` dumpers (see
the sketch below).
* bugfix: `spack config blame` should print all lines of config
`spack config blame` was not printing all lines of configuration because
there were no annotations for empty lines in the YAML dump output. Fix
this by removing empty lines.
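To illustrate the null-representation issue, a sketch using PyYAML (Spack has its own `spack_yaml` dumpers, so this is only an analogy):
```python
import yaml

# Pin down how None is rendered; two dumpers that disagree on this will
# print the same configuration differently, as blame/get did.
def represent_none(dumper, _value):
    return dumper.represent_scalar("tag:yaml.org,2002:null", "null")

yaml.add_representer(type(None), represent_none)
print(yaml.dump({"config": {"ccache": None}}))  # ccache now renders as null
```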
|
|
them (#19837)
|
|
|
|
- Use debugoptimized as the default build type, just like RelWithDebInfo for cmake
- Do not strip by default, and add a default_library variant which conveniently supports both shared and static
|
|
|
|
* Strip leading slash from S3 key in url_exists()
* Check against a list of known-broken specs in `ci generate`
|
|
|
|
By default, clingo doesn't show any optimization criteria (maximized or
minimized sums) if the set they aggregate is empty. Per the clingo
mailing list, we can get around that by adding, e.g.:
```
#minimize{ 0@2 : #true }.
```
for the 2nd criterion. This forces clingo to print out the criterion but
does not affect the optimization.
This PR adds directives as above for all of our optimization criteria, as
well as facts with descriptions of each criterion, like this:
```
opt_criterion(2, "number of non-default variants").
```
We use facts in `concretize.lp` rather than hard-coding these in `asp.py`
so that the names can be maintained in the same place as the other
optimization criteria.
The now-displayed weights and the names are used to display optimization
output like this:
```console
(spackle):solver> spack solve --show opt zlib
==> Best of 0 answers.
==> Optimization Criteria:
Priority Criterion Value
1 version weight 0
2 number of non-default variants (roots) 0
3 multi-valued variants + preferred providers for roots 0
4 number of non-default variants (non-roots) 0
5 number of non-default providers (non-roots) 0
6 count of non-root multi-valued variants 0
7 compiler matches + number of nodes 1
8 version badness 0
9 non-preferred compilers 0
10 target matches 0
11 non-preferred targets 0
zlib@1.2.11%apple-clang@12.0.0+optimize+pic+shared arch=darwin-catalina-skylake
```
Note that this is all hidden behind a `--show opt` option to `spack
solve`. Optimization weights are no longer shown by default, but you can
at least inspect them and more easily understand what is going on.
- [x] always show optimization criteria in `clingo` output
- [x] add `opt_criterion()` facts for all optimization criteria
- [x] make display of opt criteria optional in `spack solve`
- [x] rework how optimization criteria are displayed, and add a `--show opt`
option to `spack solve`
|
|
CachedCMakePackage is a CMakePackage subclass for using the CMake
initial cache feature. Initial caches allow packages to increase
reproducibility, especially between Spack builds and manual builds. They
also allow packages to sidestep certain parsing bugs in extremely long
cmake commands, and to avoid system limits on the length of the command
line.
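A minimal sketch of what a package might override (the package is hypothetical; see the docs for the full interface):
```python
# In a package.py, Spack's base classes are in scope via ``from spack import *``.
from spack import *  # noqa: F401,F403

class Example(CachedCMakePackage):
    """Hypothetical package driving CMake through an initial cache file,
    i.e. ``cmake -C initial_cache.cmake`` rather than a long command line."""

    def initconfig_package_entries(self):
        # Each entry becomes a line in the generated initial cache file.
        return ['set(ENABLE_TESTS ON CACHE BOOL "")']
```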
Co-authored-by: Chris White <white238@llnl.gov>
|
|
This reverts commit 7daf5823574dc18522f559a084095714cc9f3fb9.
|
|
In the face of two consecutive spaces in the command line, the compiler wrapper would skip all remaining arguments, causing problems building py-scipy with the Intel compiler. This PR solves the problem.
* Fixed the compiler wrapper in the face of extra spaces between arguments
Co-authored-by: Elizabeth Fischer <elizabeth.fischer@alaska.edu>
|
|
Original commit message:
This feature of CMake allows packages to increase reproducibility, especially between
Spack builds and manual builds. It also allows packages to sidestep certain parsing bugs in
extremely long ``cmake`` commands, and to avoid system limits on the length of the
command line.
Adding:
Co-authored-by: Chris White <white238@llnl.gov>
This reverts commit c4f0a3cf6cd2ab282f6abf20fd72fcb4739a432a.
|
|
This reverts commit 764c17053041a65f684ce565a2598d705b04a16b.
|
|
CachedCMakePackage is a specialized class for packages built using CMake initial cache.
This feature of CMake allows packages to increase reproducibility, especially between
Spack builds and manual builds. It also allows packages to sidestep certain parsing bugs in
extremely long ``cmake`` commands, and to avoid system limits on the length of the
command line.
|
|
Autoconf before 2.70 will erroneously pass ifx's -loopopt argument to the
linker, requiring all packages to use autoconf 2.70 or newer to use ifx.
This is a hotfix enabling ifx to be used in Spack. Instead of bothering
to upgrade autoconf for every package, we'll just strip out the
problematic flag if we're in `ld` mode.
- [x] Add a conditional to the `cc` wrapper to skip `-loopopt` in `ld`
mode (sketched below). This can probably be generalized in the future
to strip more things (e.g., via an environment variable we can control
from Spack) but it's good enough for now.
- [x] Add a test ensuring that `-loopopt` arguments are stripped in link
mode, but not in compile mode.
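A minimal sketch of the stripping logic in Python (the real wrapper is a shell script, so this is only illustrative):
```python
def filter_args(mode, args):
    """Drop ifx's -loopopt arguments when the wrapper acts as the linker."""
    if mode == "ld":
        return [a for a in args if not a.startswith("-loopopt")]
    return args

# Stripped in link mode, preserved in compile mode.
assert filter_args("ld", ["-loopopt=0", "-o", "a.out"]) == ["-o", "a.out"]
assert filter_args("cc", ["-loopopt=0", "-c"]) == ["-loopopt=0", "-c"]
```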
|
|
Since `lazy_lexicographic_ordering` handles `None` comparison for us, we
don't need to adjust the spec comparators to return empty strings or
other type-specific empty types. We can just leverage the None-awareness
of `lazy_lexicographic_ordering`.
- [x] remove "or ''" from `_cmp_iter` in `Spec`
- [x] remove setting of `self.namespace` to `''` in `MockPackage`
|
|
We have been using the `@llnl.util.lang.key_ordering` decorator for specs
and most of their components. This leverages the fact that in Python,
tuple comparison is lexicographic. It allows you to implement a
`_cmp_key` method on your class, and have `__eq__`, `__lt__`, etc.
implemented automatically using that key. For example, you might use
tuple keys to implement comparison:
```python
class Widget:
    # author implements this
    def _cmp_key(self):
        return (
            self.a,
            self.b,
            (self.c, self.d),
            self.e
        )

    # operators are generated by @key_ordering
    def __eq__(self, other):
        return self._cmp_key() == other._cmp_key()

    def __lt__(self, other):
        return self._cmp_key() < other._cmp_key()

    # etc.
```
The issue there for simple comparators is that we have to build the
tuples *and* we have to generate all the values in them up front. When
implementing comparisons for large data structures, this can be costly.
This PR replaces `@key_ordering` with a new decorator,
`@lazy_lexicographic_ordering`. Lazy lexicographic comparison maps the
tuple comparison shown above to generator functions. Instead of comparing
based on pre-constructed tuple keys, users of this decorator can compare
using elements from a generator. So, you'd write:
```python
@lazy_lexicographic_ordering
class Widget:
    def _cmp_iter(self):
        yield self.a
        yield self.b

        def cd_fun():
            yield self.c
            yield self.d

        yield cd_fun
        yield self.e

    # operators are added by the decorator (but are a bit more complex)
```
There are no tuples that have to be pre-constructed, and the generator
does not have to complete. Instead of tuples, we simply make functions
that lazily yield what would've been in the tuple. If a yielded value is
a `callable`, the comparison functions will call it and recursively
compare it. The comparator just walks the data structure like you'd expect
it to.
The ``@lazy_lexicographic_ordering`` decorator handles the details of
implementing comparison operators, and the ``Widget`` implementor only
has to worry about writing ``_cmp_iter``, and making sure the elements in
it are also comparable.
Using this PR shaves another 1.5 sec off the runtime of `spack buildcache
list`, and it also speeds up Spec comparison by about 30%. The runtime
improvement comes mostly from *not* calling `hash()` in `_cmp_iter()`.
|
|
|
|
* Make -j flag less exceptional
The -j flag in spack behaves differently from make, ctest, ninja, etc.,
because it caps the number of jobs at an arbitrary number, 16.
Spack will behave like other tools if `spack install` uses a reasonable
default, and `spack install -j <num>` *overrides* that default.
This will be particularly useful for Spack usage outside of a traditional
HPC context and for HPC centers that encourage users to compile on
login nodes with many cores instead of on compute nodes, which has
become increasingly common as individual nodes have more cores.
This maintains the existing default value of min(num_cpus, 16). However,
as it is right now, Spack does a poor job of determining the number of
CPUs on Linux, since it doesn't take cgroups into account. This is
particularly problematic when using distributed builds with Slurm. This PR
also introduces `spack.util.cpus.cpus_available()` to consolidate
knowledge on determining the number of available cores, and improves
core detection for Linux. This should also improve core detection for
Docker/Kubernetes, which also use cgroups.
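A minimal sketch of cgroup-aware core counting (a simplification; the real `cpus_available()` may differ):
```python
import multiprocessing
import os

def cpus_available():
    """Number of cores this process may actually use."""
    try:
        # On Linux the affinity mask reflects cgroup/cpuset limits,
        # e.g. under Slurm, Docker, or Kubernetes.
        return len(os.sched_getaffinity(0))
    except AttributeError:
        # sched_getaffinity is not available on all platforms.
        return multiprocessing.cpu_count()
```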
|
|
This commit extends the API of the __call__ method of the
SpackCommand class to permit passing global arguments
like those interposed between the main "spack" command
and the subsequent subcommand.
The functionality is used to fix an issue where running
```spack -e . location -b some_package```
ends up printing the name of the environment instead of
the build directory of the package, because the location arg
parser also stores this value as `arg.env`.
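A sketch of the extended call; the `global_args` keyword name is an assumption here:
```python
from spack.main import SpackCommand

location = SpackCommand("location")

# Hypothetical: pass "spack -e ." style global arguments alongside the
# subcommand's own arguments, so -e is not swallowed by location's parser.
print(location("-b", "some_package", global_args=["-e", "."]))
```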
|
|
fixes #22294
A combination of the order in which global variables are swapped and
the fact that most of them are lazily evaluated resulted in the
custom install tree not being taken into account if clingo
had to be bootstrapped.
This commit fixes that particular issue, but a broader refactor
may be needed to ensure that similar situations won't affect us
in the future.
|
|
|
|
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
|
|
Remote buildcache indices need to be stored in a place that does not
require writing to the Spack prefix. Move them from the install_tree to
the misc_cache.
|
|
* Bugfix for an error raised when activating a package that is already active
Co-authored-by: Greg Becker <becker33@llnl.gov>
|
|
fixes #22596
Variants which are specified in an external spec are no longer
scored negatively if they encode a non-default value.
|
|
fixes #22565
This change enforces the uniqueness of the version_weight
atom per node (package) in the DAG. It does so by applying
FTSE and adding an extra layer of indirection with the
possible_version_weight/2 atom.
Before this change it may have happened that for the same
node two different version_weight/2 atoms were in the answer
set, each of which referred to a different spec with the same
version, and their weights would sum up.
This led to unexpected results, like preferring to build a
new version of an external if the external version was
older.
|
|
* Make stage use concrete specs from the environment
Same as in https://github.com/spack/spack/pull/21642, the idea is that
we want to easily stage a package that fails to build in a complex
environment. Instead of making the user create a spec by hand (basically
transforming all the rules in the environment manifest into a spec,
defeating the purpose of the environment...), use the provided spec as a
filter for the already concretized specs. This also speeds things up,
since we don't have to reconcretize.
|