* Fix bindist network issues
* Another one using the network

This commit reworks version facts so that:
1. All the information on versions is collected
before emitting the facts
2. The same kind of atom is emitted for versions
stemming from different origins (package.py
vs. packages.yaml)
In the end all the possible versions for a given
package are totally ordered and they are given
different and increasing weights starting from zero.
This refactor allows us to avoid using negative
weights, which in some configurations could make
a parent node's score "better" and lead to unexpected
"optimal" results.
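To illustrate the idea, here is a minimal sketch in plain Python (not the actual ASP encoding): collect every known version, totally order them, and hand out increasing weights starting at zero so that no negative weights are needed.
```
# Simplified illustration of the weighting scheme described above.
def version_weights(versions):
    ordered = sorted(versions, reverse=True)  # assumption: newer is preferred
    return {version: weight for weight, version in enumerate(ordered)}

print(version_weights([(1, 2, 3), (1, 2, 11), (1, 2, 8)]))
# {(1, 2, 11): 0, (1, 2, 8): 1, (1, 2, 3): 2}
```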

Once PR binary graduation is deployed, the shared PR mirror will
contain binaries just built by a merged PR, before the subsequent
develop pipeline has had time to finish. Using the shared PR mirror
as a source of binaries will reduce the number of times we have to
rebuild the same full hash.

* CI for #25439 was not run on the latest merge commit, and fails after #25470
* Make it consistent

Make `spack.environment` not depend on `spack.cmd` (#25439)
* Refactor active environment getters
- Make `spack.environment.active_environment` a trivial getter for the active
environment, replacing `spack.environment.get_env` when the arguments are
not needed
- New method `spack.cmd.require_active_environment(cmd_name)` for
commands that require an environment (rather than abusing
get_env/active_environment)
- Clean up calling code to call spack.environment.active_environment or
spack.cmd.require_active_environment as appropriate
- Remove the `-e` parsing from `active_environment`, because `main.py` is
responsible for processing `-e` and already activates the environment.
- Move `spack.environment.find_environment` to
`spack.cmd.find_environment`, to avoid having spack.environment aware
of argparse.
- Refactor `spack install` command so argument parsing is all handled in the
command, no argparse in spack.environment or spack.installer
- Update documentation
* Python 2: toplevel import errors occur only with 'as ev'
In two files, `import spack.environment as ev` leads to errors.
These errors are not well understood ("'module' object has no attribute
'environment'"). All other files standardize on the above syntax.
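A hedged sketch of how the two getters described above might be used by commands; the command bodies are illustrative, only `active_environment` and `require_active_environment` come from this change.
```
import spack.cmd
import spack.environment as ev

def concretize(parser, args):
    # Commands that must run inside an environment fail with a clear error:
    env = spack.cmd.require_active_environment(cmd_name="concretize")
    env.concretize()

def find(parser, args):
    # Commands that merely adapt to an environment use the trivial getter:
    env = ev.active_environment()
    if env:
        print("In environment {0}".format(env.name))
```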

* Bootstrap clingo from binaries
* Move information on clingo binaries to a JSON file
* Add support to bootstrap on Cray
Bootstrapping on Cray currently requires swapping
the platform when looking for binaries, due to #22800.
* Add SHA256 verification for bootstrapped software
Use sha256 verification for binaries necessary to bootstrap
the concretizer and gpg for signature verification
* patchelf: use Spec._old_concretize() to bootstrap
As noted in #24450 we may happen to need the
concretizer when bootstrapping clingo. In that case
only the old concretizer is available.
* Add a schema for bootstrapping methods
Two fields have been added to bootstrap.yaml:
- "sources", which lists the methods available for
bootstrapping software
- "trusted", which records if a source is trusted or not
A subcommand has been added to "spack bootstrap" to list
the sources currently available.
* Methods used for bootstrapping are configurable from bootstrap:sources
The function that tries to ensure a given Python module
is importable now tries bootstrapping methods in the same
order as they are defined in `bootstrap.yaml`
* Allow trusting/untrusting bootstrapping methods
* Add binary tests for macOS, Ubuntu
* Add documentation
* Add a note on bash
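A minimal sketch of the lookup order described above; `try_source` is a hypothetical helper standing in for the real per-source bootstrapping code, and the per-source "trusted" flag is shown inline only for brevity.
```
import importlib

def ensure_module_importable(module, sources):
    # Try bootstrap sources in the order they appear under bootstrap:sources.
    for source in sources:
        if not source.get("trusted", False):
            continue  # untrusted methods are skipped
        try:
            try_source(module, source)  # hypothetical helper
            importlib.import_module(module)
            return
        except ImportError:
            continue
    raise RuntimeError("cannot bootstrap the '{0}' module".format(module))
```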

`markdown` is only supported since py-pygments@2.8.0:, see
https://github.com/pygments/pygments/commit/9647d2ae506b8e05ebabe9243df707bac901a6a3
Let's allow older versions again, too.

Add missing `self.` prefixes when calling `define_from_variant`
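For context, `define_from_variant` is a method on Spack's CMake-based package classes, so it has to be called through `self`. A tiny illustrative package.py fragment (the package, variant, and CMake option names are made up):
```
from spack import *

class Example(CMakePackage):
    variant("shared", default=True, description="Build shared libraries")

    def cmake_args(self):
        # Correct: the helper is an instance method, so call it via self.
        return [self.define_from_variant("BUILD_SHARED_LIBS", "shared")]
```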

Spack internally uses a patched version of `argparse`, mainly to backport Python 3 functionality
into Python 2. This PR makes it such that for the supported Python 3 versions we use `argparse`
from the standard Python library. This PR has been extracted from #25371 where it was needed
to be able to use recent versions of `pytest`.
* Fixed formatting issues when using a pristine argparse.py
* Fix error message for Python 3.X when missing positional arguments
* Account for the change of API in Python 3.7
* Layout multi-valued args into columns in error messages
* Make the transition seamless on develop even if a stale argparse.pyc is left in the external directory
* Be more defensive in case we can't remove the file.

Add link type to spack.yaml format
Add tests to verify link behavior is correct for installed files
for all three view types
Co-authored-by: vsoch <vsoch@users.noreply.github.com>

* Add to the error message to help determine failure source.
* Break up long line to keep under 80 chars.
Co-authored-by: Rob Groner <rug262@psu.edu>

The commands were deprecated in #7098 and have
been failing with an error message since then.
Clean up the code, since it is unlikely that anybody
is still using them.

Isn't installed on Alpine.

The star was not rendered in the docs.

Preferred providers had a non-zero weight because in an earlier formulation of the logic program that was needed to prefer external providers over default providers. With the current formulation for externals this is not needed anymore, so we can give a weight of zero to both default choices and providers that are externals. _Using zero ensures that we don't introduce any drift towards having fewer providers, which was happening when minimizing positive weights_.
Modifications:
- [x] Default weight for providers starts at 0 (instead of 10, needed before to prefer externals)
- [x] Rules to compute the `provider_weight` have been refactored. There are multiple possible weights for a given `Virtual`. Only one gets selected by the solver (the one that minimizes the objective function).
- [x] `provider_weight` now accounts for each different `Virtual`. Before, there was a single weight per provider, even if the package provided multiple virtuals.
* Give preferred providers a weight of zero
Preferred providers had a non-zero weight because in an earlier
formulation of the logic program that was needed to prefer
external providers over default providers.
With the current formulation for externals this is not needed anymore,
so we can give a weight of zero to default choices. Using zero
ensures that we don't introduce any drift towards having
fewer providers, which was happening when minimizing positive weights.
* Simplify how we compute weights for providers
Rewrite rules so that specific events (i.e. being
an external) unlock the possibility to use certain
weights. The weight being considered is then selected
by the minimization process to be the one that gives
the best score.
* Allow providers to have different weights for different virtuals
Before this change we didn't differentiate providers based on
the virtual they provide, which meant that packages providing
more than one virtual nonetheless had a single weight.
With this change there will be a weight per virtual.
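A toy model of the idea in plain Python (not the actual ASP encoding): each (provider, virtual) pair has its own set of candidate weights, conditions such as being an external unlock extra candidates, and minimization keeps the smallest one per virtual. All names and numbers are illustrative.
```
# Candidate weights per (provider, virtual) pair.
candidate_weights = {
    ("openblas", "blas"): [0],   # preferred provider of blas
    ("mkl", "blas"): [0, 10],    # external: weight 0 is unlocked as well
    ("mkl", "lapack"): [10],     # not preferred for lapack
}

def provider_weight(provider, virtual):
    # The minimization step: pick the best weight available for this pair.
    return min(candidate_weights[(provider, virtual)])

print(provider_weight("mkl", "blas"))    # 0
print(provider_weight("mkl", "lapack"))  # 10
```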

* Make spack env activate x idempotent
* Update lib/spack/spack/cmd/env.py

This is both a bugfix and a generalization of #25168. In #25168, we attempted to filter padding
*just* from the debug output of `spack.util.executable.Executable` objects. It turns out we got it
wrong -- filtering the command line string instead of the arg list resulted in output like this:
```
==> [2021-08-05-21:34:19.918576] ["'", '/', 'b', 'i', 'n', '/', 't', 'a', 'r', "'", ' ', "'", '-', 'o', 'x', 'f', "'", ' ', "'", '/', 't', 'm', 'p', '/', 'r', 'o', 'o', 't', '/', 's', 'p', 'a', 'c', 'k', '-', 's', 't', 'a', 'g', 'e', '/', 's', 'p', 'a', 'c', 'k', '-', 's', 't', 'a', 'g', 'e', '-', 'p', 'a', 't', 'c', 'h', 'e', 'l', 'f', '-', '0', '.', '1', '3', '-', 'w', 'p', 'h', 'p', 't', 'l', 'h', 'w', 'u', 's', 'e', 'i', 'a', '4', 'k', 'p', 'g', 'y', 'd', 'q', 'l', 'l', 'i', '2', '4', 'q', 'b', '5', '5', 'q', 'u', '4', '/', 'p', 'a', 't', 'c', 'h', 'e', 'l', 'f', '-', '0', '.', '1', '3', '.', 't', 'a', 'r', '.', 'b', 'z', '2', "'"]
```
Additionally, plenty of builds output padded paths elsewhere -- e.g., not just in command
arguments, but in other `tty` messages via `llnl.util.filesystem` and the like. `Executable`
isn't really the right place for this.
This PR reverts the changes to `Executable` and moves the filtering into `llnl.util.tty`. There is
now a context manager there that you can use to install a filter for all output.
`spack.installer.build_process()` now uses this context manager to make `tty` do path filtering
when padding is enabled.
- [x] revert filtering in `Executable`
- [x] add ability for `tty` to filter output
- [x] install output filter in `build_process()`
- [x] tests
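A rough sketch of the pattern described above; the names are illustrative rather than Spack's exact `llnl.util.tty` API.
```
import contextlib

_output_filters = []

def _filter(message):
    # Every tty message is passed through the installed filters.
    for fn in _output_filters:
        message = fn(message)
    return message

@contextlib.contextmanager
def output_filter(fn):
    # Install a filter for all tty output while the context is active.
    _output_filters.append(fn)
    try:
        yield
    finally:
        _output_filters.remove(fn)

# e.g. in build_process(), when install path padding is enabled:
# with output_filter(lambda msg: msg.replace(padding_string, "[padded]")):
#     ...  # everything printed via tty during the build is filtered
```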

`compare_specs()` had a `colorful` keyword argument, but everything else in
spack uses `color` for this.
- [x] rename the argument
- [x] make the default follow spack's `--color=always/never/auto` setting

17473a08ff merged `v0.16.1` back into `develop` but somehow lost the version bump. Fix it here.

Add a workflow to test bootstrapping clingo on
different platforms so that we can detect changes
that break it.
Compute `site_packages_dir` in `bootstrap.py` as it was
before #24095, until we figure out a better way to override
that attribute.

Padded install paths can get to be very long in the verbose install
output. This has to be filtered out by the Executable class, as it
generates these debug messages.
- [x] add ability to filter paths from Executable output.
- [x] add a context manager that can enable path filtering
- [x] make `build_process` in `installer.py` enable the path filtering
This should hopefully allow us to see most of the build output in
Gitlab pipeline builds again.

`build_process` has been around a long time but it's become a very large,
unwieldy method. It's hard to work with because it has a lot of local
variables that need to persist across all of the code.
- [x] To address this, convert it into its own `BuildInfoProcess` class.
- [x] Start breaking the method apart by factoring out the main
installation logic into its own function.

When context managers are used to save and restore values, we need to remember
to use try/finally around the yield in case an exception is thrown. Otherwise,
the cleanup will be skipped.
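For example (a generic illustration, not a specific Spack context manager):
```
import contextlib

@contextlib.contextmanager
def swap_attribute(obj, name, value):
    # Temporarily set an attribute to a new value, restoring it even on error.
    saved = getattr(obj, name)
    setattr(obj, name, value)
    try:
        yield
    finally:
        # Without try/finally, an exception raised inside the `with` block
        # would skip this restore and leak the temporary value.
        setattr(obj, name, saved)
```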

This fixes the bad bootstrap test for spack style, and it refines the
assertions on other failure cases.

sources (#24960)

- Change config from the undocumented `use_curl: true/false` to `url_fetch_method: urllib/curl`.
- Documentation of `url_fetch_method` in `defaults/config.yaml`
- Default fetch option explicitly set to `urllib` for users who may not have curl on their system
To upgrade from `use_curl` to `url_fetch_method`, run `spack config update config`

The output order for `spack diff` is nondeterministic for larger diffs -- if you
run it several times, it will not put the fields of the spec in the same order on
successive invocations.
This makes a few fixes to `spack diff`:
- [x] Implement the change discussed in https://github.com/spack/spack/pull/22283#discussion_r598337448
to make `AspFunction` comparable in and of itself and to eliminate the need for `to_tuple()`
- [x] Sort the lists of diff properties so that the output is always in the same order.
- [x] Make the output for different fields the same as what we use in the solver. Previously, we
would use `Type(value)` for non-string values and `value` for strings. Now we just use
the value. So the output looks a little cleaner:
```
== Old ========================== == New ====================
@@ node_target @@ @@ node_target @@
- gdbm Target(x86_64) - gdbm x86_64
+ zlib Target(skylake) + zlib skylake
@@ variant_value @@ @@ variant_value @@
- ncurses symlinks bool(False) - ncurses symlinks False
+ zlib optimize bool(True) + zlib optimize True
@@ version @@ @@ version @@
- gdbm Version(1.18.1) - gdbm 1.18.1
+ zlib Version(1.2.11) + zlib 1.2.11
@@ node_os @@ @@ node_os @@
- gdbm catalina - gdbm catalina
+ zlib catalina + zlib catalina
```
I suppose if we want to use `repr()` in the output we could do that consistently,
but we don't do that elsewhere -- the types of things in Specs are
all stringifiable, so the string and the name of the attribute (`version`, `node_os`,
etc.) are sufficient to know what they are.

When a spec fails to build on `develop`, instead of storing an empty file as the entry in the broken specs list, this change stores the full spec yaml as well as links to the failing pipeline and job.

A `spack diff` will take two specs, and then use the spack.solver.asp.SpackSolverSetup to generate
lists of facts about each (e.g., nodes, variants, etc.) and then take a set difference between the
two to show the user the differences.
Example output:
$ spack diff python@2.7.8 python@3.8.11
==> Warning: This interface is subject to change.
--- python@2.7.8/tsxdi6gl4lihp25qrm4d6nys3nypufbf
+++ python@3.8.11/yjtseru4nbpllbaxb46q7wfkyxbuvzxx
@@ variant_value @@
- python patches a8c52415a8b03c0e5f28b5d52ae498f7a7e602007db2b9554df28cd5685839b8
+ python patches 0d98e93189bc278fbc37a50ed7f183bd8aaf249a8e1670a465f0db6bb4f8cf87
@@ version @@
- openssl Version(1.0.2u)
+ openssl Version(1.1.1k)
- python Version(2.7.8)
+ python Version(3.8.11)
Currently this uses diff-like output, but we will attempt to improve on it in the future.
One use case for `spack diff` is when a user needs to disambiguate between installs and
cannot remember how two of them differ. The command can also output `--json` for
analysis-style use cases where we want to save complete data with all
diffs and the intersection. However, the command is really intended for command-line
use, and we will likely have an analyzer better suited to saving data.
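A simplified sketch of that comparison; `facts_for()` is a hypothetical stand-in for the real `SpackSolverSetup` plumbing.
```
def diff_specs(spec_a, spec_b):
    a = set(facts_for(spec_a))  # facts_for() is hypothetical here
    b = set(facts_for(spec_b))
    return {
        "intersect": sorted(a & b),  # facts shared by both specs
        "a_not_b": sorted(a - b),    # rendered with a leading '-'
        "b_not_a": sorted(b - a),    # rendered with a leading '+'
    }
```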
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>

When a develop pipeline successfully finishes a `spack install`, check if
the spec that was just built is on the broken-specs list. If so, remove it.

* Catch ConnectionError from CDash reporter
Catch ConnectionError when attempting to upload the results of `spack install`
to CDash. This follows in the spirit of #24299. We do not want `spack install`
to exit with a non-zero status when something goes wrong while attempting to
report results to CDash.
* Catch HTTP Error 400 (Bad Request) in relate_cdash_builds()
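A minimal sketch of the defensive pattern described here; `upload_report` stands in for the real CDash upload call.
```
import llnl.util.tty as tty

def report_to_cdash(report):
    try:
        upload_report(report)  # hypothetical upload helper
    except ConnectionError as err:
        # Reporting problems should not turn a successful install into a failure.
        tty.warn("Could not upload results to CDash: {0}".format(err))
```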

`spack style` previously used a Travis CI variable to figure out
what the base branch of a PR was, and this was apparently also set
on `develop`. We switched to `GITHUB_BASE_REF` to support GitHub
Actions, but it looks like this is set to `""` in pushes to develop,
so `spack style` breaks there.
This PR does two things:
- [x] Remove `GITHUB_BASE_REF` knowledge from `spack style` entirely
- [x] Handle `GITHUB_BASE_REF` in style scripts instead, and explicitly
pass the base ref if it is present, but don't otherwise.
This makes `spack style` *not* dependent on the environment and fixes
handling of the base branch in the right place.

This adds a `--root` option so that `spack style` can check style for
a spack instance other than its own.
We also change the inner workings of `spack style` so that `--config FILE`
(and similar options for the various tools) is used. This ensures
that when `spack style` runs, it always uses the config from the running spack,
and does *not* pick up configuration from the external root.
- [x] add `--root` option to `spack style`
- [x] add `--config` (or similar) option when invoking style tools
- [x] add a test that verifies we can check an external instance

Intel oneAPI installs maintain a lock file in XDG_RUNTIME_DIR,
which by default exists in /tmp (and is shared by all component
installs). This prevented multiple oneAPI components from being
installed in parallel. This commit sets XDG_RUNTIME_DIR to exist
within Spack's installation Stage, which allows multiple components
to be installed at the same time.
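A hedged sketch of the approach; in a real recipe this method lives on the package class, and the directory name is illustrative.
```
import os

def setup_build_environment(self, env):
    # Point XDG_RUNTIME_DIR into this package's own stage so parallel
    # component installs stop sharing a lock file under /tmp.
    runtime_dir = os.path.join(self.stage.path, "xdg-runtime")
    os.makedirs(runtime_dir, exist_ok=True)
    env.set("XDG_RUNTIME_DIR", runtime_dir)
```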

This uses our bootstrapping logic to automatically install dependencies for
`spack style`. Users should no longer have to pre-install all of the tools
(`isort`, `mypy`, `black`, `flake8`). The command will do it for them.
- [x] add logic to bootstrap specs with specific version requirements in `spack style`
- [x] remove style tools from CI requirements (to ensure we test bootstrapping)
- [x] rework dependencies for `mypy` and `py-typed-ast`
- `py-typed-ast` needs to be a link dependency
- it needs to be at 1.4.1 or higher to work with python 3.9
Signed-off-by: vsoch <vsoch@users.noreply.github.com>

* build(deps): bump codecov/codecov-action from 1 to 2.0.1
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 1 to 2.0.1.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/master/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v1...v2.0.1)
---
updated-dependencies:
- dependency-name: codecov/codecov-action
dependency-type: direct:production
update-type: version-update:semver-major
...
Signed-off-by: dependabot[bot] <support@github.com>
* Update arguments to codecov action
* Try to delete the symbolic link to the root folder
Hopefully this gets rid of the ELOOP error
* Also delete it for the shell tests and macOS
* Bump to v2.0.2
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>