This PR adds a new audit, specifically for Spack package homepage urls (and eventually
other kinds, I suspect), to see if there is an http address that can be changed to https.
Usage is as follows:
```bash
$ spack audit packages-https <package>
```
And in list view:
```bash
$ spack audit list
generic:
Generic checks relying on global variables
configs:
Sanity checks on compilers.yaml
Sanity checks on packages.yaml
packages:
Sanity checks on specs used in directives
packages-https:
Sanity checks on https checks of package urls, etc.
```
I think it would be unwise to include this with the `packages` checks, because it makes network requests and takes a long time when run for all packages. I also like the idea of more narrowly scoped checks; there will likely be other http/https addresses within a package that we eventually check. For now, there are two error cases. The first is when an https url is tried but there is some SSL error (or another error that means we cannot update to https):
```bash
$ spack audit packages-https zoltan
PKG-HTTPS-DIRECTIVES: 1 issue found
1. Error with attempting https for "zoltan":
<urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: Hostname mismatch, certificate is not valid for 'www.cs.sandia.gov'. (_ssl.c:1125)>
```
This is either not fixable, or could be fixed with a change to the url, or (better) by contacting the site owners to ask about a certificate or similar.
The second case is an http url that needs to be https, which is a huge issue now, but hopefully not after this Spack PR:
```bash
$ spack audit packages-https xman
Package "xman" uses http but has a valid https endpoint.
```
And then when a package is fixed:
```bash
$ spack audit packages-https zlib
PKG-HTTPS-DIRECTIVES: 0 issues found.
```
And that's mostly it. :)
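For the curious, here is a minimal sketch of the kind of check the audit performs; the helper name and logic are hypothetical, not the actual audit code:
```python
import urllib.request

def check_https(pkg_name, url):
    """Sketch: flag an http homepage that has a working https equivalent."""
    if not url.startswith("http://"):
        return None  # nothing to do
    https_url = "https://" + url[len("http://"):]
    try:
        urllib.request.urlopen(https_url, timeout=10)
        return 'Package "{0}" uses http but has a valid https endpoint.'.format(pkg_name)
    except Exception as err:
        return 'Error with attempting https for "{0}": {1}'.format(pkg_name, err)
```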
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
|
|
Sometimes users need to be able to override the conflicts in `CudaPackage`. This introduces a variant to enable/disable them.
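In package terms, the pattern looks roughly like this sketch inside the `CudaPackage` mixin (the variant name and the particular conflict shown are illustrative, not necessarily the ones added):
```python
# Hypothetical escape hatch: let users opt out of the built-in conflicts.
variant('allow-unsupported-compilers', default=False,
        description='Allow unsupported compiler versions with CUDA')

# Each existing conflict gains a guard, so it only fires while the
# variant is off:
conflicts('%gcc@11:', when='+cuda ~allow-unsupported-compilers')
```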
|
|
|
|
|
|
* Add a __reduce__ method to Spec
fixes #23892
The recursion limit seems to be hit because of the default
way in which a Spec is serialized, which follows all of its
attributes. It's still not clear to me why this is related
to being in an environment, but in any case we already have
methods to serialize Specs to disk in JSON and YAML format.
Here we use them to pickle a Spec instance too.
* Downgrade to build-hash
Hopefully nothing will change the package in
between serializing the spec and sending it
to the child process.
* Add support for Python 2
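A self-contained toy illustrating the mechanism (Spack's actual `to_json`/`from_json` are much richer than this stand-in):
```python
import json

class Spec(object):
    def __init__(self, data):
        self.data = data  # stand-in for the spec's node dictionary

    def to_json(self):
        return json.dumps(self.data)

    @staticmethod
    def from_json(text):
        return Spec(json.loads(text))

    def __reduce__(self):
        # Pickle through the JSON form instead of walking every attribute,
        # which avoids hitting Python's recursion limit on deep DAGs.
        return (Spec.from_json, (self.to_json(),))
```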
|
|
(#25583)
* Make sure PackageInstaller does not remove the just-restored
install dir after failure in spack install --overwrite
* Remove cryptic error message and rethrow actual error
|
|
|
|
The gcc compiler can be configured to use `ld.gold` by default. It will
then call `ld.gold` explicitly when linking. When it does, Spack needs to
have an `ld.gold` wrapper in PATH to inject rpath link flags, etc.
I also wouldn't be surprised to see some package calling `ld.gold`
directly.
As with `ld.gold`, the same argument could be made that we want to
support any package that calls `ld.lld`.
|
|
* Add a __reduce__ method to SpecBuildInterface
This class was confusing pickle during serialization,
due to its scary nature of being an object that disguises
itself as another type.
* Add more macOS tests, switch them to clingo
* Fix condition syntax
* Remove Python v3.6 and v3.9 from the macOS matrix
|
|
* Make sure that `spack -e. env activate b` and `spack -e. env deactivate` raise an error
* Add a test
|
|
* Conditionally remove 'context' from kwargs in _urlopen
Previously, 'context' was always purged from kwargs in _urlopen
to accommodate the varying support for 'context' across versions
of urllib. This fix tries to use 'context' first, and only
removes it and retries if an exception is thrown.
* Specify error type in try statement in _urlopen
Catch TypeError when checking whether 'context' is accepted
by _urlopen. Also, if the first try fails, check that 'context'
actually appears in the error message before removing it from kwargs.
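A minimal sketch of the retry logic, assuming Python 3's `urllib.request` (Spack's actual `_urlopen` also has to handle Python 2):
```python
import urllib.request

def _urlopen(req, **kwargs):
    """Sketch: pass 'context' through, retrying without it when the
    underlying urlopen doesn't accept that keyword."""
    try:
        return urllib.request.urlopen(req, **kwargs)
    except TypeError as e:
        # Only strip 'context' if the TypeError is actually about it.
        if "context" in kwargs and "context" in str(e):
            del kwargs["context"]
            return urllib.request.urlopen(req, **kwargs)
        raise
```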
|
|
This is a direct follow-up to #13557: it caches additional attributes, added in #24095, that are expensive to compute. I had to reopen #25556 as another PR to invalidate the GitLab CI cache; see #25556 for prior discussion.
### Before
```console
$ time spack env activate .
real 2m13.037s
user 1m25.584s
sys 0m43.654s
$ time spack env view regenerate
==> Updating view at /Users/Adam/.spack/.spack-env/view
real 16m3.541s
user 10m28.892s
sys 4m57.816s
$ time spack env deactivate
real 2m30.974s
user 1m38.090s
sys 0m49.781s
```
### After
```console
$ time spack env activate .
real 0m8.937s
user 0m7.323s
sys 0m1.074s
$ time spack env view regenerate
==> Updating view at /Users/Adam/.spack/.spack-env/view
real 2m22.024s
user 1m44.739s
sys 0m30.717s
$ time spack env deactivate
real 0m10.398s
user 0m8.414s
sys 0m1.630s
```
Fixes #25555
Fixes #25541
* Speedup environment activation, part 2
* Only query distutils a single time
* Fix KeyError bug
* Make vermin happy
* Manual memoize
* Add comment on cross-compiling
* Use platform-specific include directory
* Fix multiple bugs
* Fix python_inc discrepancy
* Fix import tests
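The "manual memoize" bullet refers to caching an expensive, argument-free lookup by hand (on Python 2, `functools.lru_cache` is unavailable). A sketch with a hypothetical helper name:
```python
_python_inc_cache = None

def python_inc():
    """Sketch: compute the Python include directory once and reuse it."""
    global _python_inc_cache
    if _python_inc_cache is None:
        import distutils.sysconfig
        _python_inc_cache = distutils.sysconfig.get_python_inc()
    return _python_inc_cache
```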
|
|
* Set pubkey trust to ultimate during `gpg trust`
Tries to solve the same problem as #24760 without suppressing stderr
from gpg commands.
This PR makes every imported key trusted in the gpg database.
Note: I've outlined
[here](https://github.com/spack/spack/pull/24760#issuecomment-883183175)
that gpg's trust model makes sense, since we cannot blindly trust a
random public key we download from a binary cache.
* Fix test
|
|
Fixes #25603
This commit adds a new context manager to temporarily
deactivate active environments. This context manager
is used when setting up bootstrapping configuration to
make sure that the current environment is not affected
by operations on the bootstrap store.
* Preserve exit code 1 if nothing is found
* Use context manager for the environment
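A minimal, self-contained sketch of such a context manager; the `_active` list is a toy stand-in for Spack's actual environment state:
```python
import contextlib

_active = []  # toy stand-in for Spack's active-environment state

@contextlib.contextmanager
def deactivate_environment():
    """Sketch: suspend any active environment for the duration of a block."""
    saved = list(_active)
    del _active[:]           # deactivate whatever was active
    try:
        yield
    finally:
        _active[:] = saved   # restore on exit, even if an exception occurred
```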
|
|
|
|
fixes #21643
As far as I can see the double loop is not needed, since
"patch" is never used and the items in the list are tuples
of three values.
|
|
This object was introduced in #18124, and was later superseded by
#18205, which removed any use of the object.
|
|
This commit adds a regression test for version selection
with preferences in `packages.yaml`. Before PR 25585 we
used negative weights in a minimization to select the
optimal version. This could lead to situations where a
dependency, if preferred in packages.yaml, makes the
version score of its dependents "better".
|
|
PackageInstaller and Package.installed disagree over what it means
for a package to be installed: PackageInstaller believes it should be
enough for a database entry to exist, whereas Package.installed
requires a database entry & a prefix directory.
This leads to the following niche issue:
* a develop spec in an environment is successfully installed
* then somehow its install prefix is removed (e.g. through a bug fixed
in #25583)
* you modify the sources and reinstall the environment
1. spack checks pkg.installed and realizes the develop spec is NOT
installed, therefore it doesn't need to have 'overwrite: true'
2. the installer gets the build task and checks the database and
realizes the spec IS installed, hence it doesn't have to install it.
3. the develop spec is not rebuilt.
The solution is to make PackageInstaller and pkg.installed agree over
what it means to be installed, and this PR does that by dropping the
prefix directory check from pkg.installed, so that it only checks the
database.
As a result, spack will create a build task with overwrite: true for
the develop spec, and the installer in fact handles overwrite requests
fine even if the install prefix doesn't exist (it just does a normal
install).
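A toy sketch of the agreed-upon definition (hypothetical structures, not Spack's actual database API):
```python
def installed(spec_name, database):
    """Sketch: a spec is installed iff it has a database entry; the
    prefix-directory existence check is intentionally gone."""
    record = database.get(spec_name)  # e.g. a dict of install records
    return record is not None and record.get("installed", False)
```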
|
|
see #25563
When we have a concrete environment and we ask to install a
concrete spec from a file, Spack currently returns a list of
all the specs that match the argument's DAG hash.
Instead we want to compare build hashes, which also account
for build-only dependencies.
|
|
#25303 filtered padding from build output, but it's still there in binary install/relocate output,
so our CI logs are still quite long and frequently hit the limit.
- [x] add context handler from #25303 to buildcache installation as well
|
|
This allows you to run `spack graph --installed` from within an environment and get a dot graph of
its concrete specs.
- [x] make `spack graph -i` environment-aware
- [x] add code to the generated dot graph to ensure roots have min rank (i.e., they're all at the
top or left of the DAG)
|
|
Bootstrapping clingo on macOS on `develop` gives errors like this:
```
==> Error: RuntimeError: Unable to locate python command in /System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/bin
/Users/gamblin2/Workspace/spack/var/spack/repos/builtin/packages/python/package.py:662, in command:
659 return Executable(path)
660 else:
661 msg = 'Unable to locate {0} command in {1}'
>> 662 raise RuntimeError(msg.format(self.name, self.prefix.bin))
```
On macOS, `python` is laid out differently. In particular, `sys.executable` is here:
```console
Python 2.7.16 (default, May 8 2021, 11:48:02)
[GCC Apple LLVM 12.0.5 (clang-1205.0.19.59.6) [+internal-os, ptrauth-isa=deployment-target-based]] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.executable
'/System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python'
```
Based on that, you'd think that
`/System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents` would be
where you'd look for a `bin` directory, but you (and Spack) would be wrong:
```console
$ ls /System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/
Info.plist MacOS/ PkgInfo Resources/ _CodeSignature/ version.plist
```
You need to look in `sys.exec_prefix`:
```
>>> sys.exec_prefix
'/System/Library/Frameworks/Python.framework/Versions/2.7'
```
which looks much more like a standard prefix, with understandable `bin`, `lib`, and `include`
directories:
```console
$ ls /System/Library/Frameworks/Python.framework/Versions/2.7
Extras/ Mac/ Resources/ bin/ lib/
Headers@ Python* _CodeSignature/ include/
$ ls -l /System/Library/Frameworks/Python.framework/Versions/2.7/bin/python
lrwxr-xr-x 1 root wheel 7B Jan 1 2020 /System/Library/Frameworks/Python.framework/Versions/2.7/bin/python@ -> python2
```
- [x] change `bootstrap.py` to use the `sys.exec_prefix` as the external prefix, instead of just
getting the parent directory of the executable.
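A quick way to see the difference, runnable on any Python (the paths in the comments are from the macOS system Python above):
```python
import sys

# Parent of the interpreter binary: an app bundle dir on macOS, no bin/.
print(sys.executable)   # .../Python.app/Contents/MacOS/Python
# The real installation prefix, with bin/, lib/, include/ underneath.
print(sys.exec_prefix)  # .../Python.framework/Versions/2.7
```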
|
|
|
|
|
|
This adds lockfile tracking to Spack's lock mechanism, so that we ensure that there
is only one open file descriptor per inode.
The `fcntl` locks that Spack uses are associated with an inode and a process.
This is convenient, because if a process exits, it releases its locks.
Unfortunately, this also means that if you close a file, *all* locks associated
with that file's inode are released, regardless of whether the process has any
other open file descriptors on it.
Because of this, we need to track open lock files so that we only close them when
a process no longer needs them. We do this by tracking each lockfile by its
inode and process id. This has several nice properties:
1. Tracking by pid ensures that, if we fork, we don't inadvertently track the parent
process's lockfiles. `fcntl` locks are not inherited across forks, so we'll
just track new lockfiles in the child.
2. Tracking by inode ensures that references are counted per inode, and that we don't
inadvertently close a file whose inode still has open locks.
3. Tracking by both pid and inode ensures that we only open lockfiles the minimum
number of times necessary for the locks we have.
Note: as mentioned elsewhere, these locks aren't thread safe -- they're designed to
work in Python and assume the GIL.
Tasks:
- [x] Introduce an `OpenFileTracker` class to track open file descriptors by inode.
- [x] Reference-count open file descriptors and only close them if they're no longer
needed (this avoids inadvertently releasing locks that should not be released).
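A simplified sketch of the reference-counting idea (hypothetical class shape; the real `OpenFileTracker` also handles lockfile creation, permissions, and more):
```python
import os

class OpenFileTracker(object):
    """Sketch: keep one open file object per (pid, inode), refcounted."""

    def __init__(self):
        self._refs = {}  # (pid, inode) -> [file object, refcount]

    def get_fh(self, path):
        key = (os.getpid(), os.stat(path).st_ino)
        entry = self._refs.get(key)
        if entry is None:
            entry = [open(path, 'r+'), 0]
            self._refs[key] = entry
        entry[1] += 1
        return entry[0]

    def release_fh(self, path):
        key = (os.getpid(), os.stat(path).st_ino)
        entry = self._refs[key]
        entry[1] -= 1
        if entry[1] == 0:
            # Last user of this inode in this process: safe to close now
            # without dropping fcntl locks someone else still needs.
            entry[0].close()
            del self._refs[key]
```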
|
|
* Fix bindist network issues
* Another one using the network
|
|
This commit reworks version facts so that:
1. All the information on versions is collected
before emitting the facts
2. The same kind of atom is emitted for versions
stemming from different origins (package.py
vs. packages.yaml)
In the end all the possible versions for a given
package are totally ordered, and they are given
different and increasing weights starting from zero.
This refactor allows us to avoid using negative
weights, which in some configurations may make a
parent node's score "better" and lead to unexpected
"optimal" results.
|
|
|
|
|
|
|
|
Once PR binary graduation is deployed, the shared PR mirror will
contain binaries just built by a merged PR, before the subsequent
develop pipeline has had time to finish. Using the shared PR mirror
as a source of binaries will reduce the number of times we have to
rebuild the same full hash.
|
|
* CI for #25439 was not run on the latest merge commit, and fails after #25470
* Make it consistent
|
|
not depend on spack.cmd. (#25439)
* Refactor active environment getters
- Make `spack.environment.active_environment` a trivial getter for the active
environment, replacing `spack.environment.get_env` when the arguments are
not needed
- New method `spack.cmd.require_active_environment(cmd_name)` for
commands that require an environment (rather than abusing
get_env/active_environment)
- Clean up calling code to call spack.environment.active_environment or
spack.cmd.require_active_environment as appropriate
- Remove the `-e` parsing from `active_environment`, because `main.py` is
responsible for processing `-e` and already activates the environment.
- Move `spack.environment.find_environment` to
`spack.cmd.find_environment`, to avoid having spack.environment aware
of argparse.
- Refactor `spack install` command so argument parsing is all handled in the
command, no argparse in spack.environment or spack.installer
- Update documentation
* Python 2: toplevel import errors only with 'as ev'
In two files, `import spack.environment as ev` leads to errors.
These errors are not well understood ("'module' object has no
attribute 'environment'"). All other files standardize on the
above syntax.
|
|
|
|
* Bootstrap clingo from binaries
* Move information on clingo binaries to a JSON file
* Add support to bootstrap on Cray
Bootstrapping on Cray currently requires swapping
the platform when looking for binaries, due
to #22800.
* Add SHA256 verification for bootstrapped software
Use sha256 verification for binaries necessary to bootstrap
the concretizer and gpg for signature verification
* patchelf: use Spec._old_concretize() to bootstrap
As noted in #24450 we may happen to need the
concretizer when bootstrapping clingo. In that case
only the old concretizer is available.
* Add a schema for bootstrapping methods
Two fields have been added to bootstrap.yaml:
"sources" which lists the methods available for
bootstrapping software
"trusted" which records if a source is trusted or not
A subcommand has been added to "spack bootstrap" to list
the sources currently available.
* Methods used for bootstrapping are configurable from bootstrap:sources
The function that tries to ensure a given Python module
is importable now tries bootstrapping methods in the same
order as they are defined in `bootstrap.yaml`
* Allow trusting/untrusting bootstrapping methods
* Add binary tests for macOS, Ubuntu
* Add documentation
* Add a note on bash
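A sketch of the lookup order described above (all names are hypothetical; each source stands in for an entry in `bootstrap.yaml`, here modeled as a `(trusted, install)` pair):
```python
import importlib

def ensure_module_importable(name, sources):
    """Sketch: try bootstrapping methods in bootstrap.yaml order,
    skipping sources the user has not marked as trusted."""
    try:
        return importlib.import_module(name)  # already importable?
    except ImportError:
        pass
    for trusted, install in sources:  # install: callable(name) -> None
        if not trusted:
            continue
        try:
            install(name)  # e.g. fetch verified binaries, or build
            return importlib.import_module(name)
        except Exception:
            continue  # fall through to the next configured method
    raise ImportError("could not bootstrap module {0}".format(name))
```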
|
|
`markdown` is only supported since py-pygments@2.8.0:, see
https://github.com/pygments/pygments/commit/9647d2ae506b8e05ebabe9243df707bac901a6a3
Let's allow older versions again.
|
|
Add missing `self.` prefixes when calling `define_from_variant`
|
|
|
|
Spack is internally using a patched version of `argparse` mainly to backport Python 3 functionality
into Python 2. This PR makes it such that for the supported Python 3 versions we use `argparse`
from the standard Python library. This PR has been extracted from #25371 where it was needed
to be able to use recent versions of `pytest`.
* Fixed formatting issues when using a pristine argparse.py
* Fix error message for Python 3.X when missing positional arguments
* Account for the change of API in Python 3.7
* Layout multi-valued args into columns in error messages
* Seamless transition in develop if argparse.pyc is in external
* Be more defensive in case we can't remove the file.
|
|
Add link type to spack.yaml format
Add tests to verify link behavior is correct for installed files
for all three view types
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
|
|
* Add to the error message to help determine failure source.
* Break up long line to keep under 80 chars.
Co-authored-by: Rob Groner <rug262@psu.edu>
|
|
These commands were deprecated in #7098, and have
been failing with an error message since then.
This cleans up the code, since it is unlikely that
anybody is still using them.
|
|
It isn't installed on Alpine.
|
|
|
|
|
|
The star was not rendered in the docs.
|