|
Allows spack.config InternalConfigScope and Configuration.set() to
handle keys with a trailing ':', indicating replacement rather than
merge behavior with respect to lower-priority scopes.
Lists may now be replaced rather than merged (this behavior was
previously only available for dictionaries).
This commit adds tests for the new behavior.
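A minimal sketch of the distinction from Python, assuming the path
syntax below matches this commit's description (the `modules:enable`
key is only an example):
```
import spack.config

# Default: the new list merges with values from lower-priority scopes.
spack.config.set('modules:enable', ['tcl'])

# Trailing ':' on the key: the new list replaces the lower-priority
# value outright instead of merging with it.
spack.config.set('modules:enable:', ['lmod'])
```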
|
|
* Recover coverage from subprocesses during unit tests
|
|
|
|
|
|
Testing the install StopIteration exception resulted in an attribute error:
AttributeError: 'StopIteration' object has no attribute 'message'
This PR adds a unit test and resolves that error.
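For context, a minimal reproduction of the Python 3 behavior and one
common workaround (not necessarily the exact fix in this PR):
```
# Python 3 exceptions carry no `.message` attribute, so code written
# against the old Python 2 convention raises AttributeError instead.
try:
    raise StopIteration('mock installation failure')
except StopIteration as e:
    msg = getattr(e, 'message', str(e))  # falls back to str() on Python 3
    print(msg)
```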
|
|
This recovers the old behavior of replace_prefix_bin, modified to
work with ELF binaries by prefixing os.sep characters to the new
prefix until its length matches that of the old prefix.
|
|
Removed the code that was converting the old index.yaml format into
index.json. Since the change happened in #2189 it should be
considered safe to drop this (untested) code.
|
|
* use os.sep instead of null byte to pad replacement paths in binaries
* Prefix the padding os.sep's (see the sketch below)
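A sketch of the padding idea, with names of my own choosing (not
Spack's actual replace_prefix_bin):
```
import os

def pad_replacement_prefix(old_prefix: bytes, new_prefix: bytes) -> bytes:
    # Patching strings inside a binary must not change their length, so
    # the shorter new prefix is left-padded with path separators; runs
    # of leading '/' are harmless in a path.
    assert len(new_prefix) <= len(old_prefix), "new prefix must be shorter"
    pad = len(old_prefix) - len(new_prefix)
    return os.sep.encode('utf-8') * pad + new_prefix

print(pad_replacement_prefix(b'/old/long/prefix', b'/new/prefix'))
```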
|
|
Spack's fflags are meant for both f77 and fc. Therefore, they must
be passed as FFLAGS and FCFLAGS to the configure scripts of
Autotools-based packages.
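A hedged sketch of the effect, using os.environ as a stand-in for the
package build environment:
```
import os

fflags = ' '.join(['-O2', '-fPIC'])  # stand-in for the spec's fflags
os.environ['FFLAGS'] = fflags    # consumed by configure's F77 checks
os.environ['FCFLAGS'] = fflags   # consumed by configure's FC checks
```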
|
|
The distributed build PR (#13100) did not check the install status of dependencies when using the `--only package` option, so it would refuse to install a package, claiming it had uninstalled dependencies whether or not that was the case.
- [x] add install status checks for the `--only package` case.
- [x] add initial set of tests
|
|
* Fix detection of redhat enterprise compute node
* Add comma
* Remove space
|
|
This allows packages to override the global connect_timeout.
|
|
This allows us to support higher-level concepts such as 'cookie' and
'timeout' without users having to specify curl options.
|
|
connect_timeout can be used to increase the time Spack waits for the
server to answer. This can be used to work around slow connections or
servers.
Fixes #14700
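A sketch of reading the value from Python, assuming the config path
follows the commit text (the default shown is illustrative):
```
import spack.config

# Packages may override this global value (per the commits above).
timeout = spack.config.get('config:connect_timeout', default=10)
```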
|
|
* CudaPackage: add support for Tesla K80 and older CUDA
* Flake8 fixes
* Fix cuda_arch when no arch is set
* Fine-tune cuda_arch=37,50 supported CUDA versions
* CUDA 6.5+ supports SM_37
* Add @svenevs as a maintainer
|
|
|
|
The new build process, introduced in #13100, relies on a spec's dependents in addition to their dependencies. Loading a spec from a YAML file was not initializing the dependents.
- [x] populate dependents when loading from yaml
|
|
|
|
This PR corrects a few minor things and adds a note about colorized
output.
|
|
buildcaches. (#15090)
* Make -d directory a required option. Print messages about where buildcaches will be written.
* Add mutually exclusive required options
* spack commands --update-completion
* Apply @opadron's patch
* Update share/spack/spack-completion.bash
* Incorporate @opadron's suggestions
|
|
* add --only option to buildcache create cmd
replaces the --no-deps option
|
|
* remove catch-all exception handling in buildcache command
* fix test
|
|
buildcaches on linux (#15192)
* Buildcache command: add install option -o/--otherarch
This will allow matching specs from other archs, for example
installing macOS buildcaches on Linux hosts.
* spack commands --update-completion
|
|
|
|
args.specs is a list, which results in output like this:
```
eval `spack load --sh ['libxml2', 'xz']`
```
We want this instead:
```
eval `spack load --sh libxml2 xz`
```
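The fix presumably amounts to joining the list before interpolation; a
minimal sketch:
```
specs = ['libxml2', 'xz']
# Join the names so the emitted shell command reads
# `spack load --sh libxml2 xz` rather than the list's repr().
print('spack load --sh ' + ' '.join(specs))
```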
|
|
This change stores packages' configure arguments during build and makes
use of them while refreshing module files. This fixes problems such as in
#10716.
|
|
|
|
|
|
A generic python dependency is already added implicitly by the
PythonPackage class.
|
|
* Emit a sensible error message if compiler's target is overly specific
fixes #14798
fixes #13733
Compiler specifications require a generic architecture family as
their target. This commit improves the error message that is
displayed to users if they edit compilers.yaml and use an overly
specific name.
|
|
Update the copyright for `installer.py`.
|
|
Restore package-related unsigned binary changes from PR 11107
|
|
The hashing logic looks for function calls that are Spack directives.
It expects that when a Spack directive is used, it is referenced
directly by name, and that the directive function is not itself
retrieved by calling another function. When the hashing logic
encountered a function call where the function was determined
dynamically, it would fail (attempting to access a name attribute
that does not exist in this case).
This updates the hashing logic to filter out function calls where the
function is determined dynamically when looking for uses of Spack
directives.
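A sketch of such a filter using the standard ast module (illustrative,
not Spack's actual hashing code):
```
import ast

def directive_calls(source: str, directives: set):
    # Keep only calls referenced directly by name (e.g. depends_on(...));
    # dynamically determined functions (e.g. lookup()(...)) are filtered
    # out, since node.func has no name to inspect.
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in directives:
                yield node
```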
|
|
* Use get_spec in relocated _try_install_from_binary_cache
|
|
Spack now requires an exact match of the compiler version
requested by the user. A loose constraint can be given to
Spack by using a version range instead of a concrete version
(e.g. 4.5: instead of 4.5).
|
|
Sometimes one needs to preserve the relative order of mtimes on
installed files, so it's better to just copy over all the metadata
from the source tree to the install tree. If permissions need
fixing, that will be done afterwards anyway.
One major use case is resource()s:
they're unpacked in one place and then copied to their
final place using install_tree(). If the resource is a
source tree using autoconf/automake, resetting mtimes
incorrectly might force unwanted autoconf/etc. calls.
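A minimal sketch of a metadata-preserving copy (not Spack's actual
install_tree):
```
import shutil

def copy_tree_with_metadata(src: str, dest: str) -> None:
    # shutil.copy2 carries metadata (mtimes included) along with file
    # contents, so the relative order of timestamps survives the copy;
    # permissions can still be fixed afterwards.
    shutil.copytree(src, dest, copy_function=shutil.copy2)
```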
|
|
|
|
If the mimetype returned from `file -h -b --mime-type` contains slashes
in its subtype, the tuple returned from `spack.relocate.mime_type` will
have more than two elements, which leads to errors.
Change-Id: I31de477e69f114ffdc9ae122d00c573f5f749dbb
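A sketch of the likely shape of the fix, splitting on the first slash
only (hypothetical, not the actual spack.relocate code):
```
def mime_type(file_output: str):
    # partition splits on the first '/' only, so a subtype containing
    # slashes still yields exactly two fields.
    mtype, _, subtype = file_output.strip().partition('/')
    return mtype, subtype

assert mime_type('text/x-lisp/common-lisp') == ('text', 'x-lisp/common-lisp')
```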
|
|
This reproduces the behavior expected by the patchelf_is_relocatable test.
|
|
Fixes #9394
Closes #13217.
## Background
Spack provides the ability to enable/disable parallel builds through two options: package `parallel` and configuration `build_jobs`. This PR changes the algorithm to allow multiple, simultaneous processes to coordinate the installation of the same spec (and specs with overlapping dependencies).
The `parallel` (boolean) property sets the default for its package, though the value can be overridden in the `install` method.
Spack's current parallel builds are limited to build tools supporting `jobs` arguments (e.g., `Makefile`s). The number of jobs actually used is calculated as `min(config:build_jobs, # cores, 16)`, which can be overridden in the package or on the command line (i.e., `spack install -j <# jobs>`).
This PR adds support for distributed (single- and multi-node) parallel builds. The goals of this work include improving the efficiency of installing packages with many dependencies and reducing the repetition associated with concurrent installations of (dependency) packages.
## Approach
### File System Locks
Coordination between concurrent installs of overlapping packages to a Spack instance is accomplished through bottom-up dependency DAG processing and file system locks. The runs can be a combination of interactive and batch processes affecting the same file system. Exclusive prefix locks are required to install a package while shared prefix locks are required to check if the package is installed.
Failures are communicated through a separate exclusive prefix failure lock, for concurrent processes, combined with a persistent store, for separate, related build processes. The resulting file contains the failing spec to facilitate manual debugging.
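A sketch of the two lock modes with POSIX advisory locks (the path and
layout here are illustrative, not Spack's):
```
import fcntl

with open('/tmp/example-prefix.lock', 'w') as lock_file:
    fcntl.lockf(lock_file, fcntl.LOCK_EX)  # exclusive: installing the prefix
    # ... install into the prefix ...
    fcntl.lockf(lock_file, fcntl.LOCK_UN)

with open('/tmp/example-prefix.lock', 'r') as lock_file:
    fcntl.lockf(lock_file, fcntl.LOCK_SH)  # shared: just checking install status
    # ... read-only check ...
    fcntl.lockf(lock_file, fcntl.LOCK_UN)
```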
### Priority Queue
Management of dependency builds changed from reliance on recursion to use of a priority queue where the priority of a spec is based on the number of its remaining uninstalled dependencies.
Using a queue required a change to dependency build exception handling, with the most visible issue being that the `install` method *must* install something in the prefix. Consequently, packages can no longer get away with an `install` method consisting of `pass`, for example.
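An illustrative sketch of the queue discipline (names and structure are
mine, not Spack's BuildTask machinery):
```
import heapq

# Priority = number of remaining uninstalled dependencies, so leaves
# (zero dependencies) are popped and built first.
remaining = {'zlib': [], 'libxml2': ['zlib'], 'libxslt': ['libxml2', 'zlib']}
queue = [(len(deps), name) for name, deps in remaining.items()]
heapq.heapify(queue)
while queue:
    _, name = heapq.heappop(queue)
    print('installing', name)  # placeholder for the real install step
```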
## Caveats
- This still only parallelizes a single-rooted build. Multi-rooted installs (e.g., for environments) are TBD in a future PR.
Tasks:
- [x] Adjust package lock timeout to correspond to value used in the demo
- [x] Adjust database lock timeout to reduce contention on startup of concurrent
`spack install <spec>` calls
- [x] Replace (test) package's `install: pass` methods with file creation since post-install
`sanity_check_prefix` will otherwise error out with `Install failed .. Nothing was installed!`
- [x] Resolve remaining existing test failures
- [x] Respond to alalazo's initial feedback
- [x] Remove `bin/demo-locks.py`
- [x] Add new tests to address new coverage issues
- [x] Replace built-in package's `def install(..): pass` with "installing" something
(i.e., only `apple-libunwind`)
- [x] Increase code coverage
|
|
* Check for tar.bz2 and set tar.gz if not found
* Move check for tarfile after it is extracted
|
|
* skip gpg tests when no gpg executable
* flake
|
|
* spack extensions prints list of extendable packages
* Update tab completion scripts
|
|
copy to/from prefix. (#15003)
* Buildcache creation: change the way the prefix is copied to the workdir.
* install_tree copies hardlinked files
* tarfile creates hardlinked files on extraction.
* Create a temporary tarfile from the prefix and extract it to the workdir.
* Use the temp tarfile to move the workdir to the prefix, preserving hardlinks instead of copying.
|
|
|
|
fixes #14927
|
|
exceptions. (#14929)
* Replace direct call to patchelf with get_existing_elf_rpaths which handles exceptions.
* Remove unused patchelf definition.
* Convert to set.
|
|
It's often useful to run a module with `python -m`, e.g.:
python -m pyinstrument script.py
Running a python script this way was hard, though, as `spack python` did
not have a similar `-m` option. This PR adds a `-m` option to `spack
python` so that we can do things like this:
spack python -m pyinstrument ./test.py
This makes it easy to write a script that uses a small part of Spack and
then profile it. Previously the easiest way to do this was to write a
custom Spack command, which is often overkill.
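One plausible way to implement such an option is the stdlib's runpy,
which is what `python -m` itself uses (an assumption about the
implementation; the module name is just the example above):
```
import runpy
import sys

sys.argv = ['pyinstrument', './test.py']  # args seen by the module
runpy.run_module('pyinstrument', run_name='__main__', alter_sys=True)
```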
|
|
- `git -C` doesn't work on git before 1.8.5
- `working_dir` gets us the same effect
|
|
Fixes #10019
If multiple instances of a package were installed in a single
instance of Spack, and they differed in terms of dependencies, then
"spack find" would not distinguish specs based on their dependencies.
For example if two instances of X were installed, one with Y and one
with Z, then "spack find X ^Y" would display both instances of X.
|