BundlePackages use a noop fetch strategy. The mirror logic was assuming
that the fetcher had a resource to cache after performing a fetch. This adds
a special check to skip caching if the stage is associated with a
BundleFetchStrategy. Note that this should allow caching resources
associated with BundlePackages.
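A minimal sketch of the guard, assuming placeholder names for the mirror-caching helper (only `BundleFetchStrategy` is Spack's real class here):
```
import spack.fetch_strategy as fs

def cache_resource(stage, mirror_cache):
    # A BundleFetchStrategy fetches nothing, so there is no archive
    # for the mirror to store; skip caching for such stages.
    if isinstance(stage.fetcher, fs.BundleFetchStrategy):
        return
    mirror_cache.store(stage.fetcher)   # placeholder store call
```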
|
|
When updating a mirror, Spack was re-retrieving all patches (since the
fetch logic for patches is separate). This updates the patch logic to
allow the mirror logic to avoid this.
|
|
Since `cache_mirror` does the fetch itself, it also needs to do the
checksum itself if it wants to verify that the source stored in the
mirror is valid. Note that this isn't strictly required because fetching
(including from mirrors) always separately verifies the checksum.
|
|
The targets for the cosmetic paths in mirrors were being calculated
incorrectly as of fb3a3ba: the symlinks used relative paths as targets,
and the relative path was computed relative to the wrong directory.
|
|
When creating a cosmetic symlink for a resource in a mirror, remove
it if it already exists. The symlink is removed in case the logic to
create the symlink has changed.
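The fix is the usual remove-then-recreate idiom; a small sketch (names are illustrative):
```
import os

def create_cosmetic_symlink(target, link_path):
    # Recreate the link unconditionally: if the symlink logic changed,
    # a stale link pointing at an old target must not survive.
    if os.path.lexists(link_path):      # True even for dangling symlinks
        os.unlink(link_path)
    os.symlink(target, link_path)
```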
|
|
(#13789)
* Some packages (e.g. mpfr at the time of this patch) can have patches
with the same name but different contents (which apply to different
versions of the package). This appends part of the patch hash to the
cache file name to avoid conflicts.
* Some exceptions which occur during fetching are not a subclass of
SpackError and therefore do not have a 'message' attribute. This
updates the logic for mirroring a single spec (add_single_spec)
to produce an appropriate error message in that case (where before
it failed with an AttributeError) -- see the sketch after this list
* In various circumstances, a mirror can contain the universal storage
path but not a cosmetic symlink; in this case it would not generate
a symlink. Now "spack mirror create" will create a symlink for any
package that doesn't have one.
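A minimal sketch of the error-message hedge in `add_single_spec` (the surrounding mirroring call is illustrative):
```
from llnl.util import tty

def add_single_spec_sketch(spec, do_mirror):
    try:
        do_mirror(spec)                 # illustrative fetch-and-cache step
    except Exception as e:
        # Not every fetch error is a SpackError, so 'message' may be absent.
        msg = getattr(e, 'message', str(e))
        tty.error('Error while mirroring {0}: {1}'.format(spec, msg))
```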
|
|
`ViewDescriptor.regenerate()` calls `get_all_specs()`, which reads
`spec.yaml` files, which is slow. It's fine to do this once, but
`view.remove_specs()` *also* calls it immediately afterwards.
- [x] Pass the result of `get_all_specs()` as an optional parameter to
`view.remove_specs()` to avoid reading `spec.yaml` files twice.
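A self-contained sketch of the pattern (names are illustrative, not Spack's exact API): run the slow query once and thread the result through as an optional argument:
```
def get_all_specs(root):
    # stands in for the slow query that parses every spec.yaml
    return {'zlib', 'mpileaks', 'callpath'}

def remove_specs(root, specs, all_specs=None):
    if all_specs is None:               # slow fallback when called alone
        all_specs = get_all_specs(root)
    return all_specs - set(specs)

def regenerate(root):
    all_specs = get_all_specs(root)     # read spec.yaml files once...
    return remove_specs(root, ['zlib'], all_specs=all_specs)  # ...and reuse
```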
|
|
`ViewDescriptor.regenerate()` was copying specs and stripping build
dependencies, which clears `_hash` and other cached fields on concrete
specs, which causes a bunch of YAML hashes to be recomputed.
- [x] Preserve the `_hash` and `_normal` fields on stripped specs, as
these will be unchanged.
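A sketch of the fix (the cached attributes are the ones named above; the copy call is illustrative):
```
def strip_build_deps(spec):
    stripped = spec.copy(deps=('link', 'run'))
    # Stripping build deps does not change the hash of an already
    # concrete spec, so keep the cached values instead of letting
    # them be recomputed.
    stripped._hash = spec._hash
    stripped._normal = spec._normal
    return stripped
```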
|
|
`os.path.exists()` will report False if the target of a symlink doesn't
exist, so we can avoid a costly call to realpath here.
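A self-contained illustration of why the cheap check suffices:
```
import os
import tempfile

d = tempfile.mkdtemp()
link = os.path.join(d, 'dangling')
os.symlink(os.path.join(d, 'missing-target'), link)

assert os.path.lexists(link)        # the link itself exists
assert not os.path.exists(link)     # ...but its target does not
# so there is no need for the costly:
#   os.path.exists(os.path.realpath(link))
```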
|
|
`spack install` previously concretized, wrote the entire environment
out, regenerated views, then wrote and regenerated views
again. Regenerating views is slow, so ensure that we only do that once.
- [x] add an option to env.write() to skip view regeneration (sketched below)
- [x] add a note asking whether regenerate_views() shouldn't just be a
separate operation -- it's not clear whether we want to keep it as part
of write to ensure consistency, or take it out to avoid performance issues.
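A hedged sketch of the resulting flow (the `regenerate_views` keyword matches the option above; the other helper names are illustrative):
```
def install_all(env):
    env.concretize()
    env.write(regenerate_views=False)    # persist, but skip the slow step
    for spec in concrete_specs(env):     # illustrative accessor
        spec.package.do_install()
    env.regenerate_views()               # regenerate exactly once
```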
|
|
Environments need to read the DB a lot when installing all specs.
- [x] Put a read transaction around `install_all()` and `install()`
to avoid repeated locking
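The pattern, sketched with an illustrative spec accessor:
```
import spack.store

def install_all(env):
    # One outer read transaction: every DB query inside reuses the
    # same lock and in-memory database instead of re-locking per spec.
    with spack.store.db.read_transaction():
        for spec in specs_to_install(env):   # illustrative accessor
            spec.package.do_install()
```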
|
|
Our `LockTransaction` class was reading overly aggressively. In cases
like this:
```
1 with spack.store.db.read_transaction():
2     with spack.store.db.write_transaction():
3         ...
```
The `ReadTransaction` on line 1 would read in the DB, but the
`WriteTransaction` on line 2 would read in the DB *again*, even though we
had a read lock the whole time. `WriteTransaction`s were only
considering nested writes to decide when to read, but they didn't know
when we already had a read lock.
- [x] Make `Lock.acquire_write()` return `False` in cases where we already had
a read lock.
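A toy sketch of the corrected bookkeeping (not Spack's real `Lock`, just the counting logic): the return value answers "do we need to re-read the DB?", and a held read lock means no:
```
class ToyLock(object):
    def __init__(self):
        self._reads = 0
        self._writes = 0

    def acquire_read(self):
        self._reads += 1
        # True only on the first acquisition: time to read the DB.
        return self._reads == 1 and self._writes == 0

    def acquire_write(self):
        self._writes += 1
        # A held read lock means our in-memory DB is already current,
        # so upgrading to write must not trigger a second read.
        return self._writes == 1 and self._reads == 0
```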
|
|
If a write transaction was nested inside a read transaction, it would not
write properly on release, e.g., in a sequence like this, inside our
`LockTransaction` class:
```
1 with spack.store.db.read_transaction():
2     with spack.store.db.write_transaction():
3         ...
4     with spack.store.db.read_transaction():
          ...
```
The `WriteTransaction` on line 2 had no way of knowing that its
`__exit__()` call was the last *write* in the nesting, and it would skip
calling its write function.
The `__exit__()` call of the `ReadTransaction` on line 1 wouldn't know
how to write, and the file would never be written.
The DB would be correct in memory, but the `ReadTransaction` on line 4
would re-read the whole DB assuming that other processes may have
modified it. Since the DB was never written, we got stale data.
- [x] Make `Lock.release_write()` return `True` whenever we release the
*last write* in a nest.
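Continuing the toy counting sketch (acquire methods as above): releasing the last write reports `True` even while an outer read lock is still held, so the `WriteTransaction` knows to flush:
```
class ToyLock(object):
    def __init__(self):
        self._reads = 0
        self._writes = 0

    def release_write(self):
        self._writes -= 1
        # True exactly when the last write in the nest is released,
        # regardless of any enclosing read transactions.
        return self._writes == 0

    def release_read(self):
        self._reads -= 1
        return self._reads == 0 and self._writes == 0
```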
|
|
Lock transactions were actually writing *after* the lock was
released. The code was looking at the result of `release_write()` before
writing, then writing based on whether the lock was released. This is
pretty obviously wrong.
- [x] Refactor `Lock` so that a release function can be passed to the
`Lock` and called *only* when a lock is really released.
- [x] Refactor `LockTransaction` classes to use the release function
instead of checking the return value of `release_read()` / `release_write()`
|
|
`ViewDescriptor.regenerate()` checks repeatedly whether packages are
installed and also does a lot of DB queries. Put a read transaction
around the whole thing to avoid repeatedly locking and unlocking the DB.
|
|
Users can now list mirrors of the main url in packages.
- [x] Instead of just a single `url` attribute, users can provide a list (`urls`) in the package, and these will be tried in order by the fetch strategy.
- [x] To handle one of the most common mirror cases, define a `GNUMirrorPackage` mixin to handle all the standard GNU mirrors. GNU packages can set `gnu_mirror_path` to define the path within a mirror, and the mixin handles setting up all the requisite GNU mirror URLs (see the example after this list).
- [x] update all GNU packages in `builtin` to use the `GNUMirrorPackage` mixin.
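An illustrative `package.py` using the mixin (package, path, and checksum are placeholders):
```
from spack import *

class Hello(AutotoolsPackage, GNUMirrorPackage):
    """Toy example: fetched from the standard GNU mirrors."""
    homepage = "https://www.gnu.org/software/hello/"
    # Relative path within any GNU mirror; the mixin expands this
    # into the full list of mirror URLs.
    gnu_mirror_path = "hello/hello-2.10.tar.gz"

    version('2.10', sha256='0' * 64)    # placeholder checksum
```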
|
|
(#14228)
|
|
We shouldn't allow packages to have missing dependencies in the mainline.
- [x] Add a test to enforce this.
|
|
- Add an optional argument so that `possible_dependencies()` will report
missing dependencies.
- Add a test to ensure it works.
- Ignore missing dependencies in `possible_dependencies()` by default.
|
|
- this version allows getting possible dependencies of multiple packages
or specs at once.
- New method handles calling `PackageBase.possible_dependencies` multiple
times and passing `visited` dict around.
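A hedged usage sketch (module location and exact signature assumed from the descriptions above):
```
import spack.package   # assumed home of the module-level helper

missing = {}
deps = spack.package.possible_dependencies(
    'mpileaks', 'hdf5', missing=missing)

for pkg, unknown in missing.items():
    print('{0} depends on missing packages: {1}'.format(
        pkg, ', '.join(sorted(unknown))))
```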
|
|
`Environment.added_specs()` has a loop around calls to
`Package.installed()`, which can result in repeated DB queries. Optimize
this with a read transaction in `Environment`.
|
|
`spack spec -I` queries the database for installation status and should
use a read transaction around calls to `Spec.tree()`.
|
|
Checks for deprecated specs were repeatedly taking out read locks on the
database, which can be very slow.
- [x] put a read transaction around the deprecation check
|
|
`get_platform()` is pretty expensive and can be called many times in a
spack invocation.
- [x] memoize `get_platform()`
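The change is essentially one decorator, using Spack's existing helper (the detection body here is a stand-in):
```
from llnl.util.lang import memoized

@memoized
def get_platform():
    # Expensive machine probing runs only on the first call; later
    # calls return the cached platform object.
    return detect_platform_expensively()   # illustrative stand-in
```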
|
|
Spack's compiler version check sometimes doesn't understand a custom,
user-defined compiler version. And if the version check fails, you
can't build anything with the custom compiler.
- [x] Be more lenient: fall back to the custom compiler version and use
it verbatim if the version check fails.
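A hedged sketch of the fallback (helper names are assumed):
```
def get_real_version(compiler):
    try:
        output = run_version_check(compiler)   # e.g. runs `cc --version`
        return parse_version(output)           # may raise on odd output
    except Exception:
        # A custom compiler may not answer the check at all; use the
        # user-declared version verbatim rather than failing the build.
        return compiler.version
```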
|
|
`pgcc -V` was failing on power machines because it returns 2 (despite
correctly printing version information). On x86_64 machines the same
command returns 0 and doesn't cause an error.
- [x] Ignore a return value of 2 for pgcc when doing a version check
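A sketch of the allowance on the PGI compiler class (attribute names and import path assumed):
```
from spack.compiler import Compiler   # assumed import path

class Pgi(Compiler):
    version_argument = '-V'
    # pgcc -V exits with 2 on power machines while still printing the
    # version, so treat that exit code as success during version checks.
    ignore_version_errors = [2]
```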
|
|
Thanks!
|
|
Vendors for ARM come out of `/proc/cpuinfo` as hex numbers instead of readable strings.
- Add support for associating vendor names with the hex numbers.
- Also move these mappings from Python code to `microarchitectures.json`
- Move darwin feature name mappings to `microarchitectures.json` as well
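An illustrative slice of the vendor mapping (these hex implementer IDs follow the ARM MIDR convention; the real table lives in `microarchitectures.json`):
```
ARM_VENDORS = {
    '0x41': 'ARM',
    '0x43': 'Cavium',
    '0x46': 'Fujitsu',
}

def vendor_name(hex_id):
    # Fall back to the raw hex ID if the vendor is not in the table.
    return ARM_VENDORS.get(hex_id, hex_id)
```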
|
|
* when constructing package hash, default to including a method in the content hash if we can't determine whether it would be included by examining the AST
* add a test for updated content-hash calculations
* refactor content hash tests to eliminate repeated lines
|
|
Prevent `spack help install` from getting too cluttered with CDash-specific documentation.
|
|
* pytest: add __init__ files for all test subdirs
* add licenses to empty files
* Fix Sphinx warning message about comment within docstring
* Further fixes to Sphinx docstring
|
|
(#13128)
fixes #13124
|
|
* fix docstring in generate_package_index() referring to "public" keys as "signing" keys
* use explicit kwargs in push_to_url()
* simplify url_util.parse() per tgamblin's suggestion
* replace standardize_header_names() with the much simpler get_header()
* add some basic tests
* update s3_fetch tests
* update S3 list code to strip leading slashes from prefix
* correct minor warning regression introduced in #11117
* add more tests
* flake8 fixes
* add capsys fixture to mirror_crud test
* add get_header() tests
* use get_header() in more places
* incorporate review comments
|
|
This PR allows virtual packages to be added to the specs list using
the add command.
Virtual packages are already allowed in named lists in spack
environments/stacks, and they are already allowed in the specs list
when added using the yaml directly.
|
|
* Apply URLFetchStrategy to ftp:// and ftps:// url schemes
* Corrected trailing whitespace error
|
|
I have, more than once, tried to install the list of things needed
to build the docs, only to discover that the list doesn't use Spack's
package names. I'm tired of facepalming....
While I was there I touched up the prose about activating the new
Python packages; activating a python package doesn't add anything to
your PYTHONPATH, it links things into a directory that's *already* on
your PYTHONPATH. Note that this all presupposes that you're using
that same python....
|
|
* CUDA HeaderList: Unit Test
* Spec Header Dirs: Only first include/
Avoid matching recursively nested include paths that usually
refer to internally shipped libraries in packages.
Example in the CUDA Toolkit, which ships a libc++ fork internally
with libcu++ since 10.2.89:
`<prefix>/include/cuda/some/more/details/include/` or
`<prefix>/include/cuda/std/detail/libcxx/include`
regex: non-greedy first match of include (see the sketch after this list)
Co-Authored-By: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* CUDA: Re-Enable 10.2.89 as Default
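A toy illustration of the non-greedy match mentioned above:
```
import re

path = '/opt/cuda/include/cuda/std/detail/libcxx/include'
m = re.match(r'^(.*?include)', path)    # non-greedy: first include only
assert m.group(1) == '/opt/cuda/include'
```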
|
|
* Bugfix: Display template options for create command.
* Alphabetize "spack create" template options for readability
* Revert template choices format; alphabetize list
* flake8 fix
|
|
* force spack find -p to print abstract specs without prefixes
* hashes have the same issue; improve handling of find -L to match find -l
|
|
* Minimal BundlePackage build system doc
* Add link to new bundlepackage file
* Fixed link bug and added create command example
|