* Add new package: py-boom
* change name to py-boom-boot-manager
|
|
|
|
Running v3.1.2 with Python 3.7 raises a SyntaxError near `async=True`.
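For context: `async` became a reserved keyword in Python 3.7, so any call that still passes it as a keyword argument fails to parse at all. A minimal illustration (the `emit` function is a made-up stand-in, not the package's real API):

    def emit(event, data, async_=False):
        # Hypothetical stand-in for an API that used to accept 'async=True'.
        print(event, data, async_)

    # Under Python 3.7+ the old spelling no longer parses, because 'async'
    # became a reserved keyword:
    #     emit('status', {}, async=True)   # SyntaxError: invalid syntax
    # so affected code has to rename the keyword argument, e.g.:
    emit('status', {}, async_=True)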
|
|
* Add new package: abi-dumper
* refine dependencies
|
|
* Updated the cuDNN package to check that the target directory
exists before linking it (see the sketch below).
* Fixed flake8 issues
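A minimal sketch of the guard described above (the helper name and paths are illustrative, not the package's actual code):

    import os

    def _link_if_present(target, link):
        # Only create the symlink when the target directory actually exists,
        # so a missing component does not leave a dangling link behind.
        if os.path.isdir(target) and not os.path.lexists(link):
            os.symlink(target, link)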
|
|
* Allow UCX since v1.7 to build with more recent versions of gdrcopy (v2.x)
* Update var/spack/repos/builtin/packages/ucx/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
|
|
There was an error introduced in #19209 where `full_hash()` and
`build_hash()` are called on older specs that we've read in from the DB;
older specs may not be able to compute these hashes (e.g. if the patches
used in computing the `full_hash` have since been removed).
When serializing a Spec, we want to generate the full/build hash when
possible, but we need a mechanism to skip it for Specs that have
themselves been read from YAML (and may not support this).
To get around this ambiguity and to fix the issue, we:
- Add an attribute to the spec called `_hashes_final`, which is `True`
  if we can't lazily compute `build_hash` and `full_hash`.
- Set `_hashes_final` to `False` for new specs (i.e., lazily
computing hashes is ok)
- Set `_hashes_final` to `True` for concrete specs read in via
`from_node_dict`, as it may be too late to recompute hashes.
- Compute and write out all hashes in `node_dict_with_hashes` *if
possible*.
Effectively what this means is that we can round-trip specs that are
missing `_build_hash` and `_full_hash` without recomputing them, but for
all new specs, we'll compute them and store them. So Spack should work
fine with old DBs now.
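A minimal sketch of the round-trip behavior described above, with toy names standing in for the real `Spec` machinery (the hash computation and dict layout are illustrative only):

    import hashlib

    class Spec(object):
        def __init__(self, name=''):
            self.name = name
            self._hashes_final = False   # new specs may compute hashes lazily
            self._build_hash = None
            self._full_hash = None

        def _hash(self, salt):
            # Placeholder for the real build/full hash computations.
            return hashlib.sha1((salt + self.name).encode()).hexdigest()

        def node_dict_with_hashes(self):
            node = {'name': self.name}
            if not self._hashes_final:
                # Specs we built ourselves: compute and store both hashes.
                node['build_hash'] = self._build_hash or self._hash('build')
                node['full_hash'] = self._full_hash or self._hash('full')
            else:
                # Specs read from an old DB: round-trip whatever was there.
                if self._build_hash:
                    node['build_hash'] = self._build_hash
                if self._full_hash:
                    node['full_hash'] = self._full_hash
            return node

        @staticmethod
        def from_node_dict(node):
            spec = Spec(node['name'])
            spec._hashes_final = True    # may be too late to recompute hashes
            spec._build_hash = node.get('build_hash')
            spec._full_hash = node.get('full_hash')
            return spec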
|
|
|
|
|
|
* root: update to built-in CMakePackage functions
* root: Disable options from missing variants
* Remove modification of CMAKE_PROGRAM_PATH
|
|
Co-authored-by: Scott McMillan <smcmillan@nvidia.com>
|
|
fixes #19625
|
|
Currently fails with:
Error: NoLibrariesError: Unable to recursively locate rocm-device-libs libraries
|
|
* hip: rocminfo is a runtime requirement
* hip: +setup_run_environment, +setup_dependent_run_environment
* hip: run environment: get lib dir using libs.directories[0], not prefix.lib
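A hedged sketch of what that run-environment hook could look like in the hip package.py (the exact variables set here are illustrative):

    def setup_run_environment(self, env):
        # rocminfo is needed at run time, so put it on PATH, and take the
        # library directory from the resolved libraries instead of assuming
        # prefix.lib.
        env.prepend_path('PATH', self.spec['rocminfo'].prefix.bin)
        env.prepend_path('LD_LIBRARY_PATH', self.spec['hip'].libs.directories[0])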
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
|
|
This fixes sbang relocation when using old binary packages, and updates
code in `relocate.py`.
There are really two places where we would want to handle an `sbang`
relocation:
1. Installing an old package that uses `sbang` with shebang lines like
`#!/bin/bash $spack_prefix/sbang`
2. Installing a *new* package that uses `sbang` with shebang lines like
`#!/bin/sh $install_tree/sbang`
The second case is actually handled automatically by our text relocation;
we don't need any special relocation logic for new shebangs, as our
relocation logic already changes references to the build-time
`install_tree` to point to the `install_tree` at install time.
Case 1 was not properly handled -- we would not take an old binary
package and point its shebangs at the new `sbang` location. This PR fixes
that and updates the code in `relocate.py` with some notes.
There is one more case we don't currently handle: if a binary package is
created from an installation in a short prefix that does *not* need
`sbang` and is installed to a long prefix that *does* need `sbang`, we
won't do anything. We should just patch the file as we would for a normal
install. In some upcoming PR we should probably change *all* `sbang`
relocation logic to be idempotent and to apply to any sort of shebang'd
file. Then we'd only have to worry about which files to `sbang`-ify at
install time and wouldn't need to care about these special cases.
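A rough sketch of the case-1 rewrite (the helper name and the exact matching are illustrative; the real logic lives in `relocate.py`):

    def relocate_old_sbang(path, old_spack_prefix, new_install_tree):
        # Rewrite an old-style shebang
        #     #!/bin/bash <old_spack_prefix>/sbang
        # so that it points at the sbang script in the new install tree.
        with open(path, 'r') as f:
            lines = f.readlines()
        old_shebang = '#!/bin/bash %s/sbang\n' % old_spack_prefix
        if lines and lines[0] == old_shebang:
            lines[0] = '#!/bin/sh %s/sbang\n' % new_install_tree
            with open(path, 'w') as f:
                f.writelines(lines)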
|
|
|
|
|
|
|
|
|
|
* add python-docutils dependency
* Add symlinks to scripts for better compatibility of the py-docutils installation (see the sketch below)
* Improve post_install phase of py-docutils
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Fix the rdma-core package per review
* Improve formatting of the py-docutils package
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
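A hedged sketch of the kind of compatibility symlinks meant above, as a fragment of the py-docutils package.py (docutils installs `rst2html.py`-style scripts; the extensionless names and the hook used here are assumptions, not the recipe's actual code):

    import os

    @run_after('install')
    def create_compat_symlinks(self):
        # Expose e.g. 'rst2html' next to the installed 'rst2html.py' so tools
        # that expect the extensionless name keep working.
        bin_dir = self.prefix.bin
        for script in os.listdir(bin_dir):
            if script.endswith('.py'):
                src = os.path.join(bin_dir, script)
                dst = os.path.join(bin_dir, script[:-3])
                if not os.path.exists(dst):
                    os.symlink(src, dst)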
|
|
|
|
|
|
|
|
|
|
See https://github.com/spack/spack/pull/19501#issuecomment-718406357
|
|
This also includes handling of the new libdebuginfod flags.
|
|
Remove the deprecated selfforward option, which is now on by default
Rename the verbose option to debug
|
|
|
|
* [NEW] Added amdfftw, amdlibflame and amdscalapack recipes
Updated base fftw, libflame and netlib-scalapack recipes
to accommodate the above listed AMD Optimizing CPU Libraries
which are a set of numerical routines optimized for AMD platforms.
Updated amdblis spack recipe
amdblis:
1. updated with amdblis 2.2 release
amdfftw:
1. "--enable-single" now work as synonym for "--enable-float"
amdlibflame:
1. Added enable_or_disable_threads() to set the value for the "--enable-multithreading" flag (see the sketch at the end of this message)
libflame:
1. Added enable_or_disable_threads() to set value for "--enable-multithreading" flag
2. Corrected invocation of "enable_or_disable('threads')"
Change-Id: I9da0a2c2c4e2075b7fa2776e7cfe6548a2e0b32f
* Added amd-toolchain-support as maintainers
Added team github account amd-toolchain-support
as maintainers for all the recipes owned by
AMD Optimizing CPU Libraries (AOCL) team
Change-Id: I9a7969bd48fc42cfbb88dd7bd93e0802c6138582
* Incorporated review comments
Updated packages.yaml with aocl components
Handled Flake8 test failures
Change-Id: I0a03f02d8c9f326b2434ec907958c3de3a8e18eb
* Re-added the stream recipe that was accidentally removed
amdfftw:
1. Updated the aocc clang selection as per spack standards
fftw:
1. Currently apple-clang section is redundant,
already it is handled in the conflict checks.
Change-Id: Idef4a3f61717eb81f321e0cd16e7ba9619eac846
* Fix for style and docs/validate (pull_request) tests
Changed unnumbered format placeholders from {} to {0}
Change-Id: If67a3374177ec067573e5504462d257712fafc05
* Changed compiler references to Spack's compiler wrappers: spack_cc, spack_cxx, spack_fc
Change-Id: I7ae29c978fff16e37773913f14c84df232499763
* Removed 'single' variant from the amdfftw recipe
Instead of a conflict for apple-clang + openmp, handled this scenario
via the following feature:
depends_on('llvm-openmp', when='%apple-clang +openmp')
Change-Id: I701b23d83e822a500ca3aaf2b60cc9ace09e13dc
* Added relevant info for users who prefer to use single precision
Change-Id: I3506e21da428ddef5fb7895b5aaed32c2a061ef6
* Minor changes on fftw, amdfftw and libflame
amdfftw:
1. Removed the escape characters around the single quotes
2. Reworded the conflict line from Recommended to Required
fftw:
1. Reordered to follow the recommended sections:
versions, variants, dependencies, providers, patches
libflame:
1. Added provides entry for 5.1.0 version
Change-Id: I21ebff99b6dfde031763154693ecb3f1fa47b476
* Removed single quote from amdfftw docstring to fix style failures
Change-Id: Ife939a5a2f5ccbc8879b730c7bebfe2fcfef9332
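A sketch of the kind of helper meant by enable_or_disable_threads() above, as it might appear in the libflame/amdlibflame package.py (the variant values and flag spellings are assumptions):

    def enable_or_disable_threads(self):
        # Map the package's 'threads' variant onto the configure flag
        # --enable-multithreading=<model>.
        spec = self.spec
        if spec.satisfies('threads=openmp'):
            return ['--enable-multithreading=openmp']
        elif spec.satisfies('threads=pthreads'):
            return ['--enable-multithreading=pthreads']
        return ['--enable-multithreading=no']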
|
|
|
|
|
|
* gqrx and dependencies
* changes
* forgot log4cpp
* misc dep fixes
|
|
* camp: changes to support hip build
* hip: add fallback path for external hip to detect other rocm components
Co-authored-by: Greg Becker <becker33@llnl.gov>
|
|
fixes #15183
- Moved the container-related content from
workflows.rst into containers.rst
- Deleted the docker_for_developers.rst file,
since it describes an outdated procedure
Co-authored-by: Axel Huebl <a.huebl@hzdr.de>
Co-authored-by: Omar Padron <omar.padron@kitware.com>
|
|
`config.get_config` now caches the results and returns the same
configuration if called multiple times with the same arguments
(i.e. the same section and scope).
As a consequence, it is expected that users will always call
update methods provided in the `config` module after changing
the configuration (even if manipulating it as a Python nested
dictionary). The following two examples should cover most
scenarios:
* Most configuration update logic in the core (e.g. relating to
adding a new compiler) should call `Configuration.update_config`
* Tests that need to change the global configuration should use the
newly-provided `config.replace_config` function.
(if neither of these methods apply, then the essential requirement
is to use a method marked as `_config_mutator`)
Failure to call such a function after modifying the configuration
will lead to unexpected results (e.g. calling `get_config` after
changing the configuration will not reflect changes made since the
first call to `get_config`).
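A hedged illustration of the expected pattern (the section, key, and scope used here are only illustrative, and the exact call sites depend on the Spack version):

    import spack.config

    # The first call reads and caches the section; later calls with the same
    # section/scope return the cached result.
    cfg = spack.config.config.get_config('config', scope='site')

    # Mutating the returned nested dict alone is not enough...
    cfg['build_jobs'] = 4

    # ...an update method (a _config_mutator) must also be called so the
    # cached configuration is refreshed and later reads see the change.
    spack.config.config.update_config('config', cfg, scope='site')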
|
|
- Added archspec to the list of vendored dependencies
- Removed every reference to llnl.util.cpu
- Removed tests from Spack code base
|
|
Co-authored-by: Cleveland <cleveland@lanl.gov>
|
|
* Add new package: wayland
* remove duplicated dependency
|
|
* Patched hypre to better add flags based on the compiler.
* Update package.py
This file seems to have lots of edits, so the patch may succeed with offsets. Has anyone checked with `spack patch` to be sure it will work with versions 2.15-2.20?
|
|
* py-pycbc: Fix for aarch64
* py-pycbc: Change patch application conditions
|
|
* "spack install" now has a "--require-full-hash-match" option, which
forces Spack to skip an available binary package when the full hash
doesn't match. Normally only a DAG-hash match is required, which
ensures equivalent Specs, but does not account for changing logic
inside the associated package.
* Add a local binary cache index which tracks specs that have a binary
install available in a remote binary cache. It is updated with
"spack buildcache list" or for a given spec when a binary package
is retrieved for that Spec.
|
|
In #18394 it was noted that this package should be changed
from a generic "Package" to a "CMakePackage".
That makes a number of things easier
and lets it use the common CMake code.
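A generic sketch of such a conversion (the class name, variant, and option are placeholders, not the actual recipe):

    from spack import *

    class Example(CMakePackage):
        """Placeholder for the converted package."""
        # Versions, variants and dependencies stay as they were; only the
        # package-specific CMake options need to be spelled out, and the
        # CMakePackage base class drives the rest of the build.

        variant('shared', default=True, description='Build shared libraries')

        def cmake_args(self):
            return [
                self.define_from_variant('BUILD_SHARED_LIBS', 'shared'),
            ]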
|
|
* diamond: The version 2.0.4 file name has been changed.
* diamond: Removed v2.0.4 URL.
|
|
* Added hash values for LBANN v0.101 and Hydrogen v1.5.0. Updated the
LBANN package to be more successful in resolving a legal configuration
of MPI and HWLOC packages. This required removing the MPI virtual
package, since it is unable to resolve dependencies with minimum
version requirements. As a result, enabling a reasonable install line
for LBANN requires explicitly forwarding MPI variants to Hydrogen and
Aluminum. Due to the lack of variant forwarding, there are many
explicitly replicated dependencies for both LBANN and Hydrogen. Fixed
the error in LBANN where the gpu variant was replaced by the cuda
variant but not all dependencies were updated accordingly.
* Fixed the minimum cuDNN version for newer versions of LBANN.
* Added explicit versioning of the MPI libraries for DiHydrogen to avoid
all of the conflicts with minimum required versions of the OpenMPI library.
* Removed explicit MPI versions and went back to using the MPI virtual
dependency. Updated variant forwarding to build up the constraints and
variants iteratively (see the sketch after this list). This exacerbates
the challenges with backtracking in the current concretizer, but
should be fixed in the new concretizer.
* Added support for including the DiHydrogen library in LBANN as well as
support for the distributed convolution (DistConv) parallel
algorithms. Also include support for building with half precision.
* Moving dependencies around
* Added conflict statement to ensure that the variant dihydrogen is
required for distconv.
* Removed the preferred field
* Fixed Flake8 and cuDNN version bounds
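A hedged sketch of the iterative variant forwarding mentioned above, as it might appear inside the LBANN class body in package.py (the variant list is illustrative and much shorter than the real one):

    # Forward the same variant settings from LBANN to Hydrogen and Aluminum by
    # iteratively emitting depends_on constraints.
    for forwarded in ('+cuda', '~cuda', '+openmp', '~openmp'):
        depends_on('hydrogen' + forwarded, when=forwarded)
        depends_on('aluminum' + forwarded, when=forwarded)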
|
|
|
|
|
|
|
|
|
|
* [py-cudf] created template
* [py-cudf] set build directory
* [py-cudf] adding dependencies
* [py-cudf] fixed phases
* [py-cudf] added cmake to fetch some third party dependencies
* [py-cudf] flake8
* [py-cudf] added homepage and description. removed fixmes
* [py-cudf] added dependency of py-cupy
* [py-cudf] depends on py-fsspec
* [py-cudf] py-pyarrow requires +orc
* [py-cudf] py-pyarrow requires +parquet
* [py-cudf] removed upper python limit
* [py-cudf] checksum changed
|