Age | Commit message | Author | Files | Lines |
|
* Rebase and merge using platform.system
Rebase and merge using platform.system() instead of shelling out to `uname -a`.
* Add missing `import platform` statement
* Remove subprocess import
Remove the unused `import subprocess` to make the changes flake8 compliant.
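For illustration, a minimal sketch of the substitution (not the exact Spack change; the flag handling below is a made-up example):
```
import platform

# platform.system() replaces shelling out to `uname -a` via subprocess
system = platform.system()          # e.g. 'Linux' or 'Darwin'
if system == 'Darwin':
    extra_flags = ['-undefined', 'dynamic_lookup']   # illustrative macOS-only flags
else:
    extra_flags = []
```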
|
On Cray machines, use the Cray compile wrappers instead of MPI wrappers.
|
Finer-grained locking
|
- Fix a bug handling '/' characters in branch names.
- Make tarballs use a descriptive name for the top-level directory, not
just `opt`.
|
- Locks now use fcntl range locks on a single file.
How it works for prefixes:
- Each lock is a byte-range lock on the nth byte of a file.
- The lock file is ``spack.installed_db.prefix_lock`` -- the DB tells us
what to call it and it lives alongside the install DB. n is the prefix
of the DAG hash, truncated to fit in sys.maxsize.
For stages, we take the sha1 of the stage name and use that to select a
byte to lock.
With 100 concurrent builds, the likelihood of a false lock collision is
~5.36e-16, so this scheme should retain more than sufficient parallelism
(with no chance of false negatives) and gives us reader-writer lock
semantics with a single file, so there is no need to clean up lots of lock files.
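A minimal sketch of byte-range locking keyed by a hash prefix, assuming a hypothetical lock-file path; this is not Spack's actual Lock class, just the underlying fcntl mechanics:
```
import fcntl
import hashlib
import os
import sys

LOCK_FILE = "prefix_lock"   # hypothetical path; Spack derives it from the install DB

def byte_for(name):
    """Map a name (DAG hash, stage name, ...) to a byte offset in the lock file."""
    digest = hashlib.sha1(name.encode()).hexdigest()
    return int(digest, 16) % sys.maxsize

def lock_byte(fd, offset, exclusive=True):
    """Take a one-byte fcntl range lock at `offset` (blocking)."""
    op = fcntl.LOCK_EX if exclusive else fcntl.LOCK_SH
    fcntl.lockf(fd, op, 1, offset, os.SEEK_SET)

def unlock_byte(fd, offset):
    fcntl.lockf(fd, fcntl.LOCK_UN, 1, offset, os.SEEK_SET)

# Many processes share one lock file; each locks only the byte for its prefix.
fd = os.open(LOCK_FILE, os.O_RDWR | os.O_CREAT)
offset = byte_for("abc123...daghash")     # placeholder hash
lock_byte(fd, offset, exclusive=True)     # writer lock for this prefix only
# ... do the installation ...
unlock_byte(fd, offset)
os.close(fd)
```
Shared locks (LOCK_SH) on the same byte give the reader side of the reader-writer semantics.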
|
- Closing and re-opening to upgrade to write will lose all existing read
locks held by this process.
- If we didn't allow ranges, sleeping until there were no readers would work.
- With ranges, we may never be able to take some legal write locks
without invalidating all reads. E.g., if a write lock has a range
disjoint from all reads, it should just work, but we'd have to close the
file, reopen it, and re-take the reads.
- It's easier to just check whether the file is writable in the first
place and open it for writing from the start.
- Lock now only opens files read-only if we *can't* write them.
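A sketch of that policy, assuming a simple helper name (the real Lock implementation differs): open read-write up front whenever permissions allow, so a later write lock never forces us to close and reopen the file.
```
import os

def open_lock_file(path):
    """Open read-write whenever permissions allow; otherwise read-only.

    Opening O_RDWR up front means a later upgrade to a write lock never
    requires closing and reopening the file, which would drop this
    process's existing read locks.
    """
    try:
        return os.open(path, os.O_RDWR | os.O_CREAT)
    except OSError:
        # No write permission: only shared (read) locks will ever be taken.
        return os.open(path, os.O_RDONLY)
```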
|
- Locks will now create enclosing directories and touch the lock file
automatically.
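A minimal sketch of that behavior (the helper name is illustrative, not Spack's):
```
import os

def ensure_lock_file(path):
    """Create enclosing directories and touch the lock file if it is missing."""
    parent = os.path.dirname(path)
    if parent and not os.path.isdir(parent):
        os.makedirs(parent)
    fd = os.open(path, os.O_CREAT | os.O_RDWR)   # touch without truncating
    os.close(fd)
```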
|
- Make sure we write, truncate, flush when setting PID and owning host in
the file.
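A sketch of what "write, truncate, flush" means here, with an assumed file-descriptor-based helper:
```
import os
import socket

def record_owner(fd):
    """Overwrite the lock file with the owning PID and host, then flush."""
    os.lseek(fd, 0, os.SEEK_SET)
    os.ftruncate(fd, 0)                      # drop any stale owner record
    owner = '{0},{1}\n'.format(os.getpid(), socket.getfqdn())
    os.write(fd, owner.encode())
    os.fsync(fd)                             # make sure the record reaches disk
```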
|
A use case where the previous approach was failing:
- more than one Spack process running on compute nodes
- the stage directory is a link to fast LOCAL storage
In this case the processes may try to unlink something that is "dead" for them, but actually used by other processes on storage they cannot see.
|
(#1897)
|
Fixed flake8 issues
|
* spack list : merged package-list into the command
* list : removed option for case sensitivity
|
Use boost system layout by default
|
(#1987)
+ This change fixes a problem that manifests when trilinos is built against a
MKL installation defined as an external package. In this scenario, the MKL
libraries are found one directory deeper than for the case where spack
provides MKL. The extra directory is a platform name like 'intel64'.
+ The changes in this PR were recommended by contributor @davydden. I
implemented and tested with intel@16.0.3. These changes fix the issue I
reported. I did not attempt building trilinos against other BLAS
implementations.
+ fixes #1923
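A hedged sketch of the kind of search this implies (the subdirectory names are illustrative; the actual package change may rely on Spack's own library-search helpers instead):
```
import os

def find_mkl_libdir(mkl_prefix):
    """Return the directory containing libmkl*, looking one level deeper
    (e.g. 'intel64') as happens with an externally installed MKL."""
    for sub in ('lib', os.path.join('lib', 'intel64')):
        candidate = os.path.join(mkl_prefix, sub)
        if os.path.isdir(candidate) and any(
                name.startswith('libmkl') for name in os.listdir(candidate)):
            return candidate
    raise RuntimeError('MKL libraries not found under {0}'.format(mkl_prefix))
```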
|
Bottom line:
- the fetcher changes the current working directory
- relative paths were resolved differently depending on the order of evaluation
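A minimal sketch of the fix this suggests: resolve relative paths to absolute ones before any fetcher calls os.chdir (names here are illustrative, not the actual Spack code):
```
import os

def resolve_paths(paths):
    """Make paths absolute up front so a later os.chdir() inside a fetcher
    cannot change what they refer to."""
    return [os.path.abspath(p) for p in paths]
```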
|
* mfem: add tarball extension
Add tarball extension as a result of a feature added in PR#1926, which
fixes earlier issues in this PR (PR#1202). Prior to adding this feature,
Spack would not autodetect the extension of the tarball downloaded from
the redirected, shortened Google URL, requiring a messy hack. This hack
worked for mfem version 3.1, but led to errors when adding mfem version
3.2 because the files downloaded from Google did not contain the package
name, version number, or extension. Adding the extension enables Spack
to rename the tarball downloaded from Google to a sensible name that is
compatible with its filename parsing algorithms so that Spack "does the
right thing" (detects that the file is a GZipped tarball, decompresses
it, runs GNU Make) in fetching and staging the package.
* mfem: add linkage to KLU & BTF
Add linkage to the KLU & BTF solvers, which are now enabled in MFEM for
versions 3.2 and later.
* mfem: Add superlu-dist variant
Add linkage to SuperLU_DIST, which is a new linear solver interface for
MFEM versions 3.2 and later.
* mfem: add netcdf variant for cubit mesh support
Add NetCDF variant for MFEM versions 3.2 and later; installing the
NetCDF interfaces enables CUBIT mesh support.
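For illustration, a hedged sketch of how these additions might look in mfem's package.py; the URL, checksum, build targets, and exact directive arguments below are placeholders, not the real ones:
```
from spack import *

class Mfem(Package):
    homepage = "http://mfem.org"

    # 'extension' tells Spack the archive type, since the shortened Google
    # URL contains no package name, version, or extension.
    version('3.2', '00000000000000000000000000000000',      # placeholder md5
            url='http://goo.gl/example', extension='.tar.gz')

    variant('superlu-dist', default=False,
            description='Build with the SuperLU_DIST linear solver interface')
    variant('netcdf', default=False,
            description='Build NetCDF interfaces for CUBIT mesh support')

    depends_on('superlu-dist', when='@3.2:+superlu-dist')
    depends_on('netcdf', when='@3.2:+netcdf')

    def install(self, spec, prefix):
        # Build/install invocation is illustrative, not mfem's real one.
        make()
        make('install', 'PREFIX={0}'.format(prefix))
```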
|
+ Cray compile wrappers are MPI wrappers.
+ Packages that need to be compiled with MPI compile wrappers normally use
'mpicc', 'mpic++' and 'mpif90' provided by the MPI vendor. However, when using
cray-mpich as the MPI vendor, the compile wrappers 'CC', 'cc' and 'ftn' must
be used.
+ In this scenario, the mpich package is hijacked by specifying cray-mpich as an
external package under the 'mpich:' section of packages.yaml. For example:
  packages:
    mpich:
      modules:
        mpich@7.4.2%intel@16.0.3 arch=cray-CNL-haswell: cray-mpich/7.4.2
      buildable: False
    all:
      providers:
        mpi: [mpich]
+ This change allows packages like parmetis to be built using the Cray compile
wrappers. For example: 'spack install parmetis%intel@16.0.3 ^mpich@7.4.2 os=CNL'
+ This commit relies on the existence of the environment variable CRAYPE_VERSION
to determine if the current machine is running a Cray environment. This check is
insufficient, but I'm not sure how to improve this logic.
+ Fixes #1827
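A minimal sketch of the CRAYPE_VERSION check described above (not the actual Spack platform-detection code):
```
import os

def on_cray():
    """Assume a Cray programming environment when CRAYPE_VERSION is set.
    As noted above, this check is admittedly imperfect."""
    return 'CRAYPE_VERSION' in os.environ

# Pick wrapper names accordingly (names as described in this commit message).
cc, cxx, fc = ('cc', 'CC', 'ftn') if on_cray() else ('mpicc', 'mpic++', 'mpif90')
```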
|
+ Previously, these strings were hard-coded to 'mpicc', 'mpic++', and 'mpifort'.
|
* fix blas-lapack in scipy and numpy
* py-numpy: do not set rpath on macOS
* py-scipy: do not set Blas/Lapack. This appears to be picked up from py-numpy
* py-numpy: don't write rpath= in Sierra only
* py-numpy: add a link to build notes
|
Charm++ only creates symbolic links instead of copying files. Correct this.
|
Fixes #1939
|
Symengine and associated packages
|
Fix spack uninstall -f
|
deps. (#1954)
- Some packages (netcdf) NEED RPATHs for transitive deps.
- Others (dealii) will exceed OS limits when the DAG is too large.
|
(#1435)
* Updated all Python extension packages to use 'setup_py' on install.
* Fixed a few minor style issues with the updated Python packages.
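For illustration, a hedged sketch of the resulting install step in a Python extension package; the package name is made up, and it assumes the `setup_py` helper is injected for packages that extend python:
```
from spack import *

class PyExample(Package):        # hypothetical extension package
    extends('python')

    def install(self, spec, prefix):
        # 'setup_py' wraps 'python setup.py ...' for Python extensions
        setup_py('install', '--prefix={0}'.format(prefix))
```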
|
* Updated nettle to have m4 as an immediate dependency, to match the new PATH
construction logic, which only includes immediate dependencies.
* Update package.py
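A hedged one-line sketch of the nettle change (the 'build' deptype is an assumption):
```
depends_on('m4', type='build')   # m4 as an immediate dependency, so it lands on PATH
```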
|
* This fixes a bug in concretization. Before the recent change to the
algorithm, the intent was that the @develop version, although
"greater" than numberic versions, is never preferred BY DEFAULT over
numeric versions.
To test this... suppose you have a package with no `preferred=True` in
it, and nothing in `packages.yaml`, but with a `develop` version. For
the sake of this example, I've hacked my `python/package.py` to work
this way.
Without bugfix (WRONG: user should never get develop by default):
```
python@develop%clang@7.3.0-apple~tk~ucs4 arch=darwin-elcapitan-x86_64
...
```
With bugfix (RIGHT: largest numeric version selected):
```
python@3.5.2%clang@7.3.0-apple~tk~ucs4 arch=darwin-elcapitan-x86_64
...
```
* Documented version selection in concretization algo.
* Fix typos
* flake8
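A minimal sketch of the intended selection rule (this is not Spack's concretizer; versions are plain strings here for brevity):
```
def pick_default_version(versions):
    """Prefer the highest numeric version; fall back to 'develop' only
    when no numeric version exists."""
    def key(v):
        return tuple(int(part) for part in v.split('.'))
    numeric = [v for v in versions if v != 'develop']
    return max(numeric, key=key) if numeric else 'develop'

assert pick_default_version(['develop', '3.5.2', '2.7.12']) == '3.5.2'
assert pick_default_version(['develop']) == 'develop'
```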
|