author    Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>  2020-02-19 00:04:22 -0800
committer GitHub <noreply@github.com>  2020-02-19 00:04:22 -0800
commit    f2aca86502eded1500489cd13799d8826e6fc9d2 (patch)
tree      b6420e0db1471646d0f2cb94d138d1056478fd50
parent    2f4881d582181b275d13ad2098a3c89b563e9f97 (diff)
Distributed builds (#13100)
Fixes #9394. Closes #13217.

## Background

Spack provides the ability to enable/disable parallel builds through two options: the package `parallel` property and the `build_jobs` configuration setting. The `parallel` (boolean) property sets the default for its package, though the value can be overridden in the `install` method. Spack's current parallel builds are limited to build tools supporting `jobs` arguments (e.g., Makefiles). The number of jobs actually used is calculated as `min(config:build_jobs, # cores, 16)`, which can be overridden in the package or on the command line (i.e., `spack install -j <# jobs>`).

This PR changes the algorithm to allow multiple, simultaneous processes to coordinate the installation of the same spec (and specs with overlapping dependencies), adding support for distributed (single- and multi-node) parallel builds. The goals of this work include improving the efficiency of installing packages with many dependencies and reducing the repetition associated with concurrent installations of (dependency) packages.

## Approach

### File System Locks

Coordination between concurrent installs of overlapping packages to a Spack instance is accomplished through bottom-up dependency DAG processing and file system locks. The runs can be a combination of interactive and batch processes affecting the same file system. Exclusive prefix locks are required to install a package, while shared prefix locks are required to check whether the package is installed. Failures are communicated through a separate exclusive prefix failure lock (for concurrent processes) combined with a persistent store (for separate, related build processes). The resulting file contains the failing spec to facilitate manual debugging.

### Priority Queue

Management of dependency builds changed from reliance on recursion to use of a priority queue, where the priority of a spec is based on the number of its remaining uninstalled dependencies.
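The queue-based ordering can be sketched with a small, illustrative example. This is not Spack's actual installer code (see `lib/spack/spack/installer.py` in the diff); `build_order` and its input format are hypothetical, and real installs additionally wait on locks held by other processes rather than simply popping ready specs:

```python
import heapq
from collections import defaultdict

def build_order(deps):
    """Return an install order for ``deps``, a dict mapping each package
    name to the set of package names it depends on (all of which must
    also appear as keys)."""
    dependents = defaultdict(set)
    for pkg, ds in deps.items():
        for d in ds:
            dependents[d].add(pkg)

    # Priority = number of remaining uninstalled dependencies;
    # 0 means the package is ready to build.
    remaining = {pkg: len(ds) for pkg, ds in deps.items()}
    heap = [(n, pkg) for pkg, n in remaining.items()]
    heapq.heapify(heap)

    order = []
    while heap:
        n, pkg = heapq.heappop(heap)
        if n != remaining[pkg]:
            continue  # stale entry; a fresher one was pushed later
        if n > 0:
            # No ready package left but some remain: dependency cycle.
            raise RuntimeError('dependency cycle involving %s' % pkg)
        order.append(pkg)
        remaining[pkg] = -1  # mark installed
        # An install lowers each dependent's priority by one.
        for dep in dependents[pkg]:
            remaining[dep] -= 1
            heapq.heappush(heap, (remaining[dep], dep))
    return order
```

Because priorities only ever decrease, stale heap entries can simply be skipped; this is the usual lazy-deletion pattern for mutable priorities with `heapq`.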
Using a queue required a change to dependency build exception handling, with the most visible issue being that the `install` method *must* install something in the prefix. Consequently, packages can no longer get away with an `install` method consisting of `pass`, for example.

## Caveats

- This still only parallelizes a single-rooted build. Multi-rooted installs (e.g., for environments) are TBD in a future PR.

Tasks:

- [x] Adjust package lock timeout to correspond to value used in the demo
- [x] Adjust database lock timeout to reduce contention on startup of concurrent `spack install <spec>` calls
- [x] Replace (test) packages' `install: pass` methods with file creation, since post-install `sanity_check_prefix` will otherwise error out with `Install failed .. Nothing was installed!`
- [x] Resolve remaining existing test failures
- [x] Respond to alalazo's initial feedback
- [x] Remove `bin/demo-locks.py`
- [x] Add new tests to address new coverage issues
- [x] Replace built-in package's `def install(..): pass` to "install" something (i.e., only `apple-libunwind`)
- [x] Increase code coverage
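The prefix and failure locks described above are built on POSIX byte-range file locks, where each spec locks a small range of a shared lock file so that concurrent processes contend only over the same spec. The following is a minimal, hypothetical sketch of that mechanism using `fcntl.lockf`; it is not the `llnl.util.lock.Lock` implementation changed in this PR, and the function name and file layout are illustrative:

```python
import fcntl
import os

def try_byte_range_lock(lock_file, offset, exclusive=True):
    """Try to lock one byte of ``lock_file`` at ``offset`` without blocking.

    Returns an open file descriptor holding the lock, or None if another
    process already holds a conflicting lock on that byte.
    """
    fd = os.open(lock_file, os.O_RDWR | os.O_CREAT, 0o644)
    op = fcntl.LOCK_EX if exclusive else fcntl.LOCK_SH
    try:
        # Lock exactly 1 byte starting at `offset`, measured from the
        # start of the file; LOCK_NB makes the attempt non-blocking.
        fcntl.lockf(fd, op | fcntl.LOCK_NB, 1, offset, os.SEEK_SET)
        return fd
    except (IOError, OSError):
        os.close(fd)
        return None
```

In this scheme, deriving `offset` from a hash of the spec (as the `mark_failed` docstring in the diff below describes for the failure lock) makes collisions between different specs unlikely while requiring no per-spec lock files or cleanup.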
-rw-r--r--  etc/spack/defaults/config.yaml | 2
-rw-r--r--  lib/spack/llnl/util/lock.py | 261
-rw-r--r--  lib/spack/llnl/util/tty/__init__.py | 4
-rw-r--r--  lib/spack/spack/cmd/install.py | 2
-rw-r--r--  lib/spack/spack/database.py | 185
-rwxr-xr-x  lib/spack/spack/installer.py | 1597
-rw-r--r--  lib/spack/spack/main.py | 9
-rw-r--r--  lib/spack/spack/package.py | 634
-rw-r--r--  lib/spack/spack/pkgkit.py | 5
-rw-r--r--  lib/spack/spack/report.py | 39
-rw-r--r--  lib/spack/spack/stage.py | 3
-rw-r--r--  lib/spack/spack/test/buildtask.py | 66
-rw-r--r--  lib/spack/spack/test/cmd/env.py | 4
-rw-r--r--  lib/spack/spack/test/cmd/install.py | 41
-rw-r--r--  lib/spack/spack/test/conftest.py | 20
-rw-r--r--  lib/spack/spack/test/database.py | 116
-rw-r--r--  lib/spack/spack/test/install.py | 24
-rw-r--r--  lib/spack/spack/test/installer.py | 579
-rw-r--r--  lib/spack/spack/test/llnl/util/lock.py | 54
-rw-r--r--  var/spack/repos/builtin.mock/packages/a/package.py | 4
-rw-r--r--  var/spack/repos/builtin.mock/packages/b/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/boost/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/c/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/conflicting-dependent/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/dep-diamond-patch-mid1/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/dep-diamond-patch-mid2/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/dep-diamond-patch-top/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/develop-test/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/develop-test2/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/direct-mpich/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/dt-diamond-bottom/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/dt-diamond-left/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/dt-diamond-right/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/dt-diamond/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/dtbuild1/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/dtbuild2/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/dtbuild3/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/dtlink1/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/dtlink2/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/dtlink3/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/dtlink4/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/dtlink5/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/dtrun1/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/dtrun2/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/dtrun3/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/dttop/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/dtuse/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/e/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/externalmodule/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/externalprereq/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/externaltool/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/externalvirtual/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/fake/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/flake8/package.py | 6
-rw-r--r--  var/spack/repos/builtin.mock/packages/git-svn-top-level/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/git-test/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/git-top-level/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/git-url-svn-top-level/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/git-url-top-level/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/hash-test1/package.py | 10
-rw-r--r--  var/spack/repos/builtin.mock/packages/hash-test2/package.py | 7
-rw-r--r--  var/spack/repos/builtin.mock/packages/hash-test3/package.py | 10
-rw-r--r--  var/spack/repos/builtin.mock/packages/hg-test/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/hg-top-level/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/hypre/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/indirect-mpich/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/maintainers-1/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/maintainers-2/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/mixedversions/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/module-path-separator/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/multi-provider-mpi/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/multimodule-inheritance/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/multivalue_variant/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/netlib-blas/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/netlib-lapack/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/nosource-install/package.py | 7
-rw-r--r--  var/spack/repos/builtin.mock/packages/openblas-with-lapack/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/openblas/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/optional-dep-test-2/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/optional-dep-test-3/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/optional-dep-test/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/othervirtual/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/override-context-templates/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/override-module-templates/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/patch-a-dependency/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/patch-several-dependencies/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/patch/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/perl/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/preferred-test/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/python/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/simple-inheritance/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/singlevalue-variant-dependent/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/svn-test/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/svn-top-level/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/url-list-test/package.py | 6
-rw-r--r--  var/spack/repos/builtin.mock/packages/url-test/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/when-directives-false/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/when-directives-true/package.py | 3
-rw-r--r--  var/spack/repos/builtin.mock/packages/zmpi/package.py | 3
-rw-r--r--  var/spack/repos/builtin/packages/apple-libunwind/package.py | 3
100 files changed, 2954 insertions, 963 deletions
diff --git a/etc/spack/defaults/config.yaml b/etc/spack/defaults/config.yaml
index 3aadccfda1..32745a309a 100644
--- a/etc/spack/defaults/config.yaml
+++ b/etc/spack/defaults/config.yaml
@@ -137,7 +137,7 @@ config:
# when Spack needs to manage its own package metadata and all operations are
# expected to complete within the default time limit. The timeout should
# therefore generally be left untouched.
- db_lock_timeout: 120
+ db_lock_timeout: 3
# How long to wait when attempting to modify a package (e.g. to install it).
diff --git a/lib/spack/llnl/util/lock.py b/lib/spack/llnl/util/lock.py
index 63f970c98b..b295341d48 100644
--- a/lib/spack/llnl/util/lock.py
+++ b/lib/spack/llnl/util/lock.py
@@ -8,14 +8,32 @@ import fcntl
import errno
import time
import socket
+from datetime import datetime
import llnl.util.tty as tty
+import spack.util.string
__all__ = ['Lock', 'LockTransaction', 'WriteTransaction', 'ReadTransaction',
'LockError', 'LockTimeoutError',
'LockPermissionError', 'LockROFileError', 'CantCreateLockError']
+#: Mapping of supported locks to description
+lock_type = {fcntl.LOCK_SH: 'read', fcntl.LOCK_EX: 'write'}
+
+#: A useful default for optional callbacks: a function that simply returns
+#: True when no callback is provided.
+true_fn = lambda: True
+
+
+def _attempts_str(wait_time, nattempts):
+ # Don't print anything if we succeeded on the first try
+ if nattempts <= 1:
+ return ''
+
+ attempts = spack.util.string.plural(nattempts, 'attempt')
+ return ' after {0:0.2f}s and {1}'.format(wait_time, attempts)
+
class Lock(object):
"""This is an implementation of a filesystem lock using Python's lockf.
@@ -31,8 +49,8 @@ class Lock(object):
maintain multiple locks on the same file.
"""
- def __init__(self, path, start=0, length=0, debug=False,
- default_timeout=None):
+ def __init__(self, path, start=0, length=0, default_timeout=None,
+ debug=False, desc=''):
"""Construct a new lock on the file at ``path``.
By default, the lock applies to the whole file. Optionally,
@@ -43,6 +61,16 @@ class Lock(object):
not currently expose the ``whence`` parameter -- ``whence`` is
always ``os.SEEK_SET`` and ``start`` is always evaluated from the
beginning of the file.
+
+ Args:
+ path (str): path to the lock
+ start (int): optional byte offset at which the lock starts
+ length (int): optional number of bytes to lock
+ default_timeout (int): number of seconds to wait for lock attempts,
+ where None means to wait indefinitely
+ debug (bool): debug mode specific to locking
+ desc (str): optional debug message lock description, which is
+ helpful for distinguishing between different Spack locks.
"""
self.path = path
self._file = None
@@ -56,6 +84,9 @@ class Lock(object):
# enable debug mode
self.debug = debug
+ # optional debug description
+ self.desc = ' ({0})'.format(desc) if desc else ''
+
# If the user doesn't set a default timeout, or if they choose
# None, 0, etc. then lock attempts will not time out (unless the
# user sets a timeout for each attempt)
@@ -89,6 +120,20 @@ class Lock(object):
num_requests += 1
yield wait_time
+ def __repr__(self):
+ """Formal representation of the lock."""
+ rep = '{0}('.format(self.__class__.__name__)
+ for attr, value in self.__dict__.items():
+ rep += '{0}={1}, '.format(attr, value.__repr__())
+ return '{0})'.format(rep.strip(', '))
+
+ def __str__(self):
+ """Readable string (with key fields) of the lock."""
+ location = '{0}[{1}:{2}]'.format(self.path, self._start, self._length)
+ timeout = 'timeout={0}'.format(self.default_timeout)
+ activity = '#reads={0}, #writes={1}'.format(self._reads, self._writes)
+ return '({0}, {1}, {2})'.format(location, timeout, activity)
+
def _lock(self, op, timeout=None):
"""This takes a lock using POSIX locks (``fcntl.lockf``).
@@ -99,8 +144,9 @@ class Lock(object):
successfully acquired, the total wait time and the number of attempts
is returned.
"""
- assert op in (fcntl.LOCK_SH, fcntl.LOCK_EX)
+ assert op in lock_type
+ self._log_acquiring('{0} LOCK'.format(lock_type[op].upper()))
timeout = timeout or self.default_timeout
# Create file and parent directories if they don't exist.
@@ -128,6 +174,9 @@ class Lock(object):
# If the file were writable, we'd have opened it 'r+'
raise LockROFileError(self.path)
+ tty.debug("{0} locking [{1}:{2}]: timeout {3} sec"
+ .format(lock_type[op], self._start, self._length, timeout))
+
poll_intervals = iter(Lock._poll_interval_generator())
start_time = time.time()
num_attempts = 0
@@ -139,17 +188,21 @@ class Lock(object):
time.sleep(next(poll_intervals))
+ # TBD: Is an extra attempt after timeout needed/appropriate?
num_attempts += 1
if self._poll_lock(op):
total_wait_time = time.time() - start_time
return total_wait_time, num_attempts
- raise LockTimeoutError("Timed out waiting for lock.")
+ raise LockTimeoutError("Timed out waiting for a {0} lock."
+ .format(lock_type[op]))
def _poll_lock(self, op):
"""Attempt to acquire the lock in a non-blocking manner. Return whether
the locking attempt succeeds
"""
+ assert op in lock_type
+
try:
# Try to get the lock (will raise if not available.)
fcntl.lockf(self._file, op | fcntl.LOCK_NB,
@@ -159,6 +212,9 @@ class Lock(object):
if self.debug:
# All locks read the owner PID and host
self._read_debug_data()
+ tty.debug('{0} locked {1} [{2}:{3}] (owner={4})'
+ .format(lock_type[op], self.path,
+ self._start, self._length, self.pid))
# Exclusive locks write their PID/host
if op == fcntl.LOCK_EX:
@@ -167,12 +223,12 @@ class Lock(object):
return True
except IOError as e:
- if e.errno in (errno.EAGAIN, errno.EACCES):
- # EAGAIN and EACCES == locked by another process
- pass
- else:
+ # EAGAIN and EACCES == locked by another process (so try again)
+ if e.errno not in (errno.EAGAIN, errno.EACCES):
raise
+ return False
+
def _ensure_parent_directory(self):
parent = os.path.dirname(self.path)
@@ -227,6 +283,8 @@ class Lock(object):
self._length, self._start, os.SEEK_SET)
self._file.close()
self._file = None
+ self._reads = 0
+ self._writes = 0
def acquire_read(self, timeout=None):
"""Acquires a recursive, shared lock for reading.
@@ -242,15 +300,14 @@ class Lock(object):
timeout = timeout or self.default_timeout
if self._reads == 0 and self._writes == 0:
- self._debug(
- 'READ LOCK: {0.path}[{0._start}:{0._length}] [Acquiring]'
- .format(self))
# can raise LockError.
wait_time, nattempts = self._lock(fcntl.LOCK_SH, timeout=timeout)
- self._acquired_debug('READ LOCK', wait_time, nattempts)
self._reads += 1
+ # Log if acquired, which includes counts when verbose
+ self._log_acquired('READ LOCK', wait_time, nattempts)
return True
else:
+ # Increment the read count for nested lock tracking
self._reads += 1
return False
@@ -268,13 +325,11 @@ class Lock(object):
timeout = timeout or self.default_timeout
if self._writes == 0:
- self._debug(
- 'WRITE LOCK: {0.path}[{0._start}:{0._length}] [Acquiring]'
- .format(self))
# can raise LockError.
wait_time, nattempts = self._lock(fcntl.LOCK_EX, timeout=timeout)
- self._acquired_debug('WRITE LOCK', wait_time, nattempts)
self._writes += 1
+ # Log if acquired, which includes counts when verbose
+ self._log_acquired('WRITE LOCK', wait_time, nattempts)
# return True only if we weren't nested in a read lock.
# TODO: we may need to return two values: whether we got
@@ -282,9 +337,65 @@ class Lock(object):
# write lock for the first time. Now it returns the latter.
return self._reads == 0
else:
+ # Increment the write count for nested lock tracking
self._writes += 1
return False
+ def is_write_locked(self):
+ """Check if the file is write locked
+
+ Return:
+ (bool): ``True`` if the path is write locked, otherwise, ``False``
+ """
+ try:
+ self.acquire_read()
+
+ # If we have a read lock then no other process has a write lock.
+ self.release_read()
+ except LockTimeoutError:
+ # Another process is holding a write lock on the file
+ return True
+
+ return False
+
+ def downgrade_write_to_read(self, timeout=None):
+ """
+ Downgrade from an exclusive write lock to a shared read.
+
+ Raises:
+ LockDowngradeError: if this is an attempt at a nested transaction
+ """
+ timeout = timeout or self.default_timeout
+
+ if self._writes == 1 and self._reads == 0:
+ self._log_downgrading()
+ # can raise LockError.
+ wait_time, nattempts = self._lock(fcntl.LOCK_SH, timeout=timeout)
+ self._reads = 1
+ self._writes = 0
+ self._log_downgraded(wait_time, nattempts)
+ else:
+ raise LockDowngradeError(self.path)
+
+ def upgrade_read_to_write(self, timeout=None):
+ """
+ Attempts to upgrade from a shared read lock to an exclusive write.
+
+ Raises:
+ LockUpgradeError: if this is an attempt at a nested transaction
+ """
+ timeout = timeout or self.default_timeout
+
+ if self._reads == 1 and self._writes == 0:
+ self._log_upgrading()
+ # can raise LockError.
+ wait_time, nattempts = self._lock(fcntl.LOCK_EX, timeout=timeout)
+ self._reads = 0
+ self._writes = 1
+ self._log_upgraded(wait_time, nattempts)
+ else:
+ raise LockUpgradeError(self.path)
+
def release_read(self, release_fn=None):
"""Releases a read lock.
@@ -305,17 +416,17 @@ class Lock(object):
"""
assert self._reads > 0
+ locktype = 'READ LOCK'
if self._reads == 1 and self._writes == 0:
- self._debug(
- 'READ LOCK: {0.path}[{0._start}:{0._length}] [Released]'
- .format(self))
+ self._log_releasing(locktype)
- result = True
- if release_fn is not None:
- result = release_fn()
+ # we need to call release_fn before releasing the lock
+ release_fn = release_fn or true_fn
+ result = release_fn()
self._unlock() # can raise LockError.
- self._reads -= 1
+ self._reads = 0
+ self._log_released(locktype)
return result
else:
self._reads -= 1
@@ -339,45 +450,91 @@ class Lock(object):
"""
assert self._writes > 0
+ release_fn = release_fn or true_fn
+ locktype = 'WRITE LOCK'
if self._writes == 1 and self._reads == 0:
- self._debug(
- 'WRITE LOCK: {0.path}[{0._start}:{0._length}] [Released]'
- .format(self))
+ self._log_releasing(locktype)
# we need to call release_fn before releasing the lock
- result = True
- if release_fn is not None:
- result = release_fn()
+ result = release_fn()
self._unlock() # can raise LockError.
- self._writes -= 1
+ self._writes = 0
+ self._log_released(locktype)
return result
-
else:
self._writes -= 1
# when the last *write* is released, we call release_fn here
# instead of immediately before releasing the lock.
if self._writes == 0:
- return release_fn() if release_fn is not None else True
+ return release_fn()
else:
return False
def _debug(self, *args):
tty.debug(*args)
- def _acquired_debug(self, lock_type, wait_time, nattempts):
- attempts_format = 'attempt' if nattempts == 1 else 'attempts'
- if nattempts > 1:
- acquired_attempts_format = ' after {0:0.2f}s and {1:d} {2}'.format(
- wait_time, nattempts, attempts_format)
- else:
- # Dont print anything if we succeeded immediately
- acquired_attempts_format = ''
- self._debug(
- '{0}: {1.path}[{1._start}:{1._length}] [Acquired{2}]'
- .format(lock_type, self, acquired_attempts_format))
+ def _get_counts_desc(self):
+ return '(reads {0}, writes {1})'.format(self._reads, self._writes) \
+ if tty.is_verbose() else ''
+
+ def _log_acquired(self, locktype, wait_time, nattempts):
+ attempts_part = _attempts_str(wait_time, nattempts)
+ now = datetime.now()
+ desc = 'Acquired at %s' % now.strftime("%H:%M:%S.%f")
+ self._debug(self._status_msg(locktype, '{0}{1}'.
+ format(desc, attempts_part)))
+
+ def _log_acquiring(self, locktype):
+ self._debug2(self._status_msg(locktype, 'Acquiring'))
+
+ def _log_downgraded(self, wait_time, nattempts):
+ attempts_part = _attempts_str(wait_time, nattempts)
+ now = datetime.now()
+ desc = 'Downgraded at %s' % now.strftime("%H:%M:%S.%f")
+ self._debug(self._status_msg('READ LOCK', '{0}{1}'
+ .format(desc, attempts_part)))
+
+ def _log_downgrading(self):
+ self._debug2(self._status_msg('WRITE LOCK', 'Downgrading'))
+
+ def _log_released(self, locktype):
+ now = datetime.now()
+ desc = 'Released at %s' % now.strftime("%H:%M:%S.%f")
+ self._debug(self._status_msg(locktype, desc))
+
+ def _log_releasing(self, locktype):
+ self._debug2(self._status_msg(locktype, 'Releasing'))
+
+ def _log_upgraded(self, wait_time, nattempts):
+ attempts_part = _attempts_str(wait_time, nattempts)
+ now = datetime.now()
+ desc = 'Upgraded at %s' % now.strftime("%H:%M:%S.%f")
+ self._debug(self._status_msg('WRITE LOCK', '{0}{1}'.
+ format(desc, attempts_part)))
+
+ def _log_upgrading(self):
+ self._debug2(self._status_msg('READ LOCK', 'Upgrading'))
+
+ def _status_msg(self, locktype, status):
+ status_desc = '[{0}] {1}'.format(status, self._get_counts_desc())
+ return '{0}{1.desc}: {1.path}[{1._start}:{1._length}] {2}'.format(
+ locktype, self, status_desc)
+
+ def _debug2(self, *args):
+ # TODO: Easy place to make a single, temporary change to the
+ # TODO: debug level associated with the more detailed messages.
+ # TODO:
+ # TODO: Someday it would be great if we could switch this to
+ # TODO: another level, perhaps _between_ debug and verbose, or
+ # TODO: some other form of filtering so the first level of
+ # TODO: debugging doesn't have to generate these messages. Using
+ # TODO: verbose here did not work as expected because tests like
+ # TODO: test_spec_json will write the verbose messages to the
+ # TODO: output that is used to check test correctness.
+ tty.debug(*args)
class LockTransaction(object):
@@ -462,10 +619,28 @@ class LockError(Exception):
"""Raised for any errors related to locks."""
+class LockDowngradeError(LockError):
+ """Raised when unable to downgrade from a write to a read lock."""
+ def __init__(self, path):
+ msg = "Cannot downgrade lock from write to read on file: %s" % path
+ super(LockDowngradeError, self).__init__(msg)
+
+
+class LockLimitError(LockError):
+ """Raised when exceed maximum attempts to acquire a lock."""
+
+
class LockTimeoutError(LockError):
"""Raised when an attempt to acquire a lock times out."""
+class LockUpgradeError(LockError):
+ """Raised when unable to upgrade from a read to a write lock."""
+ def __init__(self, path):
+ msg = "Cannot upgrade lock from read to write on file: %s" % path
+ super(LockUpgradeError, self).__init__(msg)
+
+
class LockPermissionError(LockError):
"""Raised when there are permission issues with a lock."""
diff --git a/lib/spack/llnl/util/tty/__init__.py b/lib/spack/llnl/util/tty/__init__.py
index 668e881b2a..41eef5d284 100644
--- a/lib/spack/llnl/util/tty/__init__.py
+++ b/lib/spack/llnl/util/tty/__init__.py
@@ -135,7 +135,9 @@ def process_stacktrace(countback):
def get_timestamp(force=False):
"""Get a string timestamp"""
if _debug or _timestamp or force:
- return datetime.now().strftime("[%Y-%m-%d-%H:%M:%S.%f] ")
+ # Note inclusion of the PID is useful for parallel builds.
+ return '[{0}, {1}] '.format(
+ datetime.now().strftime("%Y-%m-%d-%H:%M:%S.%f"), os.getpid())
else:
return ''
diff --git a/lib/spack/spack/cmd/install.py b/lib/spack/spack/cmd/install.py
index 9e13643920..e2a6327f5f 100644
--- a/lib/spack/spack/cmd/install.py
+++ b/lib/spack/spack/cmd/install.py
@@ -47,7 +47,7 @@ def update_kwargs_from_args(args, kwargs):
})
kwargs.update({
- 'install_dependencies': ('dependencies' in args.things_to_install),
+ 'install_deps': ('dependencies' in args.things_to_install),
'install_package': ('package' in args.things_to_install)
})
diff --git a/lib/spack/spack/database.py b/lib/spack/spack/database.py
index 65b85e026b..e04a1f292a 100644
--- a/lib/spack/spack/database.py
+++ b/lib/spack/spack/database.py
@@ -37,6 +37,7 @@ from llnl.util.filesystem import mkdirp
import spack.store
import spack.repo
import spack.spec
+import spack.util.lock as lk
import spack.util.spack_yaml as syaml
import spack.util.spack_json as sjson
from spack.filesystem_view import YamlFilesystemView
@@ -44,7 +45,9 @@ from spack.util.crypto import bit_length
from spack.directory_layout import DirectoryLayoutError
from spack.error import SpackError
from spack.version import Version
-from spack.util.lock import Lock, WriteTransaction, ReadTransaction, LockError
+
+# TODO: Provide an API automatically retrying a build after detecting and
+# TODO: clearing a failure.
# DB goes in this directory underneath the root
_db_dirname = '.spack-db'
@@ -65,9 +68,20 @@ _skip_reindex = [
(Version('0.9.3'), Version('5')),
]
-# Timeout for spack database locks in seconds
+# Default timeout for spack database locks in seconds or None (no timeout).
+# A balance needs to be struck between quick turnaround for parallel installs
+# (to avoid excess delays) and waiting long enough when the system is busy
+# (to ensure the database is updated).
_db_lock_timeout = 120
+# Default timeout for spack package locks in seconds or None (no timeout).
+# A balance needs to be struck between quick turnaround for parallel installs
+# (to avoid excess delays when performing a parallel installation) and waiting
+# long enough for the next possible spec to install (to avoid excessive
+# checking of the last high priority package) or holding on to a lock (to
+# ensure a failed install is properly tracked).
+_pkg_lock_timeout = None
+
# Types of dependencies tracked by the database
_tracked_deps = ('link', 'run')
@@ -255,6 +269,9 @@ class Database(object):
"""Per-process lock objects for each install prefix."""
_prefix_locks = {}
+ """Per-process failure (lock) objects for each install prefix."""
+ _prefix_failures = {}
+
def __init__(self, root, db_dir=None, upstream_dbs=None,
is_upstream=False):
"""Create a Database for Spack installations under ``root``.
@@ -295,17 +312,29 @@ class Database(object):
# This is for other classes to use to lock prefix directories.
self.prefix_lock_path = os.path.join(self._db_dir, 'prefix_lock')
+ # Ensure a persistent location for dealing with parallel installation
+ # failures (e.g., across near-concurrent processes).
+ self._failure_dir = os.path.join(self._db_dir, 'failures')
+
+ # Support special locks for handling parallel installation failures
+ # of a spec.
+ self.prefix_fail_path = os.path.join(self._db_dir, 'prefix_failures')
+
# Create needed directories and files
if not os.path.exists(self._db_dir):
mkdirp(self._db_dir)
+ if not os.path.exists(self._failure_dir):
+ mkdirp(self._failure_dir)
+
self.is_upstream = is_upstream
# initialize rest of state.
self.db_lock_timeout = (
spack.config.get('config:db_lock_timeout') or _db_lock_timeout)
self.package_lock_timeout = (
- spack.config.get('config:package_lock_timeout') or None)
+ spack.config.get('config:package_lock_timeout') or
+ _pkg_lock_timeout)
tty.debug('DATABASE LOCK TIMEOUT: {0}s'.format(
str(self.db_lock_timeout)))
timeout_format_str = ('{0}s'.format(str(self.package_lock_timeout))
@@ -316,8 +345,9 @@ class Database(object):
if self.is_upstream:
self.lock = ForbiddenLock()
else:
- self.lock = Lock(self._lock_path,
- default_timeout=self.db_lock_timeout)
+ self.lock = lk.Lock(self._lock_path,
+ default_timeout=self.db_lock_timeout,
+ desc='database')
self._data = {}
self.upstream_dbs = list(upstream_dbs) if upstream_dbs else []
@@ -332,14 +362,136 @@ class Database(object):
def write_transaction(self):
"""Get a write lock context manager for use in a `with` block."""
- return WriteTransaction(
+ return lk.WriteTransaction(
self.lock, acquire=self._read, release=self._write)
def read_transaction(self):
"""Get a read lock context manager for use in a `with` block."""
- return ReadTransaction(self.lock, acquire=self._read)
+ return lk.ReadTransaction(self.lock, acquire=self._read)
+
+ def _failed_spec_path(self, spec):
+ """Return the path to the spec's failure file, which may not exist."""
+ if not spec.concrete:
+ raise ValueError('Concrete spec required for failure path for {0}'
+ .format(spec.name))
- def prefix_lock(self, spec):
+ return os.path.join(self._failure_dir,
+ '{0}-{1}'.format(spec.name, spec.full_hash()))
+
+ def clear_failure(self, spec, force=False):
+ """
+ Remove any persistent and cached failure tracking for the spec.
+
+ see `mark_failed()`.
+
+ Args:
+ spec (Spec): the spec whose failure indicators are being removed
+ force (bool): True if the failure information should be cleared
+ when a prefix failure lock exists for the file or False if
+ the failure should not be cleared (e.g., it may be
+ associated with a concurrent build)
+
+ """
+ failure_locked = self.prefix_failure_locked(spec)
+ if failure_locked and not force:
+ tty.msg('Retaining failure marking for {0} due to lock'
+ .format(spec.name))
+ return
+
+ if failure_locked:
+ tty.warn('Removing failure marking despite lock for {0}'
+ .format(spec.name))
+
+ lock = self._prefix_failures.pop(spec.prefix, None)
+ if lock:
+ lock.release_write()
+
+ if self.prefix_failure_marked(spec):
+ try:
+ path = self._failed_spec_path(spec)
+ tty.debug('Removing failure marking for {0}'.format(spec.name))
+ os.remove(path)
+ except OSError as err:
+ tty.warn('Unable to remove failure marking for {0} ({1}): {2}'
+ .format(spec.name, path, str(err)))
+
+ def mark_failed(self, spec):
+ """
+ Mark a spec as failing to install.
+
+ Prefix failure marking takes the form of a byte range lock on the nth
+ byte of a file for coordinating between concurrent parallel build
+ processes and a persistent file, named with the full hash and
+ containing the spec, in a subdirectory of the database to enable
+ persistence across overlapping but separate related build processes.
+
+ The failure lock file, ``spack.store.db.prefix_failures``, lives
+ alongside the install DB. ``n`` is the sys.maxsize-bit prefix of the
+ associated DAG hash to make the likelihood of collision very low with
+ no cleanup required.
+ """
+ # Dump the spec to the failure file for (manual) debugging purposes
+ path = self._failed_spec_path(spec)
+ with open(path, 'w') as f:
+ spec.to_json(f)
+
+ # Also ensure a failure lock is taken to prevent cleanup removal
+ # of failure status information during a concurrent parallel build.
+ err = 'Unable to mark {0.name} as failed.'
+
+ prefix = spec.prefix
+ if prefix not in self._prefix_failures:
+ mark = lk.Lock(
+ self.prefix_fail_path,
+ start=spec.dag_hash_bit_prefix(bit_length(sys.maxsize)),
+ length=1,
+ default_timeout=self.package_lock_timeout, desc=spec.name)
+
+ try:
+ mark.acquire_write()
+ except lk.LockTimeoutError:
+ # Unlikely that another process failed to install at the same
+ # time but log it anyway.
+ tty.debug('PID {0} failed to mark install failure for {1}'
+ .format(os.getpid(), spec.name))
+ tty.warn(err.format(spec))
+
+ # Whether we or another process marked it as a failure, track it
+ # as such locally.
+ self._prefix_failures[prefix] = mark
+
+ return self._prefix_failures[prefix]
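The byte-range failure marking described in ``mark_failed`` can be sketched with just the standard library. This toy stand-in for ``llnl.util.lock`` (the path and spec string are made up) write-locks a single byte of a shared failure file at an offset taken from the hash's ``sys.maxsize``-bit prefix:

```python
import fcntl
import hashlib
import os
import sys
import tempfile


def failure_byte_offset(dag_hash):
    """Map a hex DAG hash to its sys.maxsize-bit prefix, used as a byte offset."""
    bits = sys.maxsize.bit_length()  # 63 on 64-bit platforms
    return int(dag_hash, 16) >> (len(dag_hash) * 4 - bits)


# One shared failure file; each spec locks a single byte at its own offset.
failures_path = os.path.join(tempfile.mkdtemp(), 'prefix_failures')
open(failures_path, 'a').close()

# Hypothetical spec string standing in for a concrete spec's DAG hash.
dag_hash = hashlib.sha1(b'zlib@1.2.11%gcc@9.2.0').hexdigest()
offset = failure_byte_offset(dag_hash)

fd = os.open(failures_path, os.O_RDWR)
# An exclusive lock on one byte marks the spec as failed; a concurrent
# process probing the same offset sees the lock and treats it as a failure.
fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB, 1, offset, os.SEEK_SET)
```

Because each hash prefix maps to its own byte, independent failures never contend for the same lock, and no cleanup of the file itself is needed.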
+
+ def prefix_failed(self, spec):
+ """Return True if the prefix (installation) is marked as failed."""
+ # The failure was detected in this process.
+ if spec.prefix in self._prefix_failures:
+ return True
+
+ # The failure was detected by a concurrent process (e.g., an srun),
+ # which is expected to be holding a write lock if that is the case.
+ if self.prefix_failure_locked(spec):
+ return True
+
+ # Determine if the spec may have been marked as failed by a separate
+ # spack build process running concurrently.
+ return self.prefix_failure_marked(spec)
+
+ def prefix_failure_locked(self, spec):
+ """Return True if a process has a failure lock on the spec."""
+ check = lk.Lock(
+ self.prefix_fail_path,
+ start=spec.dag_hash_bit_prefix(bit_length(sys.maxsize)),
+ length=1,
+ default_timeout=self.package_lock_timeout, desc=spec.name)
+
+ return check.is_write_locked()
+
+ def prefix_failure_marked(self, spec):
+ """Determine if the spec has a persistent failure marking."""
+ return os.path.exists(self._failed_spec_path(spec))
+
+ def prefix_lock(self, spec, timeout=None):
"""Get a lock on a particular spec's installation directory.
NOTE: The installation directory **does not** need to exist.
@@ -354,13 +506,16 @@ class Database(object):
readers-writer lock semantics with just a single lockfile, so no
cleanup required.
"""
+ timeout = timeout or self.package_lock_timeout
prefix = spec.prefix
if prefix not in self._prefix_locks:
- self._prefix_locks[prefix] = Lock(
+ self._prefix_locks[prefix] = lk.Lock(
self.prefix_lock_path,
start=spec.dag_hash_bit_prefix(bit_length(sys.maxsize)),
length=1,
- default_timeout=self.package_lock_timeout)
+ default_timeout=timeout, desc=spec.name)
+ elif timeout != self._prefix_locks[prefix].default_timeout:
+ self._prefix_locks[prefix].default_timeout = timeout
return self._prefix_locks[prefix]
@@ -371,7 +526,7 @@ class Database(object):
try:
yield self
- except LockError:
+ except lk.LockError:
# This addresses the case where a nested lock attempt fails inside
# of this context manager
raise
@@ -388,7 +543,7 @@ class Database(object):
try:
yield self
- except LockError:
+ except lk.LockError:
# This addresses the case where a nested lock attempt fails inside
# of this context manager
raise
@@ -624,7 +779,7 @@ class Database(object):
self._error = e
self._data = {}
- transaction = WriteTransaction(
+ transaction = lk.WriteTransaction(
self.lock, acquire=_read_suppress_error, release=self._write
)
@@ -810,7 +965,7 @@ class Database(object):
self._db_dir, os.R_OK | os.W_OK):
# if we can write, then read AND write a JSON file.
self._read_from_file(self._old_yaml_index_path, format='yaml')
- with WriteTransaction(self.lock):
+ with lk.WriteTransaction(self.lock):
self._write(None, None, None)
else:
# Check for a YAML file if we can't find JSON.
@@ -823,7 +978,7 @@ class Database(object):
" databases cannot generate an index file")
# The file doesn't exist, try to traverse the directory.
# reindex() takes its own write lock, so no lock here.
- with WriteTransaction(self.lock):
+ with lk.WriteTransaction(self.lock):
self._write(None, None, None)
self.reindex(spack.store.layout)
diff --git a/lib/spack/spack/installer.py b/lib/spack/spack/installer.py
new file mode 100755
index 0000000000..6e772fbb66
--- /dev/null
+++ b/lib/spack/spack/installer.py
@@ -0,0 +1,1597 @@
+# Copyright 2013-2019 Lawrence Livermore National Security, LLC and other
+# Spack Project Developers. See the top-level COPYRIGHT file for details.
+#
+# SPDX-License-Identifier: (Apache-2.0 OR MIT)
+
+"""
+This module encapsulates package installation functionality.
+
+The PackageInstaller coordinates concurrent builds of packages for the same
+Spack instance by leveraging the dependency DAG and file system locks. It
+also proceeds with the installation of non-dependent packages of failed
+dependencies in order to install as many dependencies of a package as possible.
+
+Bottom-up traversal of the dependency DAG while prioritizing packages with no
+uninstalled dependencies allows multiple processes to perform concurrent builds
+of separate packages associated with a spec.
+
+File system locks enable coordination such that no two processes attempt to
+build the same or a failed dependency package.
+
+Failures to install dependency packages result in removal of their dependents'
+build tasks from the current process. A failure file is also written (and
+locked) so that other processes can detect the failure and adjust their build
+tasks accordingly.
+
+This module supports the coordination of local and distributed concurrent
+installations of packages in a Spack instance.
+"""
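The bottom-up, priority-queue traversal described above can be illustrated with a toy DAG (package names and dependencies are made up; the real installer also handles locks, retries, and failures):

```python
import heapq
import itertools

# Hypothetical DAG: package -> its direct dependencies.
deps = {
    'mpileaks': {'mpich', 'callpath'},
    'callpath': {'mpich', 'dyninst'},
    'dyninst': set(),
    'mpich': set(),
}

counter = itertools.count(0)  # breaks ties between equal priorities
# Priority = number of uninstalled dependencies; 0 means ready to build.
pq = [(len(d), next(counter), name) for name, d in sorted(deps.items())]
heapq.heapify(pq)

installed, order = set(), []
while pq:
    priority, _, name = heapq.heappop(pq)
    remaining = len(deps[name] - installed)
    if remaining != priority:
        # Stale entry: re-queue with the updated (lower) priority.
        heapq.heappush(pq, (remaining, next(counter), name))
    else:
        # priority is 0 here, since this toy DAG has no cycles or failures.
        installed.add(name)
        order.append(name)
```

Every package pops only once its dependency count reaches zero, so leaves install first and the explicit package last; multiple processes running this loop against shared state would naturally pick up different priority-0 tasks.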
+
+import glob
+import heapq
+import itertools
+import os
+import shutil
+import six
+import sys
+import time
+
+import llnl.util.lock as lk
+import llnl.util.tty as tty
+import spack.binary_distribution as binary_distribution
+import spack.compilers
+import spack.error
+import spack.hooks
+import spack.package
+import spack.repo
+import spack.store
+
+from llnl.util.filesystem import \
+ chgrp, install, install_tree, mkdirp, touch, working_dir
+from llnl.util.tty.color import colorize, cwrite
+from llnl.util.tty.log import log_output
+from spack.package_prefs import get_package_dir_permissions, get_package_group
+from spack.util.environment import dump_environment
+from spack.util.executable import which
+
+
+#: Counter to support unique spec sequencing that is used to ensure packages
+#: with the same priority are (initially) processed in the order in which they
+#: were added (see https://docs.python.org/2/library/heapq.html).
+_counter = itertools.count(0)
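The comment above points at the ``heapq`` docs; a minimal illustration of what the counter buys (package names are hypothetical) is that equal-priority entries pop in insertion order and the payloads themselves are never compared:

```python
import heapq
import itertools

counter = itertools.count(0)
pq = []
# Three tasks, all with priority 0 (no uninstalled dependencies).
for name in ['zlib', 'mpich', 'cmake']:
    # (priority, sequence, payload): the monotonically increasing sequence
    # breaks priority ties, so the heap never has to compare payloads.
    heapq.heappush(pq, (0, next(counter), name))

order = [heapq.heappop(pq)[2] for _ in range(3)]
```

Without the counter, ties would fall through to comparing the payloads, which both loses FIFO ordering and fails outright for uncomparable objects.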
+
+#: Build status indicating task has been added.
+STATUS_ADDED = 'queued'
+
+#: Build status indicating the spec failed to install
+STATUS_FAILED = 'failed'
+
+#: Build status indicating the spec is being installed (possibly by another
+#: process)
+STATUS_INSTALLING = 'installing'
+
+#: Build status indicating the spec was successfully installed
+STATUS_INSTALLED = 'installed'
+
+#: Build status indicating the task has been popped from the queue
+STATUS_DEQUEUED = 'dequeued'
+
+#: Build status indicating task has been removed (to maintain priority
+#: queue invariants).
+STATUS_REMOVED = 'removed'
+
+
+def _handle_external_and_upstream(pkg, explicit):
+ """
+ Determine if the package is external or upstream and register it in the
+ database if it is an external package.
+
+ Args:
+ pkg (Package): the package whose installation is under consideration
+ explicit (bool): the package was explicitly requested by the user
+ Return:
+ (bool): ``True`` if the package is external or upstream (so not to
+ be installed locally), otherwise ``False``
+ """
+ # For external packages the workflow is simplified, and basically
+ # consists in module file generation and registration in the DB.
+ if pkg.spec.external:
+ _process_external_package(pkg, explicit)
+ return True
+
+ if pkg.installed_upstream:
+ tty.verbose('{0} is installed in an upstream Spack instance at {1}'
+ .format(package_id(pkg), pkg.spec.prefix))
+ _print_installed_pkg(pkg.prefix)
+
+ # This will result in skipping all post-install hooks. In the case
+ # of modules this is considered correct because we want to retrieve
+ # the module from the upstream Spack instance.
+ return True
+
+ return False
+
+
+def _do_fake_install(pkg):
+ """
+ Make a fake install directory containing fake executables, headers,
+ and libraries.
+
+ Args:
+ pkg (PackageBase): the package whose installation is to be faked
+ """
+
+ command = pkg.name
+ header = pkg.name
+ library = pkg.name
+
+ # Avoid double 'lib' for packages whose names already start with lib
+ if not pkg.name.startswith('lib'):
+ library = 'lib' + library
+
+ dso_suffix = '.dylib' if sys.platform == 'darwin' else '.so'
+ chmod = which('chmod')
+
+ # Install fake command
+ mkdirp(pkg.prefix.bin)
+ touch(os.path.join(pkg.prefix.bin, command))
+ chmod('+x', os.path.join(pkg.prefix.bin, command))
+
+ # Install fake header file
+ mkdirp(pkg.prefix.include)
+ touch(os.path.join(pkg.prefix.include, header + '.h'))
+
+ # Install fake shared and static libraries
+ mkdirp(pkg.prefix.lib)
+ for suffix in [dso_suffix, '.a']:
+ touch(os.path.join(pkg.prefix.lib, library + suffix))
+
+ # Install fake man page
+ mkdirp(pkg.prefix.man.man1)
+
+ packages_dir = spack.store.layout.build_packages_path(pkg.spec)
+ dump_packages(pkg.spec, packages_dir)
+
+
+def _packages_needed_to_bootstrap_compiler(pkg):
+ """
+ Return a list of packages required to bootstrap ``pkg``'s compiler
+
+ Checks Spack's compiler configuration for a compiler that
+ matches the package spec.
+
+ Args:
+ pkg (Package): the package that may need its compiler installed
+
+ Return:
+ (list) list of ``(PackageBase, bool)`` tuples for the concretized
+ compiler-related packages that need to be installed, where the
+ bool indicates whether the package is the bootstrap compiler
+ (``True``) or one of its dependencies (``False``). The list
+ will be empty if there are no compilers.
+ """
+ tty.debug('Bootstrapping {0} compiler for {1}'
+ .format(pkg.spec.compiler, package_id(pkg)))
+ compilers = spack.compilers.compilers_for_spec(
+ pkg.spec.compiler, arch_spec=pkg.spec.architecture)
+ if compilers:
+ return []
+
+ dep = spack.compilers.pkg_spec_for_compiler(pkg.spec.compiler)
+ dep.architecture = pkg.spec.architecture
+ # concrete CompilerSpec has less info than concrete Spec
+ # concretize as Spec to add that information
+ dep.concretize()
+ packages = [(s.package, False) for
+ s in dep.traverse(order='post', root=False)]
+ packages.append((dep.package, True))
+ return packages
+
+
+def _hms(seconds):
+ """
+ Convert seconds to hours, minutes, seconds
+
+ Args:
+ seconds (float): time to be converted, in seconds
+
+ Return:
+ (str) String representation of the time as #h #m #.##s
+ """
+ m, s = divmod(seconds, 60)
+ h, m = divmod(m, 60)
+
+ parts = []
+ if h:
+ parts.append("%dh" % h)
+ if m:
+ parts.append("%dm" % m)
+ if s:
+ parts.append("%.2fs" % s)
+ return ' '.join(parts)
+
+
+def _install_from_cache(pkg, cache_only, explicit):
+ """
+ Install the package from binary cache
+
+ Args:
+ pkg (PackageBase): the package to install from the binary cache
+ cache_only (bool): only install from binary cache
+ explicit (bool): ``True`` if installing the package was explicitly
+ requested by the user, otherwise, ``False``
+
+ Return:
+ (bool) ``True`` if the package was installed from binary cache,
+ ``False`` otherwise
+ """
+ installed_from_cache = _try_install_from_binary_cache(pkg, explicit)
+ pkg_id = package_id(pkg)
+ if not installed_from_cache:
+ pre = 'No binary for {0} found'.format(pkg_id)
+ if cache_only:
+ tty.die('{0} when cache-only specified'.format(pre))
+
+ tty.debug('{0}: installing from source'.format(pre))
+ return False
+
+ tty.debug('Successfully installed {0} from binary cache'.format(pkg_id))
+ _print_installed_pkg(pkg.spec.prefix)
+ spack.hooks.post_install(pkg.spec)
+ return True
+
+
+def _print_installed_pkg(message):
+ """
+ Output a message with a package icon.
+
+ Args:
+ message (str): message to be output
+ """
+ cwrite('@*g{[+]} ')
+ print(message)
+
+
+def _process_external_package(pkg, explicit):
+ """
+ Helper function to run post install hooks and register external packages.
+
+ Args:
+ pkg (Package): the external package
+ explicit (bool): if the package was requested explicitly by the user,
+ ``False`` if it was pulled in as a dependency of an explicit
+ package.
+ """
+ assert pkg.spec.external, \
+ 'Expected to post-install/register an external package.'
+
+ pre = '{s.name}@{s.version} :'.format(s=pkg.spec)
+ spec = pkg.spec
+
+ if spec.external_module:
+ tty.msg('{0} has external module in {1}'
+ .format(pre, spec.external_module))
+ tty.msg('{0} is actually installed in {1}'
+ .format(pre, spec.external_path))
+ else:
+ tty.msg("{0} externally installed in {1}"
+ .format(pre, spec.external_path))
+
+ try:
+ # Check if the package was already registered in the DB.
+ # If this is the case, then just exit.
+ rec = spack.store.db.get_record(spec)
+ tty.msg('{0} already registered in DB'.format(pre))
+
+ # Update the value of rec.explicit if it is necessary
+ _update_explicit_entry_in_db(pkg, rec, explicit)
+
+ except KeyError:
+ # If not, register it and generate the module file.
+ # For external packages we just need to run
+ # post-install hooks to generate module files.
+ tty.msg('{0} generating module file'.format(pre))
+ spack.hooks.post_install(spec)
+
+ # Add to the DB
+ tty.msg('{0} registering into DB'.format(pre))
+ spack.store.db.add(spec, None, explicit=explicit)
+
+
+def _process_binary_cache_tarball(pkg, binary_spec, explicit):
+ """
+ Process the binary cache tarball.
+
+ Args:
+ pkg (PackageBase): the package being installed
+ binary_spec (Spec): the spec whose cache has been confirmed
+ explicit (bool): the package was explicitly requested by the user
+
+ Return:
+ (bool) ``True`` if the package was installed from binary cache,
+ else ``False``
+ """
+ tarball = binary_distribution.download_tarball(binary_spec)
+ # see #10063 : install from source if tarball doesn't exist
+ if tarball is None:
+ tty.msg('{0} exists in binary cache but with different hash'
+ .format(pkg.name))
+ return False
+
+ pkg_id = package_id(pkg)
+ tty.msg('Installing {0} from binary cache'.format(pkg_id))
+ binary_distribution.extract_tarball(binary_spec, tarball, allow_root=False,
+ unsigned=False, force=False)
+ pkg.installed_from_binary_cache = True
+ spack.store.db.add(pkg.spec, spack.store.layout, explicit=explicit)
+ return True
+
+
+def _try_install_from_binary_cache(pkg, explicit):
+ """
+ Try to install the package from binary cache.
+
+ Args:
+ pkg (PackageBase): the package to be installed from binary cache
+ explicit (bool): the package was explicitly requested by the user
+ """
+ pkg_id = package_id(pkg)
+ tty.debug('Searching for binary cache of {0}'.format(pkg_id))
+ specs = binary_distribution.get_specs()
+ binary_spec = spack.spec.Spec.from_dict(pkg.spec.to_dict())
+ binary_spec._mark_concrete()
+ if binary_spec not in specs:
+ return False
+
+ return _process_binary_cache_tarball(pkg, binary_spec, explicit)
+
+
+def _update_explicit_entry_in_db(pkg, rec, explicit):
+ """
+ Ensure the spec is marked explicit in the database.
+
+ Args:
+ pkg (Package): the package whose install record is being updated
+ rec (InstallRecord): the external package
+ explicit (bool): if the package was requested explicitly by the user,
+ ``False`` if it was pulled in as a dependency of an explicit
+ package.
+ """
+ if explicit and not rec.explicit:
+ with spack.store.db.write_transaction():
+ rec = spack.store.db.get_record(pkg.spec)
+ message = '{s.name}@{s.version} : marking the package explicit'
+ tty.msg(message.format(s=pkg.spec))
+ rec.explicit = True
+
+
+def dump_packages(spec, path):
+ """
+ Dump all package information for a spec and its dependencies.
+
+ This creates a package repository within path for every namespace in the
+ spec DAG, and fills the repos with package files and patch files for every
+ node in the DAG.
+
+ Args:
+ spec (Spec): the Spack spec whose package information is to be dumped
+ path (str): the path to the build packages directory
+ """
+ mkdirp(path)
+
+ # Copy in package.py files from any dependencies.
+ # Note that we copy them in as they are in the *install* directory
+ # NOT as they are in the repository, because we want a snapshot of
+ # how *this* particular build was done.
+ for node in spec.traverse(deptype=all):
+ if node is not spec:
+ # Locate the dependency package in the install tree and find
+ # its provenance information.
+ source = spack.store.layout.build_packages_path(node)
+ source_repo_root = os.path.join(source, node.namespace)
+
+ # There's no provenance installed for the source package. Skip it.
+ # User can always get something current from the builtin repo.
+ if not os.path.isdir(source_repo_root):
+ continue
+
+ # Create a source repo and get the pkg directory out of it.
+ try:
+ source_repo = spack.repo.Repo(source_repo_root)
+ source_pkg_dir = source_repo.dirname_for_package_name(
+ node.name)
+ except spack.repo.RepoError:
+ tty.warn("Warning: Couldn't copy in provenance for {0}"
+ .format(node.name))
+ # No package directory to copy from, so skip this node.
+ continue
+
+ # Create a destination repository
+ dest_repo_root = os.path.join(path, node.namespace)
+ if not os.path.exists(dest_repo_root):
+ spack.repo.create_repo(dest_repo_root)
+ repo = spack.repo.Repo(dest_repo_root)
+
+ # Get the location of the package in the dest repo.
+ dest_pkg_dir = repo.dirname_for_package_name(node.name)
+ if node is not spec:
+ install_tree(source_pkg_dir, dest_pkg_dir)
+ else:
+ spack.repo.path.dump_provenance(node, dest_pkg_dir)
+
+
+def install_msg(name, pid):
+ """
+ Colorize the name/id of the package being installed
+
+ Args:
+ name (str): Name/id of the package being installed
+ pid (int): id of the installer process
+
+ Return:
+ (str) Colorized installing message
+ """
+ return '{0}: '.format(pid) + colorize('@*{Installing} @*g{%s}' % name)
+
+
+def log(pkg):
+ """
+ Copy provenance into the install directory on success
+
+ Args:
+ pkg (Package): the package that was installed and built
+ """
+ packages_dir = spack.store.layout.build_packages_path(pkg.spec)
+
+ # Remove first if we're overwriting another build
+ # (can happen with spack setup)
+ try:
+ # log and env install paths are inside this
+ shutil.rmtree(packages_dir)
+ except Exception as e:
+ # FIXME : this potentially catches too many things...
+ tty.debug(e)
+
+ # Archive the whole stdout + stderr for the package
+ install(pkg.log_path, pkg.install_log_path)
+
+ # Archive the environment used for the build
+ install(pkg.env_path, pkg.install_env_path)
+
+ # Finally, archive files that are specific to each package
+ with working_dir(pkg.stage.path):
+ errors = six.StringIO()
+ target_dir = os.path.join(
+ spack.store.layout.metadata_path(pkg.spec), 'archived-files')
+
+ for glob_expr in pkg.archive_files:
+ # Check that we are trying to copy things that are
+ # in the stage tree (not arbitrary files)
+ abs_expr = os.path.realpath(glob_expr)
+ if os.path.realpath(pkg.stage.path) not in abs_expr:
+ errors.write('[OUTSIDE SOURCE PATH]: {0}\n'.format(glob_expr))
+ continue
+ # Now that we are sure that the path is within the correct
+ # folder, make it relative and check for matches
+ if os.path.isabs(glob_expr):
+ glob_expr = os.path.relpath(glob_expr, pkg.stage.path)
+ files = glob.glob(glob_expr)
+ for f in files:
+ try:
+ target = os.path.join(target_dir, f)
+ # We must ensure that the directory exists before
+ # copying a file in
+ mkdirp(os.path.dirname(target))
+ install(f, target)
+ except Exception as e:
+ tty.debug(e)
+
+ # Here try to be conservative, and avoid discarding
+ # the whole install procedure because of copying a
+ # single file failed
+ errors.write('[FAILED TO ARCHIVE]: {0}'.format(f))
+
+ if errors.getvalue():
+ error_file = os.path.join(target_dir, 'errors.txt')
+ mkdirp(target_dir)
+ with open(error_file, 'w') as err:
+ err.write(errors.getvalue())
+ tty.warn('Errors occurred when archiving files.\n\t'
+ 'See: {0}'.format(error_file))
+
+ dump_packages(pkg.spec, packages_dir)
+
+
+def package_id(pkg):
+ """A "unique" package identifier for installation purposes
+
+ The identifier is used to track build tasks, locks, install, and
+ failure statuses.
+
+ Args:
+ pkg (PackageBase): the package from which the identifier is derived
+ """
+ if not pkg.spec.concrete:
+ raise ValueError("Cannot provide a unique, readable id when "
+ "the spec is not concretized.")
+ # TODO: Restore use of the dag hash once resolve issues with different
+ # TODO: hashes being associated with dependents of different packages
+ # TODO: within the same install, such as the hash for callpath being
+ # TODO: different for mpich and dyninst in the
+ # TODO: test_force_uninstall_and_reinstall_by_hash` test.
+
+ # TODO: Is the extra "readability" of the version worth keeping?
+ # return "{0}-{1}-{2}".format(pkg.name, pkg.version, pkg.spec.dag_hash())
+
+ # TODO: Including the version causes some installs to fail. Specifically
+ # TODO: failures occur when the version of a dependent of one of the
+ # TODO: packages does not match the version that is installed.
+ # return "{0}-{1}".format(pkg.name, pkg.version)
+
+ return pkg.name
+
+
+install_args_docstring = """
+ cache_only (bool): Fail if binary package unavailable.
+ dirty (bool): Don't clean the build environment before installing.
+ explicit (bool): True if package was explicitly installed, False
+ if package was implicitly installed (as a dependency).
+ fake (bool): Don't really build; install fake stub files instead.
+ force (bool): Install again, even if already installed.
+ install_deps (bool): Install dependencies before installing this
+ package
+ install_source (bool): By default, source is not installed, but
+ for debugging it might be useful to keep it around.
+ keep_prefix (bool): Keep install prefix on failure. By default,
+ destroys it.
+ keep_stage (bool): By default, stage is destroyed only if there
+ are no exceptions during build. Set to True to keep the stage
+ even with exceptions.
+ restage (bool): Force spack to restage the package source.
+ skip_patch (bool): Skip patch stage of build if True.
+ stop_at (InstallPhase): last installation phase to be executed
+ (or None)
+ tests (bool or list or set): False to run no tests, True to test
+ all packages, or a list of package names to run tests for some
+ use_cache (bool): Install from binary package, if available.
+ verbose (bool): Display verbose build output (by default,
+ suppresses it)
+ """
+
+
+class PackageInstaller(object):
+ '''
+ Class for managing the install process for a Spack instance based on a
+ bottom-up DAG approach.
+
+ This installer can coordinate concurrent batch and interactive, local
+ and distributed (on a shared file system) builds for the same Spack
+ instance.
+ '''
+
+ def __init__(self, pkg):
+ """
+ Initialize and set up the build specs.
+
+ Args:
+ pkg (PackageBase): the package being installed, whose spec is
+ concrete
+
+ Return:
+ (PackageInstaller) instance
+ """
+ if not isinstance(pkg, spack.package.PackageBase):
+ raise ValueError("{0} must be a package".format(str(pkg)))
+
+ if not pkg.spec.concrete:
+ raise ValueError("{0}: Can only install concrete packages."
+ .format(pkg.spec.name))
+
+ # The package to be built
+ self.pkg = pkg
+
+ # The identifier used for the explicit package being built
+ self.pkg_id = package_id(pkg)
+
+ # Priority queue of build tasks
+ self.build_pq = []
+
+ # Mapping of unique package ids to build task
+ self.build_tasks = {}
+
+ # Cache of package locks for failed packages, keyed on package's ids
+ self.failed = {}
+
+ # Cache the PID for distributed build messaging
+ self.pid = os.getpid()
+
+ # Cache of installed packages' unique ids
+ self.installed = set()
+
+ # Data store layout
+ self.layout = spack.store.layout
+
+ # Locks on specs being built, keyed on the package's unique id
+ self.locks = {}
+
+ def __repr__(self):
+ """Returns a formal representation of the package installer."""
+ rep = '{0}('.format(self.__class__.__name__)
+ for attr, value in self.__dict__.items():
+ rep += '{0}={1}, '.format(attr, value.__repr__())
+ return '{0})'.format(rep.strip(', '))
+
+ def __str__(self):
+ """Returns a printable version of the package installer."""
+ tasks = '#tasks={0}'.format(len(self.build_tasks))
+ failed = 'failed ({0}) = {1}'.format(len(self.failed), self.failed)
+ installed = 'installed ({0}) = {1}'.format(
+ len(self.installed), self.installed)
+ return '{0} ({1}): {2}; {3}; {4}'.format(
+ self.pkg_id, self.pid, tasks, installed, failed)
+
+ def _add_bootstrap_compilers(self, pkg):
+ """
+ Add bootstrap compilers and dependencies to the build queue.
+
+ Args:
+ pkg (PackageBase): the package with possible compiler dependencies
+ """
+ packages = _packages_needed_to_bootstrap_compiler(pkg)
+ for (comp_pkg, is_compiler) in packages:
+ if package_id(comp_pkg) not in self.build_tasks:
+ self._push_task(comp_pkg, is_compiler, 0, 0, STATUS_ADDED)
+
+ def _prepare_for_install(self, task, keep_prefix, keep_stage,
+ restage=False):
+ """
+ Check the database and leftover installation directories/files and
+ prepare for a new install attempt for an uninstalled package.
+
+ Preparation includes cleaning up installation and stage directories
+ and ensuring the database is up-to-date.
+
+ Args:
+ task (BuildTask): the build task whose associated package is
+ being checked
+ keep_prefix (bool): ``True`` if the prefix is to be kept on
+ failure, otherwise ``False``
+ keep_stage (bool): ``True`` if the stage is to be kept even if
+ there are exceptions, otherwise ``False``
+ restage (bool): ``True`` if forcing Spack to restage the package
+ source, otherwise ``False``
+ """
+ # Make sure the package is ready to be locally installed.
+ self._ensure_install_ready(task.pkg)
+
+ # Skip file system operations if we've already gone through them for
+ # this spec.
+ if task.pkg_id in self.installed:
+ # Already determined the spec has been installed
+ return
+
+ # Determine if the spec is flagged as installed in the database
+ try:
+ rec = spack.store.db.get_record(task.pkg.spec)
+ installed_in_db = rec.installed if rec else False
+ except KeyError:
+ # KeyError is raised if there is no matching spec in the database
+ # (versus no matching specs that are installed).
+ rec = None
+ installed_in_db = False
+
+ # Make sure the installation directory is in the desired state
+ # for uninstalled specs.
+ partial = False
+ if not installed_in_db and os.path.isdir(task.pkg.spec.prefix):
+ if not keep_prefix:
+ task.pkg.remove_prefix()
+ else:
+ tty.debug('{0} is partially installed'
+ .format(task.pkg_id))
+ partial = True
+
+ # Destroy the stage for a locally installed, non-DIYStage, package
+ if restage and task.pkg.stage.managed_by_spack:
+ task.pkg.stage.destroy()
+
+ if not partial and self.layout.check_installed(task.pkg.spec):
+ self._update_installed(task)
+
+ # Only update the explicit entry once for the explicit package
+ if task.pkg_id == self.pkg_id:
+ _update_explicit_entry_in_db(task.pkg, rec, True)
+
+ # In case the stage directory has already been created, this
+ # check ensures it is removed after we checked that the spec is
+ # installed.
+ if not keep_stage:
+ task.pkg.stage.destroy()
+
+ def _check_last_phase(self, **kwargs):
+ """
+ Ensures the package being installed has a valid last phase before
+ proceeding with the installation.
+
+ The ``stop_at`` argument is removed from the installation arguments.
+
+ Args:
+ kwargs:
+ ``stop_at``: last installation phase to be executed (or None)
+ """
+ self.pkg.last_phase = kwargs.pop('stop_at', None)
+ if self.pkg.last_phase is not None and \
+ self.pkg.last_phase not in self.pkg.phases:
+ tty.die('\'{0}\' is not an allowed phase for package {1}'
+ .format(self.pkg.last_phase, self.pkg.name))
+
+ def _cleanup_all_tasks(self):
+ """Cleanup all build tasks to include releasing their locks."""
+ for pkg_id in self.locks:
+ self._release_lock(pkg_id)
+
+ for pkg_id in self.failed:
+ self._cleanup_failed(pkg_id)
+
+ ids = list(self.build_tasks)
+ for pkg_id in ids:
+ try:
+ self._remove_task(pkg_id)
+ except Exception:
+ pass
+
+ def _cleanup_failed(self, pkg_id):
+ """
+ Cleanup any failed markers for the package
+
+ Args:
+ pkg_id (str): identifier for the failed package
+ """
+ lock = self.failed.get(pkg_id, None)
+ if lock is not None:
+ err = "{0} exception when removing failure mark for {1}: {2}"
+ msg = 'Removing failure mark on {0}'
+ try:
+ tty.verbose(msg.format(pkg_id))
+ lock.release_write()
+ except Exception as exc:
+ tty.warn(err.format(exc.__class__.__name__, pkg_id, str(exc)))
+
+ def _cleanup_task(self, pkg):
+ """
+ Cleanup the build task for the spec
+
+ Args:
+ pkg (PackageBase): the package being installed
+ """
+ self._remove_task(package_id(pkg))
+
+ # Ensure we have a read lock to prevent others from uninstalling the
+ # spec during our installation.
+ self._ensure_locked('read', pkg)
+
+ def _ensure_install_ready(self, pkg):
+ """
+ Ensure the package is ready to be installed locally, which includes
+ already having acquired its prefix lock.
+
+ Args:
+ pkg (PackageBase): the package being locally installed
+ """
+ pkg_id = package_id(pkg)
+ pre = "{0} cannot be installed locally:".format(pkg_id)
+
+ # External packages cannot be installed locally.
+ if pkg.spec.external:
+ raise ExternalPackageError('{0} {1}'.format(pre, 'is external'))
+
+ # Upstream packages cannot be installed locally.
+ if pkg.installed_upstream:
+ raise UpstreamPackageError('{0} {1}'.format(pre, 'is upstream'))
+
+ # The package must have a prefix lock at this stage.
+ if pkg_id not in self.locks:
+ raise InstallLockError('{0} {1}'.format(pre, 'not locked'))
+
+ def _ensure_locked(self, lock_type, pkg):
+ """
+ Add a prefix lock of the specified type for the package spec
+
+ If the lock exists, then adjust accordingly. That is, read locks
+ will be upgraded to write locks if a write lock is requested and
+ write locks will be downgraded to read locks if a read lock is
+ requested.
+
+ The lock timeout for write locks is deliberately near zero seconds in
+ order to ensure the current process proceeds as quickly as possible to
+ the next spec.
+
+ Args:
+ lock_type (str): 'read' for a read lock, 'write' for a write lock
+ pkg (PackageBase): the package whose spec is being installed
+
+ Return:
+ (lock_type, lock) tuple where lock will be None if it could not
+ be obtained
+ """
+ assert lock_type in ['read', 'write'], \
+ '"{0}" is not a supported package management lock type' \
+ .format(lock_type)
+
+ pkg_id = package_id(pkg)
+ ltype, lock = self.locks.get(pkg_id, (lock_type, None))
+ if lock and ltype == lock_type:
+ return ltype, lock
+
+ desc = '{0} lock'.format(lock_type)
+ msg = '{0} a {1} on {2} with timeout {3}'
+ err = 'Failed to {0} a {1} for {2} due to {3}: {4}'
+
+ if lock_type == 'read':
+ # Wait until the other process finishes if there are no more
+ # build tasks with priority 0 (i.e., with no uninstalled
+ # dependencies).
+ no_p0 = len(self.build_tasks) == 0 or not self._next_is_pri0()
+ timeout = None if no_p0 else 3
+ else:
+ timeout = 1e-9 # Near 0 to iterate through install specs quickly
+
+ try:
+ if lock is None:
+ tty.debug(msg.format('Acquiring', desc, pkg_id, timeout))
+ op = 'acquire'
+ lock = spack.store.db.prefix_lock(pkg.spec, timeout)
+ if timeout != lock.default_timeout:
+ tty.warn('Expected prefix lock timeout {0}, not {1}'
+ .format(timeout, lock.default_timeout))
+ if lock_type == 'read':
+ lock.acquire_read()
+ else:
+ lock.acquire_write()
+
+ elif lock_type == 'read': # write -> read
+ # Only get here if the current lock is a write lock, which
+ # must be downgraded to be a read lock
+ # Retain the original lock timeout, which is in the lock's
+ # default_timeout setting.
+ tty.debug(msg.format('Downgrading to', desc, pkg_id,
+ lock.default_timeout))
+ op = 'downgrade to'
+ lock.downgrade_write_to_read()
+
+ else: # read -> write
+ # Only get here if the current lock is a read lock, which
+ # must be upgraded to be a write lock
+ tty.debug(msg.format('Upgrading to', desc, pkg_id, timeout))
+ op = 'upgrade to'
+ lock.upgrade_read_to_write(timeout)
+ tty.verbose('{0} is now {1} locked'.format(pkg_id, lock_type))
+
+ except (lk.LockDowngradeError, lk.LockTimeoutError) as exc:
+ tty.debug(err.format(op, desc, pkg_id, exc.__class__.__name__,
+ str(exc)))
+ lock = None
+
+ except (Exception, KeyboardInterrupt, SystemExit) as exc:
+ tty.error(err.format(op, desc, pkg_id, exc.__class__.__name__,
+ str(exc)))
+ self._cleanup_all_tasks()
+ raise
+
+ self.locks[pkg_id] = (lock_type, lock)
+ return self.locks[pkg_id]
+
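The caching and upgrade/downgrade policy above can be seen in isolation. The following is a minimal, standalone sketch (not Spack's implementation; `FakeLock` and `ensure_locked` are hypothetical names): one lock is cached per package id, a repeated request for the same lock type reuses the cached lock, a 'write' request upgrades a cached 'read' lock, and a 'read' request downgrades a cached 'write' lock.

```python
class FakeLock(object):
    """Stand-in for the database prefix lock; the real one blocks with a timeout."""
    def __init__(self):
        self.state = None

    def acquire(self, lock_type):
        # The real lock would acquire (or upgrade/downgrade) the file lock here.
        self.state = lock_type


locks = {}  # pkg_id -> (lock_type, lock)


def ensure_locked(lock_type, pkg_id):
    assert lock_type in ('read', 'write')
    ltype, lock = locks.get(pkg_id, (lock_type, None))
    if lock is not None and ltype == lock_type:
        return ltype, lock  # already held with the requested type
    if lock is None:
        lock = FakeLock()
    lock.acquire(lock_type)  # fresh acquire, upgrade, or downgrade
    locks[pkg_id] = (lock_type, lock)
    return locks[pkg_id]
```

Calling `ensure_locked('read', ...)` while holding a write lock returns the same lock object downgraded, mirroring the write -> read branch above.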
+ def _init_queue(self, install_deps, install_package):
+ """
+ Initialize the build task priority queue and spec state.
+
+ Args:
+ install_deps (bool): ``True`` if installing package dependencies,
+ otherwise ``False``
+ install_package (bool): ``True`` if installing the package,
+ otherwise ``False``
+ """
+ tty.debug('Initializing the build queue for {0}'.format(self.pkg.name))
+ install_compilers = spack.config.get(
+ 'config:install_missing_compilers', False)
+
+ if install_deps:
+ for dep in self.spec.traverse(order='post', root=False):
+ dep_pkg = dep.package
+
+ # First push any missing compilers (if requested)
+ if install_compilers:
+ self._add_bootstrap_compilers(dep_pkg)
+
+ if package_id(dep_pkg) not in self.build_tasks:
+ self._push_task(dep_pkg, False, 0, 0, STATUS_ADDED)
+
+ # Clear any persistent failure markings _unless_ they are
+ # associated with another process in this parallel build
+ # of the spec.
+ spack.store.db.clear_failure(dep, force=False)
+
+ # Push any missing compilers (if requested) as part of the
+ # package dependencies.
+ if install_compilers:
+ self._add_bootstrap_compilers(self.pkg)
+
+ if install_package and self.pkg_id not in self.build_tasks:
+ # Be sure to clear any previous failure
+ spack.store.db.clear_failure(self.pkg.spec, force=True)
+
+ # Now add the package itself, if appropriate
+ self._push_task(self.pkg, False, 0, 0, STATUS_ADDED)
+
+ def _install_task(self, task, **kwargs):
+ """
+ Perform the installation of the requested spec and/or dependency
+ represented by the build task.
+
+ Args:
+ task (BuildTask): the installation build task for a package"""
+
+ cache_only = kwargs.get('cache_only', False)
+ dirty = kwargs.get('dirty', False)
+ fake = kwargs.get('fake', False)
+ install_source = kwargs.get('install_source', False)
+ keep_stage = kwargs.get('keep_stage', False)
+ skip_patch = kwargs.get('skip_patch', False)
+ tests = kwargs.get('tests', False)
+ use_cache = kwargs.get('use_cache', True)
+ verbose = kwargs.get('verbose', False)
+
+ pkg = task.pkg
+ pkg_id = package_id(pkg)
+ explicit = pkg_id == self.pkg_id
+
+ tty.msg(install_msg(pkg_id, self.pid))
+ task.start = task.start or time.time()
+ task.status = STATUS_INSTALLING
+
+ # Use the binary cache if requested
+ if use_cache and _install_from_cache(pkg, cache_only, explicit):
+ self._update_installed(task)
+ return
+
+ pkg.run_tests = (tests is True or tests and pkg.name in tests)
+
+ pre = '{0}: {1}:'.format(self.pid, pkg.name)
+
+ def build_process():
+ """
+ This function implements the process forked for each build.
+
+ It has its own process and python module space set up by
+ build_environment.fork().
+
+ This function's return value is returned to the parent process.
+ """
+ start_time = time.time()
+ if not fake:
+ if not skip_patch:
+ pkg.do_patch()
+ else:
+ pkg.do_stage()
+
+ pkg_id = package_id(pkg)
+ tty.msg('{0} Building {1} [{2}]'
+ .format(pre, pkg_id, pkg.build_system_class))
+
+ # get verbosity from do_install() parameter or saved value
+ echo = verbose
+ if spack.package.PackageBase._verbose is not None:
+ echo = spack.package.PackageBase._verbose
+
+ pkg.stage.keep = keep_stage
+
+ # parent process already has a prefix write lock
+ with pkg.stage:
+ # Run the pre-install hook in the child process after
+ # the directory is created.
+ spack.hooks.pre_install(pkg.spec)
+ if fake:
+ _do_fake_install(pkg)
+ else:
+ source_path = pkg.stage.source_path
+ if install_source and os.path.isdir(source_path):
+ src_target = os.path.join(pkg.spec.prefix, 'share',
+ pkg.name, 'src')
+ tty.msg('{0} Copying source to {1}'
+ .format(pre, src_target))
+ install_tree(pkg.stage.source_path, src_target)
+
+ # Do the real install in the source directory.
+ with working_dir(pkg.stage.source_path):
+ # Save the build environment in a file before building.
+ dump_environment(pkg.env_path)
+
+ # cache debug settings
+ debug_enabled = tty.is_debug()
+
+ # Spawn a daemon that reads from a pipe and redirects
+ # everything to log_path
+ with log_output(pkg.log_path, echo, True) as logger:
+ for phase_name, phase_attr in zip(
+ pkg.phases, pkg._InstallPhase_phases):
+
+ with logger.force_echo():
+ inner_debug = tty.is_debug()
+ tty.set_debug(debug_enabled)
+ tty.msg("{0} Executing phase: '{1}'"
+ .format(pre, phase_name))
+ tty.set_debug(inner_debug)
+
+ # Redirect stdout and stderr to daemon pipe
+ phase = getattr(pkg, phase_attr)
+ phase(pkg.spec, pkg.prefix)
+
+ echo = logger.echo
+ log(pkg)
+
+ # Run post install hooks before build stage is removed.
+ spack.hooks.post_install(pkg.spec)
+
+ # Stop the timer
+ pkg._total_time = time.time() - start_time
+ build_time = pkg._total_time - pkg._fetch_time
+
+ tty.msg('{0} Successfully installed {1}'
+ .format(pre, pkg_id),
+ 'Fetch: {0}. Build: {1}. Total: {2}.'
+ .format(_hms(pkg._fetch_time), _hms(build_time),
+ _hms(pkg._total_time)))
+ _print_installed_pkg(pkg.prefix)
+
+ # preserve verbosity across runs
+ return echo
+
+ # hook that allows tests to inspect the Package before installation
+ # see unit_test_check() docs.
+ if not pkg.unit_test_check():
+ return
+
+ try:
+ self._setup_install_dir(pkg)
+
+ # Fork a child to do the actual installation.
+ # Preserve verbosity settings across installs.
+ spack.package.PackageBase._verbose = spack.build_environment.fork(
+ pkg, build_process, dirty=dirty, fake=fake)
+
+ # Note: PARENT of the build process adds the new package to
+ # the database, so that we don't need to re-read from file.
+ spack.store.db.add(pkg.spec, spack.store.layout,
+ explicit=explicit)
+
+ # If a compiler, ensure it is added to the configuration
+ if task.compiler:
+ spack.compilers.add_compilers_to_config(
+ spack.compilers.find_compilers([pkg.spec.prefix]))
+
+ except StopIteration as e:
+ # A StopIteration exception means that do_install was asked to
+ # stop early from clients.
+ tty.msg('{0} {1}'.format(self.pid, e.message))
+ tty.msg('Package stage directory : {0}'
+ .format(pkg.stage.source_path))
+
+ _install_task.__doc__ += install_args_docstring
+
+ def _next_is_pri0(self):
+ """
+ Determine if the next build task has priority 0
+
+ Return:
+ True if it does, False otherwise
+ """
+ # Leverage the fact that the first entry in the queue is the next
+ # one that will be processed
+ task = self.build_pq[0][1]
+ return task.priority == 0
+
+ def _pop_task(self):
+ """
+ Remove and return the lowest priority build task.
+
+ Source: Variant of function at docs.python.org/2/library/heapq.html
+ """
+ while self.build_pq:
+ task = heapq.heappop(self.build_pq)[1]
+ if task.status != STATUS_REMOVED:
+ del self.build_tasks[task.pkg_id]
+ task.status = STATUS_DEQUEUED
+ return task
+ return None
+
+ def _push_task(self, pkg, compiler, start, attempts, status):
+ """
+ Create and push (or queue) a build task for the package.
+
+ Source: Customization of "add_task" function at
+ docs.python.org/2/library/heapq.html
+ """
+ msg = "{0} a build task for {1} with status '{2}'"
+ pkg_id = package_id(pkg)
+
+ # Ensure we do not (re-)queue installed or failed packages.
+ assert pkg_id not in self.installed
+ assert pkg_id not in self.failed
+
+ # Remove any associated build task since its sequence will change
+ self._remove_task(pkg_id)
+ desc = 'Queueing' if attempts == 0 else 'Requeueing'
+ tty.verbose(msg.format(desc, pkg_id, status))
+
+ # Now add the new task to the queue with a new sequence number to
+ # ensure it is the last entry popped with the same priority. This
+ # is necessary in case we are re-queueing a task whose priority
+ # was decremented due to the installation of one of its dependencies.
+ task = BuildTask(pkg, compiler, start, attempts, status,
+ self.installed)
+ self.build_tasks[pkg_id] = task
+ heapq.heappush(self.build_pq, (task.key, task))
+
+ def _release_lock(self, pkg_id):
+ """
+ Release any lock on the package
+
+ Args:
+ pkg_id (str): identifier for the package whose lock is to be released
+ """
+ if pkg_id in self.locks:
+ err = "{0} exception when releasing {1} lock for {2}: {3}"
+ msg = 'Releasing {0} lock on {1}'
+ ltype, lock = self.locks[pkg_id]
+ if lock is not None:
+ try:
+ tty.verbose(msg.format(ltype, pkg_id))
+ if ltype == 'read':
+ lock.release_read()
+ else:
+ lock.release_write()
+ except Exception as exc:
+ tty.warn(err.format(exc.__class__.__name__, ltype,
+ pkg_id, str(exc)))
+
+ def _remove_task(self, pkg_id):
+ """
+ Mark the existing package build task as being removed and return it.
+ Raises KeyError if not found.
+
+ Source: Variant of function at docs.python.org/2/library/heapq.html
+
+ Args:
+ pkg_id (str): identifier for the package to be removed
+ """
+ if pkg_id in self.build_tasks:
+ tty.verbose('Removing build task for {0} from list'
+ .format(pkg_id))
+ task = self.build_tasks.pop(pkg_id)
+ task.status = STATUS_REMOVED
+ return task
+ else:
+ return None
+
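The `_push_task`/`_pop_task`/`_remove_task` trio implements the lazy-deletion pattern recommended by the `heapq` documentation that both cite. A runnable sketch of that pattern (names here are illustrative, not Spack's): removed entries are only marked stale and skipped on pop, and a re-pushed package gets a fresh sequence number so its stale heap entry is harmless.

```python
import heapq
import itertools

REMOVED = '<removed>'       # sentinel marking a stale heap entry
counter = itertools.count()  # monotonically increasing sequence numbers
pq, entries = [], {}         # heap of [priority, sequence, pkg_id]


def push(pkg_id, priority):
    """Queue (or requeue) a package, invalidating any prior entry."""
    if pkg_id in entries:
        entries.pop(pkg_id)[-1] = REMOVED  # mark the old heap entry stale
    entry = [priority, next(counter), pkg_id]
    entries[pkg_id] = entry
    heapq.heappush(pq, entry)


def pop():
    """Return the lowest-priority live package, skipping stale entries."""
    while pq:
        priority, _, pkg_id = heapq.heappop(pq)
        if pkg_id is not REMOVED:
            del entries[pkg_id]
            return pkg_id
    return None
```

Because sequence numbers are unique, ties on priority never fall through to comparing the third element, and equal-priority entries pop in insertion order.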
+ def _requeue_task(self, task):
+ """
+ Requeue a task that appears to be in progress by another process.
+
+ Args:
+ task (BuildTask): the installation build task for a package
+ """
+ if task.status not in [STATUS_INSTALLED, STATUS_INSTALLING]:
+ tty.msg('{0} {1}'.format(install_msg(task.pkg_id, self.pid),
+ 'in progress by another process'))
+
+ start = task.start or time.time()
+ self._push_task(task.pkg, task.compiler, start, task.attempts,
+ STATUS_INSTALLING)
+
+ def _setup_install_dir(self, pkg):
+ """
+ Create and ensure proper access controls for the install directory.
+
+ Args:
+ pkg (Package): the package to be installed and built
+ """
+ if not os.path.exists(pkg.spec.prefix):
+ tty.verbose('Creating the installation directory {0}'
+ .format(pkg.spec.prefix))
+ spack.store.layout.create_install_directory(pkg.spec)
+ else:
+ # Set the proper group for the prefix
+ group = get_package_group(pkg.spec)
+ if group:
+ chgrp(pkg.spec.prefix, group)
+
+ # Set the proper permissions.
+ # This has to be done after group because changing groups blows
+ # away the sticky group bit on the directory
+ mode = os.stat(pkg.spec.prefix).st_mode
+ perms = get_package_dir_permissions(pkg.spec)
+ if mode != perms:
+ os.chmod(pkg.spec.prefix, perms)
+
+ # Ensure the metadata path exists as well
+ mkdirp(spack.store.layout.metadata_path(pkg.spec), mode=perms)
+
+ def _update_failed(self, task, mark=False, exc=None):
+ """
+ Update the task and transitive dependents as failed; optionally mark
+ externally as failed; and remove associated build tasks.
+
+ Args:
+ task (BuildTask): the build task for the failed package
+ mark (bool): ``True`` if the package and its dependencies are to
+ be marked as "failed", otherwise, ``False``
+ exc (Exception): optional exception if associated with the failure
+ """
+ pkg_id = task.pkg_id
+ err = '' if exc is None else ': {0}'.format(str(exc))
+ tty.debug('Flagging {0} as failed{1}'.format(pkg_id, err))
+ if mark:
+ self.failed[pkg_id] = spack.store.db.mark_failed(task.spec)
+ else:
+ self.failed[pkg_id] = None
+ task.status = STATUS_FAILED
+
+ for dep_id in task.dependents:
+ if dep_id in self.build_tasks:
+ tty.warn('Skipping build of {0} since {1} failed'
+ .format(dep_id, pkg_id))
+ # Ensure the dependent's uninstalled dependents are
+ # up-to-date and their build tasks removed.
+ dep_task = self.build_tasks[dep_id]
+ self._update_failed(dep_task, mark)
+ self._remove_task(dep_id)
+ else:
+ tty.verbose('No build task for {0} to skip since {1} failed'
+ .format(dep_id, pkg_id))
+
+ def _update_installed(self, task):
+ """
+ Mark the task's spec as installed and update the dependencies of its
+ dependents.
+
+ Args:
+ task (BuildTask): the build task for the installed package
+ """
+ pkg_id = task.pkg_id
+ tty.debug('Flagging {0} as installed'.format(pkg_id))
+
+ self.installed.add(pkg_id)
+ task.status = STATUS_INSTALLED
+ for dep_id in task.dependents:
+ tty.debug('Removing {0} from {1}\'s uninstalled dependencies.'
+ .format(pkg_id, dep_id))
+ if dep_id in self.build_tasks:
+ # Ensure the dependent's uninstalled dependencies are
+ # up-to-date. This will require requeueing the task.
+ dep_task = self.build_tasks[dep_id]
+ dep_task.flag_installed(self.installed)
+ self._push_task(dep_task.pkg, dep_task.compiler,
+ dep_task.start, dep_task.attempts,
+ dep_task.status)
+ else:
+ tty.debug('{0} has no build task to update for {1}\'s success'
+ .format(dep_id, pkg_id))
+
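The bookkeeping in `_update_installed` can be reduced to a small sketch (the package names below are made up, not real Spack specs): a task's priority is the number of its uninstalled dependencies, so flagging a package as installed removes it from every dependent's uninstalled set and lowers those priorities.

```python
# Map each pending task to its set of uninstalled dependencies.
uninstalled = {
    'libpng': {'zlib'},
    'cairo': {'zlib', 'libpng'},
}


def flag_installed(pkg_id):
    """Drop pkg_id from every task's uninstalled-dependency set."""
    for deps in uninstalled.values():
        deps.discard(pkg_id)


def priority(pkg_id):
    """Remaining uninstalled dependencies; 0 means ready to build."""
    return len(uninstalled[pkg_id])
```

In the real code the dependent task must also be requeued (via `_push_task`) after its priority drops, since heap entries are not reordered in place.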
+ def install(self, **kwargs):
+ """
+ Install the package and/or associated dependencies.
+
+ Args:"""
+
+ install_deps = kwargs.get('install_deps', True)
+ keep_prefix = kwargs.get('keep_prefix', False)
+ keep_stage = kwargs.get('keep_stage', False)
+ restage = kwargs.get('restage', False)
+
+ # install_package defaults to True and is popped so that dependencies
+ # are always installed regardless of whether the root was installed
+ install_package = kwargs.pop('install_package', True)
+
+ # Ensure we are not attempting to perform an installation when the
+ # user didn't want to go that far.
+ self._check_last_phase(**kwargs)
+
+ # Skip out early if the spec is not being installed locally (i.e., if
+ # external or upstream).
+ not_local = _handle_external_and_upstream(self.pkg, True)
+ if not_local:
+ return
+
+ # Initialize the build task queue
+ self._init_queue(install_deps, install_package)
+
+ # Proceed with the installation
+ while self.build_pq:
+ task = self._pop_task()
+ if task is None:
+ continue
+
+ pkg, spec = task.pkg, task.pkg.spec
+ pkg_id = package_id(pkg)
+ tty.verbose('Processing {0}: task={1}'.format(pkg_id, task))
+
+ # Ensure that the current spec has NO uninstalled dependencies,
+ # which is assumed to be reflected directly in its priority.
+ #
+ # If the spec has uninstalled dependencies, then there must be
+ # a bug in the code (e.g., priority queue or uninstalled
+ # dependencies handling). So terminate under the assumption that
+ # all subsequent tasks will have non-zero priorities or may be
+ # dependencies of this task.
+ if task.priority != 0:
+ tty.error('Detected uninstalled dependencies for {0}: {1}'
+ .format(pkg_id, task.uninstalled_deps))
+ dep_str = 'dependencies' if task.priority > 1 else 'dependency'
+ raise InstallError(
+ 'Cannot proceed with {0}: {1} uninstalled {2}: {3}'
+ .format(pkg_id, task.priority, dep_str,
+ ','.join(task.uninstalled_deps)))
+
+ # Skip the installation if the spec is not being installed locally
+ # (i.e., if external or upstream) BUT flag it as installed since
+ # some package likely depends on it.
+ if pkg_id != self.pkg_id:
+ not_local = _handle_external_and_upstream(pkg, False)
+ if not_local:
+ self._update_installed(task)
+ _print_installed_pkg(pkg.prefix)
+ continue
+
+ # Flag a failed spec. An (install) prefix lock is not needed here
+ # since failures are tracked with a separate (failure) lock file.
+ if pkg_id in self.failed or spack.store.db.prefix_failed(spec):
+ tty.warn('{0} failed to install'.format(pkg_id))
+ self._update_failed(task)
+ continue
+
+ # Attempt to get a write lock. If we can't get the lock then
+ # another process is likely (un)installing the spec or has
+ # determined the spec has already been installed (though the
+ # other process may be hung).
+ ltype, lock = self._ensure_locked('write', pkg)
+ if lock is None:
+ # Attempt to get a read lock instead. If this fails then
+ # another process has a write lock so must be (un)installing
+ # the spec (or that process is hung).
+ ltype, lock = self._ensure_locked('read', pkg)
+
+ # Requeue the spec if we cannot get at least a read lock so we
+ # can check the status presumably established by another process
+ # -- failed, installed, or uninstalled -- on the next pass.
+ if lock is None:
+ self._requeue_task(task)
+ continue
+
+ # Determine state of installation artifacts and adjust accordingly.
+ self._prepare_for_install(task, keep_prefix, keep_stage,
+ restage)
+
+ # Flag an already installed package
+ if pkg_id in self.installed:
+ # Downgrade to a read lock to preclude other processes from
+ # uninstalling the package until we're done installing its
+ # dependents.
+ ltype, lock = self._ensure_locked('read', pkg)
+ if lock is not None:
+ self._update_installed(task)
+ _print_installed_pkg(pkg.prefix)
+ else:
+ # At this point we've failed to get a write or a read
+ # lock, which means another process has taken a write
+ # lock between our releasing the write and acquiring the
+ # read.
+ #
+ # Requeue the task so we can re-check the status
+ # established by the other process -- failed, installed,
+ # or uninstalled -- on the next pass.
+ self.installed.remove(pkg_id)
+ self._requeue_task(task)
+ continue
+
+ # Having a read lock on an uninstalled pkg may mean another
+ # process completed an uninstall of the software between the
+ # time we failed to acquire the write lock and the time we
+ # took the read lock.
+ #
+ # Requeue the task so we can check the status presumably
+ # established by the other process -- failed, installed, or
+ # uninstalled -- on the next pass.
+ if ltype == 'read':
+ self._requeue_task(task)
+ continue
+
+ # Proceed with the installation since we have an exclusive write
+ # lock on the package.
+ try:
+ self._install_task(task, **kwargs)
+ self._update_installed(task)
+
+ # If we installed then we should keep the prefix
+ last_phase = getattr(pkg, 'last_phase', None)
+ keep_prefix = last_phase is None or keep_prefix
+
+ except spack.directory_layout.InstallDirectoryAlreadyExistsError:
+ tty.debug("Keeping existing install prefix in place.")
+ self._update_installed(task)
+ raise
+
+ except (Exception, KeyboardInterrupt, SystemExit) as exc:
+ # Assuming best effort installs so suppress the exception and
+ # mark as a failure UNLESS this is the explicit package.
+ err = 'Failed to install {0} due to {1}: {2}'
+ tty.error(err.format(pkg.name, exc.__class__.__name__,
+ str(exc)))
+ self._update_failed(task, True, exc)
+
+ if pkg_id == self.pkg_id:
+ raise
+
+ finally:
+ # Remove the install prefix if anything went wrong during
+ # install.
+ if not keep_prefix:
+ pkg.remove_prefix()
+
+ # The subprocess *may* have removed the build stage. Mark it
+ # not created so that the next time pkg.stage is invoked, we
+ # check the filesystem for it.
+ pkg.stage.created = False
+
+ # Perform basic task cleanup for the installed spec to
+ # include downgrading the write to a read lock
+ self._cleanup_task(pkg)
+
+ # Cleanup, which includes releasing all of the read locks
+ self._cleanup_all_tasks()
+
+ # Ensure we properly report if the original/explicit pkg is failed
+ if self.pkg_id in self.failed:
+ msg = ('Installation of {0} failed. Review log for details'
+ .format(self.pkg_id))
+ raise InstallError(msg)
+
+ install.__doc__ += install_args_docstring
+
+ # Helper method to "smooth" the transition from the
+ # spack.package.PackageBase class
+ @property
+ def spec(self):
+ """The specification associated with the package."""
+ return self.pkg.spec
+
+
+class BuildTask(object):
+ """Class for representing the build task for a package."""
+
+ def __init__(self, pkg, compiler, start, attempts, status, installed):
+ """
+ Instantiate a build task for a package.
+
+ Args:
+ pkg (Package): the package to be installed and built
+ compiler (bool): ``True`` if the task is for a bootstrap compiler,
+ otherwise, ``False``
+ start (int): the initial start time for the package, in seconds
+ attempts (int): the number of attempts to install the package
+ status (str): the installation status
+ installed (list of str): the identifiers of packages that have
+ been installed so far
+ """
+
+ # Ensure dealing with a package that has a concrete spec
+ if not isinstance(pkg, spack.package.PackageBase):
+ raise ValueError("{0} must be a package".format(str(pkg)))
+
+ self.pkg = pkg
+ if not self.pkg.spec.concrete:
+ raise ValueError("{0} must have a concrete spec"
+ .format(self.pkg.name))
+
+ # The "unique" identifier for the task's package
+ self.pkg_id = package_id(self.pkg)
+
+ # Initialize the status to an active state. The status is used to
+ # ensure priority queue invariants when tasks are "removed" from the
+ # queue.
+ if status == STATUS_REMOVED:
+ msg = "Cannot create a build task for {0} with status '{1}'"
+ raise InstallError(msg.format(self.pkg_id, status))
+
+ self.status = status
+
+ # Package is associated with a bootstrap compiler
+ self.compiler = compiler
+
+ # The initial start time for processing the spec
+ self.start = start
+
+ # Number of times the task has been queued
+ self.attempts = attempts + 1
+
+ # Set of dependents
+ self.dependents = set(package_id(d.package) for d
+ in self.spec.dependents())
+
+ # Set of dependencies
+ #
+ # Be consistent with respect to the use of dependents and
+ # dependencies. That is, if we use traverse() for transitive
+ # dependencies, then we must also remove transitive dependents
+ # on failure.
+ self.dependencies = set(package_id(d.package) for d in
+ self.spec.dependencies() if
+ package_id(d.package) != self.pkg_id)
+
+ # Set of uninstalled dependencies, which is used to establish
+ # the priority of the build task.
+ self.uninstalled_deps = set(pkg_id for pkg_id in self.dependencies if
+ pkg_id not in installed)
+
+ # Ensure the task gets a unique sequence number to preserve the
+ # order in which it was added.
+ self.sequence = next(_counter)
+
+ def __repr__(self):
+ """Returns a formal representation of the build task."""
+ rep = '{0}('.format(self.__class__.__name__)
+ for attr, value in self.__dict__.items():
+ rep += '{0}={1}, '.format(attr, value.__repr__())
+ return '{0})'.format(rep.strip(', '))
+
+ def __str__(self):
+ """Returns a printable version of the build task."""
+ dependencies = '#dependencies={0}'.format(len(self.dependencies))
+ return ('priority={0}, status={1}, start={2}, {3}'
+ .format(self.priority, self.status, self.start, dependencies))
+
+ def flag_installed(self, installed):
+ """
+ Remove dependencies that are now installed from the uninstalled list.
+
+ Args:
+ installed (list of str): the identifiers of packages that have
+ been installed so far
+ """
+ now_installed = self.uninstalled_deps & set(installed)
+ for pkg_id in now_installed:
+ self.uninstalled_deps.remove(pkg_id)
+ tty.debug('{0}: Removed {1} from uninstalled deps list: {2}'
+ .format(self.pkg_id, pkg_id, self.uninstalled_deps))
+
+ @property
+ def key(self):
+ """The key is the tuple (# uninstalled dependencies, sequence)."""
+ return (self.priority, self.sequence)
+
+ @property
+ def priority(self):
+ """The priority is based on the remaining uninstalled dependencies."""
+ return len(self.uninstalled_deps)
+
+ @property
+ def spec(self):
+ """The specification associated with the package."""
+ return self.pkg.spec
+
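The `key` property's `(priority, sequence)` tuple determines the heap order: tasks with fewer uninstalled dependencies pop first, and ties are broken by insertion order, so a requeued task (which gets a fresh, higher sequence number) pops after older tasks of the same priority. A quick check with plain tuples standing in for `BuildTask` keys:

```python
import heapq

heap = []
heapq.heappush(heap, (1, 3, 'requeued'))  # same priority, newer sequence
heapq.heappush(heap, (0, 2, 'ready'))     # no uninstalled dependencies
heapq.heappush(heap, (1, 1, 'older'))
order = [heapq.heappop(heap)[2] for _ in range(3)]
```

Python compares the tuples lexicographically, so priority dominates and sequence only breaks ties.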
+
+class InstallError(spack.error.SpackError):
+ """Raised when something goes wrong during install or uninstall."""
+
+ def __init__(self, message, long_msg=None):
+ super(InstallError, self).__init__(message, long_msg)
+
+
+class ExternalPackageError(InstallError):
+ """Raised by install() when a package is only for external use."""
+
+
+class InstallLockError(InstallError):
+ """Raised during install when something goes wrong with package locking."""
+
+
+class UpstreamPackageError(InstallError):
+ """Raised during install when something goes wrong with an upstream
+ package."""
diff --git a/lib/spack/spack/main.py b/lib/spack/spack/main.py
index 0821b4e699..37345e8bc2 100644
--- a/lib/spack/spack/main.py
+++ b/lib/spack/spack/main.py
@@ -550,9 +550,11 @@ class SpackCommand(object):
tty.debug(e)
self.error = e
if fail_on_error:
+ self._log_command_output(out)
raise
if fail_on_error and self.returncode not in (None, 0):
+ self._log_command_output(out)
raise SpackCommandError(
"Command exited with code %d: %s(%s)" % (
self.returncode, self.command_name,
@@ -560,6 +562,13 @@ class SpackCommand(object):
return out.getvalue()
+ def _log_command_output(self, out):
+ if tty.is_verbose():
+ fmt = self.command_name + ': {0}'
+ for ln in out.getvalue().split('\n'):
+ if len(ln) > 0:
+ tty.verbose(fmt.format(ln.replace('==> ', '')))
+
def _profile_wrapper(command, parser, args, unknown_args):
import cProfile
diff --git a/lib/spack/spack/package.py b/lib/spack/spack/package.py
index d67c017a70..d123f58cdf 100644
--- a/lib/spack/spack/package.py
+++ b/lib/spack/spack/package.py
@@ -14,7 +14,6 @@ import base64
import contextlib
import copy
import functools
-import glob
import hashlib
import inspect
import os
@@ -48,21 +47,16 @@ import spack.url
import spack.util.environment
import spack.util.web
import spack.multimethod
-import spack.binary_distribution as binary_distribution
-from llnl.util.filesystem import mkdirp, touch, chgrp
-from llnl.util.filesystem import working_dir, install_tree, install
+from llnl.util.filesystem import mkdirp, touch, working_dir
from llnl.util.lang import memoized
from llnl.util.link_tree import LinkTree
-from llnl.util.tty.log import log_output
-from llnl.util.tty.color import colorize
from spack.filesystem_view import YamlFilesystemView
-from spack.util.executable import which
+from spack.installer import \
+ install_args_docstring, PackageInstaller, InstallError
from spack.stage import stage_prefix, Stage, ResourceStage, StageComposite
-from spack.util.environment import dump_environment
from spack.util.package_hash import package_hash
from spack.version import Version
-from spack.package_prefs import get_package_dir_permissions, get_package_group
"""Allowed URL schemes for spack packages."""
_ALLOWED_URL_SCHEMES = ["http", "https", "ftp", "file", "git"]
@@ -430,10 +424,18 @@ class PackageBase(with_metaclass(PackageMeta, PackageViewMixin, object)):
# These are default values for instance variables.
#
+ #: A list or set of build time test functions to be called when tests
+ #: are executed or 'None' if there are no such test functions.
+ build_time_test_callbacks = None
+
#: Most Spack packages are used to install source or binary code while
#: those that do not can be used to install a set of other Spack packages.
has_code = True
+ #: A list or set of install time test functions to be called when tests
+ #: are executed or 'None' if there are no such test functions.
+ install_time_test_callbacks = None
+
#: By default we build in parallel. Subclasses can override this.
parallel = True
@@ -1283,41 +1285,6 @@ class PackageBase(with_metaclass(PackageMeta, PackageViewMixin, object)):
hashlib.sha256(bytes().join(
sorted(hash_content))).digest()).lower()
- def do_fake_install(self):
- """Make a fake install directory containing fake executables,
- headers, and libraries."""
-
- command = self.name
- header = self.name
- library = self.name
-
- # Avoid double 'lib' for packages whose names already start with lib
- if not self.name.startswith('lib'):
- library = 'lib' + library
-
- dso_suffix = '.dylib' if sys.platform == 'darwin' else '.so'
- chmod = which('chmod')
-
- # Install fake command
- mkdirp(self.prefix.bin)
- touch(os.path.join(self.prefix.bin, command))
- chmod('+x', os.path.join(self.prefix.bin, command))
-
- # Install fake header file
- mkdirp(self.prefix.include)
- touch(os.path.join(self.prefix.include, header + '.h'))
-
- # Install fake shared and static libraries
- mkdirp(self.prefix.lib)
- for suffix in [dso_suffix, '.a']:
- touch(os.path.join(self.prefix.lib, library + suffix))
-
- # Install fake man page
- mkdirp(self.prefix.man.man1)
-
- packages_dir = spack.store.layout.build_packages_path(self.spec)
- dump_packages(self.spec, packages_dir)
-
def _has_make_target(self, target):
"""Checks to see if 'target' is a valid target in a Makefile.
@@ -1461,382 +1428,17 @@ class PackageBase(with_metaclass(PackageMeta, PackageViewMixin, object)):
with spack.store.db.prefix_write_lock(self.spec):
yield
- def _process_external_package(self, explicit):
- """Helper function to process external packages.
-
- Runs post install hooks and registers the package in the DB.
-
- Args:
- explicit (bool): if the package was requested explicitly by
- the user, False if it was pulled in as a dependency of an
- explicit package.
- """
- if self.spec.external_module:
- message = '{s.name}@{s.version} : has external module in {module}'
- tty.msg(message.format(s=self, module=self.spec.external_module))
- message = '{s.name}@{s.version} : is actually installed in {path}'
- tty.msg(message.format(s=self, path=self.spec.external_path))
- else:
- message = '{s.name}@{s.version} : externally installed in {path}'
- tty.msg(message.format(s=self, path=self.spec.external_path))
- try:
- # Check if the package was already registered in the DB
- # If this is the case, then just exit
- rec = spack.store.db.get_record(self.spec)
- message = '{s.name}@{s.version} : already registered in DB'
- tty.msg(message.format(s=self))
- # Update the value of rec.explicit if it is necessary
- self._update_explicit_entry_in_db(rec, explicit)
-
- except KeyError:
- # If not register it and generate the module file
- # For external packages we just need to run
- # post-install hooks to generate module files
- message = '{s.name}@{s.version} : generating module file'
- tty.msg(message.format(s=self))
- spack.hooks.post_install(self.spec)
- # Add to the DB
- message = '{s.name}@{s.version} : registering into DB'
- tty.msg(message.format(s=self))
- spack.store.db.add(self.spec, None, explicit=explicit)
-
- def _update_explicit_entry_in_db(self, rec, explicit):
- if explicit and not rec.explicit:
- with spack.store.db.write_transaction():
- rec = spack.store.db.get_record(self.spec)
- rec.explicit = True
- message = '{s.name}@{s.version} : marking the package explicit'
- tty.msg(message.format(s=self))
-
- def try_install_from_binary_cache(self, explicit, unsigned=False):
- tty.msg('Searching for binary cache of %s' % self.name)
- specs = binary_distribution.get_spec(spec=self.spec,
- force=False)
- binary_spec = spack.spec.Spec.from_dict(self.spec.to_dict())
- binary_spec._mark_concrete()
- if binary_spec not in specs:
- return False
- tarball = binary_distribution.download_tarball(binary_spec)
- # see #10063 : install from source if tarball doesn't exist
- if tarball is None:
- tty.msg('%s exist in binary cache but with different hash' %
- self.name)
- return False
- tty.msg('Installing %s from binary cache' % self.name)
- binary_distribution.extract_tarball(
- binary_spec, tarball, allow_root=False,
- unsigned=unsigned, force=False)
- self.installed_from_binary_cache = True
- spack.store.db.add(
- self.spec, spack.store.layout, explicit=explicit)
- return True
-
- def bootstrap_compiler(self, **kwargs):
- """Called by do_install to setup ensure Spack has the right compiler.
-
- Checks Spack's compiler configuration for a compiler that
- matches the package spec. If none are configured, installs and
- adds to the compiler configuration the compiler matching the
- CompilerSpec object."""
- compilers = spack.compilers.compilers_for_spec(
- self.spec.compiler,
- arch_spec=self.spec.architecture
- )
- if not compilers:
- dep = spack.compilers.pkg_spec_for_compiler(self.spec.compiler)
- dep.architecture = self.spec.architecture
- # concrete CompilerSpec has less info than concrete Spec
- # concretize as Spec to add that information
- dep.concretize()
- dep.package.do_install(**kwargs)
- spack.compilers.add_compilers_to_config(
- spack.compilers.find_compilers([dep.prefix])
- )
-
def do_install(self, **kwargs):
- """Called by commands to install a package and its dependencies.
+ """Called by commands to install a package and/or its dependencies.
Package implementations should override install() to describe
their build process.
- Args:
- keep_prefix (bool): Keep install prefix on failure. By default,
- destroys it.
- keep_stage (bool): By default, stage is destroyed only if there
- are no exceptions during build. Set to True to keep the stage
- even with exceptions.
- install_source (bool): By default, source is not installed, but
- for debugging it might be useful to keep it around.
- install_deps (bool): Install dependencies before installing this
- package
- skip_patch (bool): Skip patch stage of build if True.
- verbose (bool): Display verbose build output (by default,
- suppresses it)
- fake (bool): Don't really build; install fake stub files instead.
- explicit (bool): True if package was explicitly installed, False
- if package was implicitly installed (as a dependency).
- tests (bool or list or set): False to run no tests, True to test
- all packages, or a list of package names to run tests for some
- dirty (bool): Don't clean the build environment before installing.
- restage (bool): Force spack to restage the package source.
- force (bool): Install again, even if already installed.
- use_cache (bool): Install from binary package, if available.
- cache_only (bool): Fail if binary package unavailable.
- stop_at (InstallPhase): last installation phase to be executed
- (or None)
- """
- if not self.spec.concrete:
- raise ValueError("Can only install concrete packages: %s."
- % self.spec.name)
-
- keep_prefix = kwargs.get('keep_prefix', False)
- keep_stage = kwargs.get('keep_stage', False)
- install_source = kwargs.get('install_source', False)
- install_deps = kwargs.get('install_deps', True)
- skip_patch = kwargs.get('skip_patch', False)
- verbose = kwargs.get('verbose', False)
- fake = kwargs.get('fake', False)
- explicit = kwargs.get('explicit', False)
- tests = kwargs.get('tests', False)
- dirty = kwargs.get('dirty', False)
- restage = kwargs.get('restage', False)
-
- # install_self defaults True and is popped so that dependencies are
- # always installed regardless of whether the root was installed
- install_self = kwargs.pop('install_package', True)
- # explicit defaults False so that dependencies are implicit regardless
- # of whether their dependents are implicitly or explicitly installed.
- # Spack ensures root packages of install commands are always marked
- # explicit
- explicit = kwargs.pop('explicit', False)
-
- # For external packages the workflow is simplified: it basically
- # consists of module file generation and registration in the DB
- if self.spec.external:
- return self._process_external_package(explicit)
-
- if self.installed_upstream:
- tty.msg("{0.name} is installed in an upstream Spack instance"
- " at {0.prefix}".format(self))
- # Note this skips all post-install hooks. In the case of modules
- # this is considered correct because we want to retrieve the
- # module from the upstream Spack instance.
- return
-
- partial = self.check_for_unfinished_installation(keep_prefix, restage)
-
- # Ensure package is not already installed
- layout = spack.store.layout
- with spack.store.db.prefix_read_lock(self.spec):
- if partial:
- tty.msg(
- "Continuing from partial install of %s" % self.name)
- elif layout.check_installed(self.spec):
- msg = '{0.name} is already installed in {0.prefix}'
- tty.msg(msg.format(self))
- rec = spack.store.db.get_record(self.spec)
- # In case the stage directory has already been created,
- # this ensures it's removed after we checked that the spec
- # is installed
- if keep_stage is False:
- self.stage.destroy()
- return self._update_explicit_entry_in_db(rec, explicit)
-
- self._do_install_pop_kwargs(kwargs)
-
- # First, install dependencies recursively.
- if install_deps:
- tty.debug('Installing {0} dependencies'.format(self.name))
- dep_kwargs = kwargs.copy()
- dep_kwargs['explicit'] = False
- dep_kwargs['install_deps'] = False
- for dep in self.spec.traverse(order='post', root=False):
- if spack.config.get('config:install_missing_compilers', False):
- Package._install_bootstrap_compiler(dep.package, **kwargs)
- dep.package.do_install(**dep_kwargs)
-
- # Then install the compiler if it is not already installed.
- if install_deps:
- Package._install_bootstrap_compiler(self, **kwargs)
-
- if not install_self:
- return
-
- # Then, install the package proper
- tty.msg(colorize('@*{Installing} @*g{%s}' % self.name))
-
- if kwargs.get('use_cache', True):
- if self.try_install_from_binary_cache(
- explicit, unsigned=kwargs.get('unsigned', False)):
- tty.msg('Successfully installed %s from binary cache'
- % self.name)
- print_pkg(self.prefix)
- spack.hooks.post_install(self.spec)
- return
- elif kwargs.get('cache_only', False):
- tty.die('No binary for %s found and cache-only specified'
- % self.name)
-
- tty.msg('No binary for %s found: installing from source'
- % self.name)
+ Args:"""
+ builder = PackageInstaller(self)
+ builder.install(**kwargs)
- # Set run_tests flag before starting build
- self.run_tests = (tests is True or
- tests and self.name in tests)
-
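The `run_tests` assignment above accepts either a boolean or a collection of package names. A stand-alone sketch of that flag logic (the `run_tests_for` helper name is illustrative, not part of Spack's API):

```python
def run_tests_for(name, tests):
    """Decide whether to run tests for a package.

    `tests` may be False (test nothing), True (test everything),
    or an iterable of package names to test selectively.
    """
    return bool(tests is True or (tests and name in tests))

print(run_tests_for('zlib', True))               # True: test everything
print(run_tests_for('zlib', ['zlib', 'mpich']))  # True: named explicitly
print(run_tests_for('zlib', False))              # False: testing disabled
```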
- # Then install the package itself.
- def build_process():
- """This implements the process forked for each build.
-
- Has its own process and python module space set up by
- build_environment.fork().
-
- This function's return value is returned to the parent process.
- """
-
- start_time = time.time()
- if not fake:
- if not skip_patch:
- self.do_patch()
- else:
- self.do_stage()
-
- tty.msg(
- 'Building {0} [{1}]'.format(self.name, self.build_system_class)
- )
-
- # get verbosity from do_install() parameter or saved value
- echo = verbose
- if PackageBase._verbose is not None:
- echo = PackageBase._verbose
-
- self.stage.keep = keep_stage
- with self._stage_and_write_lock():
- # Run the pre-install hook in the child process after
- # the directory is created.
- spack.hooks.pre_install(self.spec)
- if fake:
- self.do_fake_install()
- else:
- source_path = self.stage.source_path
- if install_source and os.path.isdir(source_path):
- src_target = os.path.join(
- self.spec.prefix, 'share', self.name, 'src')
- tty.msg('Copying source to {0}'.format(src_target))
- install_tree(self.stage.source_path, src_target)
-
- # Do the real install in the source directory.
- with working_dir(self.stage.source_path):
- # Save the build environment in a file before building.
- dump_environment(self.env_path)
-
- # cache debug settings
- debug_enabled = tty.is_debug()
-
- # Spawn a daemon that reads from a pipe and redirects
- # everything to log_path
- with log_output(self.log_path, echo, True) as logger:
- for phase_name, phase_attr in zip(
- self.phases, self._InstallPhase_phases):
-
- with logger.force_echo():
- inner_debug = tty.is_debug()
- tty.set_debug(debug_enabled)
- tty.msg(
- "Executing phase: '%s'" % phase_name)
- tty.set_debug(inner_debug)
-
- # Redirect stdout and stderr to daemon pipe
- phase = getattr(self, phase_attr)
- phase(self.spec, self.prefix)
-
- echo = logger.echo
- self.log()
-
- # Run post install hooks before build stage is removed.
- spack.hooks.post_install(self.spec)
-
- # Stop timer.
- self._total_time = time.time() - start_time
- build_time = self._total_time - self._fetch_time
-
- tty.msg("Successfully installed %s" % self.name,
- "Fetch: %s. Build: %s. Total: %s." %
- (_hms(self._fetch_time), _hms(build_time),
- _hms(self._total_time)))
- print_pkg(self.prefix)
-
- # preserve verbosity across runs
- return echo
-
- # hook that allows tests to inspect this Package before installation
- # see unit_test_check() docs.
- if not self.unit_test_check():
- return
-
- try:
- # Create the install prefix and fork the build process.
- if not os.path.exists(self.prefix):
- spack.store.layout.create_install_directory(self.spec)
- else:
- # Set the proper group for the prefix
- group = get_package_group(self.spec)
- if group:
- chgrp(self.prefix, group)
- # Set the proper permissions.
- # This has to be done after group because changing groups blows
- # away the sticky group bit on the directory
- mode = os.stat(self.prefix).st_mode
- perms = get_package_dir_permissions(self.spec)
- if mode != perms:
- os.chmod(self.prefix, perms)
-
- # Ensure the metadata path exists as well
- mkdirp(spack.store.layout.metadata_path(self.spec), mode=perms)
-
- # Fork a child to do the actual installation.
- # Preserve verbosity settings across installs.
- PackageBase._verbose = spack.build_environment.fork(
- self, build_process, dirty=dirty, fake=fake)
-
- # If we installed then we should keep the prefix
- keep_prefix = self.last_phase is None or keep_prefix
- # note: PARENT of the build process adds the new package to
- # the database, so that we don't need to re-read from file.
- spack.store.db.add(
- self.spec, spack.store.layout, explicit=explicit
- )
- except spack.directory_layout.InstallDirectoryAlreadyExistsError:
- # Abort install if install directory exists.
- # But do NOT remove it (you'd be overwriting someone else's stuff)
- tty.warn("Keeping existing install prefix in place.")
- raise
- except StopIteration as e:
- # A StopIteration exception means that a client
- # asked do_install to stop early
- tty.msg(e.message)
- tty.msg(
- 'Package stage directory : {0}'.format(self.stage.source_path)
- )
- finally:
- # Remove the install prefix if anything went wrong during install.
- if not keep_prefix:
- self.remove_prefix()
-
- # The subprocess *may* have removed the build stage. Mark it
- # not created so that the next time self.stage is invoked, we
- # check the filesystem for it.
- self.stage.created = False
-
- @staticmethod
- def _install_bootstrap_compiler(pkg, **install_kwargs):
- tty.debug('Bootstrapping {0} compiler for {1}'.format(
- pkg.spec.compiler, pkg.name
- ))
- comp_kwargs = install_kwargs.copy()
- comp_kwargs['explicit'] = False
- comp_kwargs['install_deps'] = True
- pkg.bootstrap_compiler(**comp_kwargs)
+ do_install.__doc__ += install_args_docstring
def unit_test_check(self):
"""Hook for unit tests to assert things about package internals.
@@ -1855,125 +1457,6 @@ class PackageBase(with_metaclass(PackageMeta, PackageViewMixin, object)):
"""
return True
- def check_for_unfinished_installation(
- self, keep_prefix=False, restage=False):
- """Check for leftover files from partially-completed prior install to
- prepare for a new install attempt.
-
- Options control whether these files are reused (vs. destroyed).
-
- Args:
- keep_prefix (bool): True if the installation prefix needs to be
- kept, False otherwise
- restage (bool): False if the stage has to be kept, True otherwise
-
- Returns:
- True if the prefix exists but the install is not complete, False
- otherwise.
- """
- if self.spec.external:
- raise ExternalPackageError("Attempted to repair external spec %s" %
- self.spec.name)
-
- with spack.store.db.prefix_write_lock(self.spec):
- try:
- record = spack.store.db.get_record(self.spec)
- installed_in_db = record.installed if record else False
- except KeyError:
- installed_in_db = False
-
- partial = False
- if not installed_in_db and os.path.isdir(self.prefix):
- if not keep_prefix:
- self.remove_prefix()
- else:
- partial = True
-
- if restage and self.stage.managed_by_spack:
- self.stage.destroy()
-
- return partial
-
- def _do_install_pop_kwargs(self, kwargs):
- """Pops kwargs from do_install before starting the installation
-
- Args:
- kwargs:
- 'stop_at': last installation phase to be executed (or None)
-
- """
- self.last_phase = kwargs.pop('stop_at', None)
- if self.last_phase is not None and self.last_phase not in self.phases:
- tty.die('\'{0}\' is not an allowed phase for package {1}'
- .format(self.last_phase, self.name))
-
- def log(self):
- """Copy provenance into the install directory on success."""
- packages_dir = spack.store.layout.build_packages_path(self.spec)
-
- # Remove first if we're overwriting another build
- # (can happen with spack setup)
- try:
- # log and env install paths are inside this
- shutil.rmtree(packages_dir)
- except Exception as e:
- # FIXME : this potentially catches too many things...
- tty.debug(e)
-
- # Archive the whole stdout + stderr for the package
- install(self.log_path, self.install_log_path)
-
- # Archive the environment used for the build
- install(self.env_path, self.install_env_path)
-
- # Finally, archive files that are specific to each package
- with working_dir(self.stage.path):
- errors = StringIO()
- target_dir = os.path.join(
- spack.store.layout.metadata_path(self.spec),
- 'archived-files')
-
- for glob_expr in self.archive_files:
- # Check that we are trying to copy things that are
- # in the stage tree (not arbitrary files)
- abs_expr = os.path.realpath(glob_expr)
- if os.path.realpath(self.stage.path) not in abs_expr:
- errors.write(
- '[OUTSIDE SOURCE PATH]: {0}\n'.format(glob_expr)
- )
- continue
- # Now that we are sure that the path is within the correct
- # folder, make it relative and check for matches
- if os.path.isabs(glob_expr):
- glob_expr = os.path.relpath(
- glob_expr, self.stage.path
- )
- files = glob.glob(glob_expr)
- for f in files:
- try:
- target = os.path.join(target_dir, f)
- # We must ensure that the directory exists before
- # copying a file in
- mkdirp(os.path.dirname(target))
- install(f, target)
- except Exception as e:
- tty.debug(e)
-
- # Try to be conservative here: avoid discarding the
- # whole install procedure just because copying a
- # single file failed
- errors.write('[FAILED TO ARCHIVE]: {0}'.format(f))
-
- if errors.getvalue():
- error_file = os.path.join(target_dir, 'errors.txt')
- mkdirp(target_dir)
- with open(error_file, 'w') as err:
- err.write(errors.getvalue())
- tty.warn('Errors occurred when archiving files.\n\t'
- 'See: {0}'.format(error_file))
-
- dump_packages(self.spec, packages_dir)
-
def sanity_check_prefix(self):
"""This function checks whether install succeeded."""
@@ -2539,8 +2022,6 @@ class PackageBase(with_metaclass(PackageMeta, PackageViewMixin, object)):
"""
return " ".join("-Wl,-rpath,%s" % p for p in self.rpath)
- build_time_test_callbacks = None
-
@on_package_attributes(run_tests=True)
def _run_default_build_time_test_callbacks(self):
"""Tries to call all the methods that are listed in the attribute
@@ -2560,8 +2041,6 @@ class PackageBase(with_metaclass(PackageMeta, PackageViewMixin, object)):
msg = 'RUN-TESTS: method not implemented [{0}]'
tty.warn(msg.format(name))
- install_time_test_callbacks = None
-
@on_package_attributes(run_tests=True)
def _run_default_install_time_test_callbacks(self):
"""Tries to call all the methods that are listed in the attribute
@@ -2652,54 +2131,6 @@ def flatten_dependencies(spec, flat_dir):
dep_files.merge(flat_dir + '/' + name)
-def dump_packages(spec, path):
- """Dump all package information for a spec and its dependencies.
-
- This creates a package repository within path for every
- namespace in the spec DAG, and fills the repos with package
- files and patch files for every node in the DAG.
- """
- mkdirp(path)
-
- # Copy in package.py files from any dependencies.
- # Note that we copy them in as they are in the *install* directory
- # NOT as they are in the repository, because we want a snapshot of
- # how *this* particular build was done.
- for node in spec.traverse(deptype=all):
- if node is not spec:
- # Locate the dependency package in the install tree and find
- # its provenance information.
- source = spack.store.layout.build_packages_path(node)
- source_repo_root = os.path.join(source, node.namespace)
-
- # There's no provenance installed for the source package. Skip it.
- # User can always get something current from the builtin repo.
- if not os.path.isdir(source_repo_root):
- continue
-
- # Create a source repo and get the pkg directory out of it.
- try:
- source_repo = spack.repo.Repo(source_repo_root)
- source_pkg_dir = source_repo.dirname_for_package_name(
- node.name)
- except spack.repo.RepoError:
- tty.warn("Warning: Couldn't copy in provenance for %s" %
- node.name)
-
- # Create a destination repository
- dest_repo_root = os.path.join(path, node.namespace)
- if not os.path.exists(dest_repo_root):
- spack.repo.create_repo(dest_repo_root)
- repo = spack.repo.Repo(dest_repo_root)
-
- # Get the location of the package in the dest repo.
- dest_pkg_dir = repo.dirname_for_package_name(node.name)
- if node is not spec:
- install_tree(source_pkg_dir, dest_pkg_dir)
- else:
- spack.repo.path.dump_provenance(node, dest_pkg_dir)
-
-
def possible_dependencies(*pkg_or_spec, **kwargs):
"""Get the possible dependencies of a number of packages.
@@ -2729,28 +2160,6 @@ def possible_dependencies(*pkg_or_spec, **kwargs):
return visited
-def print_pkg(message):
- """Outputs a message with a package icon."""
- from llnl.util.tty.color import cwrite
- cwrite('@*g{[+]} ')
- print(message)
-
-
-def _hms(seconds):
- """Convert time in seconds to hours, minutes, seconds."""
- m, s = divmod(seconds, 60)
- h, m = divmod(m, 60)
-
- parts = []
- if h:
- parts.append("%dh" % h)
- if m:
- parts.append("%dm" % m)
- if s:
- parts.append("%.2fs" % s)
- return ' '.join(parts)
-
-
class FetchError(spack.error.SpackError):
"""Raised when something goes wrong during fetch."""
@@ -2758,17 +2167,6 @@ class FetchError(spack.error.SpackError):
super(FetchError, self).__init__(message, long_msg)
-class InstallError(spack.error.SpackError):
- """Raised when something goes wrong during install or uninstall."""
-
- def __init__(self, message, long_msg=None):
- super(InstallError, self).__init__(message, long_msg)
-
-
-class ExternalPackageError(InstallError):
- """Raised by install() when a package is only for external use."""
-
-
class PackageStillNeededError(InstallError):
"""Raised when package is still needed by another on uninstall."""
def __init__(self, spec, dependents):
diff --git a/lib/spack/spack/pkgkit.py b/lib/spack/spack/pkgkit.py
index a39ede3ab7..c304fb4fca 100644
--- a/lib/spack/spack/pkgkit.py
+++ b/lib/spack/spack/pkgkit.py
@@ -50,7 +50,10 @@ from spack.util.executable import *
from spack.package import \
install_dependency_symlinks, flatten_dependencies, \
- DependencyConflictError, InstallError, ExternalPackageError
+ DependencyConflictError
+
+from spack.installer import \
+ ExternalPackageError, InstallError, InstallLockError, UpstreamPackageError
from spack.variant import any_combination_of, auto_or_any_combination_of
from spack.variant import disjoint_sets
diff --git a/lib/spack/spack/report.py b/lib/spack/spack/report.py
index 04a20f36b8..e500f9fc05 100644
--- a/lib/spack/spack/report.py
+++ b/lib/spack/spack/report.py
@@ -44,7 +44,8 @@ def fetch_package_log(pkg):
class InfoCollector(object):
- """Decorates PackageBase.do_install to collect information
+ """Decorates PackageInstaller._install_task, which is called by
+ PackageBase.do_install for each spec, to collect information
on the installation of certain specs.
When exiting the context this change will be rolled-back.
@@ -57,8 +58,8 @@ class InfoCollector(object):
specs (list of Spec): specs whose install information will
be recorded
"""
- #: Backup of PackageBase.do_install
- _backup_do_install = spack.package.PackageBase.do_install
+ #: Backup of PackageInstaller._install_task
+ _backup__install_task = spack.package.PackageInstaller._install_task
def __init__(self, specs):
#: Specs that will be installed
@@ -108,15 +109,16 @@ class InfoCollector(object):
}
spec['packages'].append(package)
- def gather_info(do_install):
- """Decorates do_install to gather useful information for
- a CI report.
+ def gather_info(_install_task):
+ """Decorates PackageInstaller._install_task to gather useful
+ information on PackageBase.do_install for a CI report.
It's defined here to capture the environment and build
this context as the installations proceed.
"""
- @functools.wraps(do_install)
- def wrapper(pkg, *args, **kwargs):
+ @functools.wraps(_install_task)
+ def wrapper(installer, task, *args, **kwargs):
+ pkg = task.pkg
# We accounted before for what is already installed
installed_on_entry = pkg.installed
@@ -134,7 +136,7 @@ class InfoCollector(object):
value = None
try:
- value = do_install(pkg, *args, **kwargs)
+ value = _install_task(installer, task, *args, **kwargs)
package['result'] = 'success'
package['stdout'] = fetch_package_log(pkg)
package['installed_from_binary_cache'] = \
@@ -182,14 +184,15 @@ class InfoCollector(object):
return wrapper
- spack.package.PackageBase.do_install = gather_info(
- spack.package.PackageBase.do_install
+ spack.package.PackageInstaller._install_task = gather_info(
+ spack.package.PackageInstaller._install_task
)
def __exit__(self, exc_type, exc_val, exc_tb):
- # Restore the original method in PackageBase
- spack.package.PackageBase.do_install = InfoCollector._backup_do_install
+ # Restore the original method in PackageInstaller
+ spack.package.PackageInstaller._install_task = \
+ InfoCollector._backup__install_task
for spec in self.specs:
spec['npackages'] = len(spec['packages'])
@@ -208,9 +211,9 @@ class collect_info(object):
"""Collects information to build a report while installing
and dumps it on exit.
- If the format name is not ``None``, this context manager
- decorates PackageBase.do_install when entering the context
- and unrolls the change when exiting.
+ If the format name is not ``None``, this context manager decorates
+ PackageInstaller._install_task when entering the context for a
+ PackageBase.do_install operation and unrolls the change when exiting.
Within the context, only the specs that are passed to it
on initialization will be recorded for the report. Data from
@@ -255,14 +258,14 @@ class collect_info(object):
def __enter__(self):
if self.format_name:
- # Start the collector and patch PackageBase.do_install
+ # Start the collector and patch PackageInstaller._install_task
self.collector = InfoCollector(self.specs)
self.collector.__enter__()
def __exit__(self, exc_type, exc_val, exc_tb):
if self.format_name:
# Close the collector and restore the
- # original PackageBase.do_install
+ # original PackageInstaller._install_task
self.collector.__exit__(exc_type, exc_val, exc_tb)
report_data = {'specs': self.collector.specs}
diff --git a/lib/spack/spack/stage.py b/lib/spack/spack/stage.py
index 917d20aa04..b445638228 100644
--- a/lib/spack/spack/stage.py
+++ b/lib/spack/spack/stage.py
@@ -307,8 +307,9 @@ class Stage(object):
lock_id = prefix_bits(sha1, bit_length(sys.maxsize))
stage_lock_path = os.path.join(get_stage_root(), '.lock')
+ tty.debug("Creating stage lock {0}".format(self.name))
Stage.stage_locks[self.name] = spack.util.lock.Lock(
- stage_lock_path, lock_id, 1)
+ stage_lock_path, lock_id, 1, desc=self.name)
self._lock = Stage.stage_locks[self.name]
diff --git a/lib/spack/spack/test/buildtask.py b/lib/spack/spack/test/buildtask.py
new file mode 100644
index 0000000000..dcf3d29a6f
--- /dev/null
+++ b/lib/spack/spack/test/buildtask.py
@@ -0,0 +1,66 @@
+# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
+# Spack Project Developers. See the top-level COPYRIGHT file for details.
+#
+# SPDX-License-Identifier: (Apache-2.0 OR MIT)
+
+import pytest
+
+import spack.installer as inst
+import spack.repo
+import spack.spec
+
+
+def test_build_task_errors(install_mockery):
+ with pytest.raises(ValueError, match='must be a package'):
+ inst.BuildTask('abc', False, 0, 0, 0, [])
+
+ pkg = spack.repo.get('trivial-install-test-package')
+ with pytest.raises(ValueError, match='must have a concrete spec'):
+ inst.BuildTask(pkg, False, 0, 0, 0, [])
+
+ spec = spack.spec.Spec('trivial-install-test-package')
+ spec.concretize()
+ assert spec.concrete
+ with pytest.raises(inst.InstallError, match='Cannot create a build task'):
+ inst.BuildTask(spec.package, False, 0, 0, inst.STATUS_REMOVED, [])
+
+
+def test_build_task_basics(install_mockery):
+ spec = spack.spec.Spec('dependent-install')
+ spec.concretize()
+ assert spec.concrete
+
+ # Ensure key properties match expectations
+ task = inst.BuildTask(spec.package, False, 0, 0, inst.STATUS_ADDED, [])
+ assert task.priority == len(task.uninstalled_deps)
+ assert task.key == (task.priority, task.sequence)
+
+ # Ensure flagging installed works as expected
+ assert len(task.uninstalled_deps) > 0
+ assert task.dependencies == task.uninstalled_deps
+ task.flag_installed(task.dependencies)
+ assert len(task.uninstalled_deps) == 0
+ assert task.priority == 0
+
+
+def test_build_task_strings(install_mockery):
+ """Tests of build_task repr and str for coverage purposes."""
+ # Using a package with one dependency
+ spec = spack.spec.Spec('dependent-install')
+ spec.concretize()
+ assert spec.concrete
+
+ # Ensure key properties match expectations
+ task = inst.BuildTask(spec.package, False, 0, 0, inst.STATUS_ADDED, [])
+
+ # Cover __repr__
+ irep = task.__repr__()
+ assert irep.startswith(task.__class__.__name__)
+ assert "status='queued'" in irep # == STATUS_ADDED
+ assert "sequence=" in irep
+
+ # Cover __str__
+ istr = str(task)
+ assert "status=queued" in istr # == STATUS_ADDED
+ assert "#dependencies=1" in istr
+ assert "priority=" in istr
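The invariants these tests assert (priority equals the number of uninstalled dependencies; tasks are ordered by `(priority, sequence)`) can be reproduced with a toy stand-in. This sketch is illustrative only — `MiniTask` is not the actual `spack.installer.BuildTask` API:

```python
import heapq
import itertools

_seq = itertools.count()  # monotonic tie-breaker, like a task sequence number


class MiniTask:
    """Toy build task: priority == number of uninstalled dependencies."""

    def __init__(self, name, deps):
        self.name = name
        self.uninstalled_deps = set(deps)
        self.sequence = next(_seq)

    @property
    def priority(self):
        return len(self.uninstalled_deps)

    @property
    def key(self):
        # Specs with fewer remaining dependencies build first;
        # sequence breaks ties deterministically.
        return (self.priority, self.sequence)

    def flag_installed(self, installed):
        self.uninstalled_deps -= set(installed)


# Bottom-up DAG processing: a leaf (priority 0) pops before its dependent.
tasks = [MiniTask('dependent-install', ['dependency-install']),
         MiniTask('dependency-install', [])]
heap = [(t.key, t) for t in tasks]
heapq.heapify(heap)
first = heapq.heappop(heap)[1]
print(first.name)  # the dependency, with no uninstalled deps, comes first
```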
diff --git a/lib/spack/spack/test/cmd/env.py b/lib/spack/spack/test/cmd/env.py
index 9e8c424ce0..841e6e20c8 100644
--- a/lib/spack/spack/test/cmd/env.py
+++ b/lib/spack/spack/test/cmd/env.py
@@ -169,9 +169,11 @@ def test_env_install_same_spec_twice(install_mockery, mock_fetch, capfd):
e = ev.read('test')
with capfd.disabled():
with e:
+ # The first installation outputs the package prefix
install('cmake-client')
+ # The second installation attempt will also update the view
out = install('cmake-client')
- assert 'is already installed in' in out
+ assert 'Updating view at' in out
def test_remove_after_concretize():
diff --git a/lib/spack/spack/test/cmd/install.py b/lib/spack/spack/test/cmd/install.py
index 4c6c0860aa..2697a539bf 100644
--- a/lib/spack/spack/test/cmd/install.py
+++ b/lib/spack/spack/test/cmd/install.py
@@ -139,8 +139,7 @@ def test_install_output_on_build_error(mock_packages, mock_archive, mock_fetch,
# capfd interferes with Spack's capturing
with capfd.disabled():
out = install('build-error', fail_on_error=False)
- assert isinstance(install.error, spack.build_environment.ChildError)
- assert install.error.name == 'ProcessError'
+ assert 'ProcessError' in out
assert 'configure: error: in /path/to/some/file:' in out
assert 'configure: error: cannot run C compiled programs.' in out
@@ -177,9 +176,10 @@ def test_show_log_on_error(mock_packages, mock_archive, mock_fetch,
assert install.error.pkg.name == 'build-error'
assert 'Full build log:' in out
+ # Message shows up for ProcessError (1), ChildError (1), and output (1)
errors = [line for line in out.split('\n')
if 'configure: error: cannot run C compiled programs' in line]
- assert len(errors) == 2
+ assert len(errors) == 3
def test_install_overwrite(
@@ -373,8 +373,12 @@ def test_junit_output_with_errors(
exc_type = getattr(builtins, exc_typename)
raise exc_type(msg)
- monkeypatch.setattr(spack.package.PackageBase, 'do_install', just_throw)
+ monkeypatch.setattr(spack.installer.PackageInstaller, '_install_task',
+ just_throw)
+ # TODO: Why does junit output capture appear to swallow the exception
+ # TODO: as evidenced by the two failing packages getting tagged as
+ # TODO: installed?
with tmpdir.as_cwd():
install('--log-format=junit', '--log-file=test.xml', 'libdwarf')
@@ -384,14 +388,14 @@ def test_junit_output_with_errors(
content = filename.open().read()
- # Count failures and errors correctly
- assert 'tests="1"' in content
+ # Count failures and errors correctly: libdwarf _and_ libelf
+ assert 'tests="2"' in content
assert 'failures="0"' in content
- assert 'errors="1"' in content
+ assert 'errors="2"' in content
# We want to have both stdout and stderr
assert '<system-out>' in content
- assert msg in content
+ assert 'error message="{0}"'.format(msg) in content
@pytest.mark.usefixtures('noop_install', 'config')
@@ -478,9 +482,8 @@ def test_cdash_upload_build_error(tmpdir, mock_fetch, install_mockery,
@pytest.mark.disable_clean_stage_check
-def test_cdash_upload_clean_build(tmpdir, mock_fetch, install_mockery,
- capfd):
- # capfd interferes with Spack's capturing
+def test_cdash_upload_clean_build(tmpdir, mock_fetch, install_mockery, capfd):
+ # capfd interferes with Spack's capturing of e.g., Build.xml output
with capfd.disabled():
with tmpdir.as_cwd():
install(
@@ -498,7 +501,7 @@ def test_cdash_upload_clean_build(tmpdir, mock_fetch, install_mockery,
@pytest.mark.disable_clean_stage_check
def test_cdash_upload_extra_params(tmpdir, mock_fetch, install_mockery, capfd):
- # capfd interferes with Spack's capturing
+ # capfd interferes with Spack's capture of e.g., Build.xml output
with capfd.disabled():
with tmpdir.as_cwd():
install(
@@ -520,7 +523,7 @@ def test_cdash_upload_extra_params(tmpdir, mock_fetch, install_mockery, capfd):
@pytest.mark.disable_clean_stage_check
def test_cdash_buildstamp_param(tmpdir, mock_fetch, install_mockery, capfd):
- # capfd interferes with Spack's capturing
+ # capfd interferes with Spack's capture of e.g., Build.xml output
with capfd.disabled():
with tmpdir.as_cwd():
cdash_track = 'some_mocked_track'
@@ -569,7 +572,6 @@ def test_cdash_install_from_spec_yaml(tmpdir, mock_fetch, install_mockery,
report_file = report_dir.join('a_Configure.xml')
assert report_file in report_dir.listdir()
content = report_file.open().read()
- import re
install_command_regex = re.compile(
r'<ConfigureCommand>(.+)</ConfigureCommand>',
re.MULTILINE | re.DOTALL)
@@ -599,6 +601,7 @@ def test_build_warning_output(tmpdir, mock_fetch, install_mockery, capfd):
msg = ''
try:
install('build-warnings')
+ assert False, "no exception was raised!"
except spack.build_environment.ChildError as e:
msg = e.long_message
@@ -607,12 +610,16 @@ def test_build_warning_output(tmpdir, mock_fetch, install_mockery, capfd):
def test_cache_only_fails(tmpdir, mock_fetch, install_mockery, capfd):
+ msg = ''
with capfd.disabled():
try:
install('--cache-only', 'libdwarf')
- assert False
- except spack.main.SpackCommandError:
- pass
+ except spack.installer.InstallError as e:
+ msg = str(e)
+
+ # libelf from cache failed to install, which automatically removed
+ # the libdwarf build task and flagged the package as failed to install.
+ assert 'Installation of libdwarf failed' in msg
def test_install_only_dependencies(tmpdir, mock_fetch, install_mockery):
diff --git a/lib/spack/spack/test/conftest.py b/lib/spack/spack/test/conftest.py
index 97bbb69b52..04e870d336 100644
--- a/lib/spack/spack/test/conftest.py
+++ b/lib/spack/spack/test/conftest.py
@@ -28,6 +28,7 @@ import spack.caches
import spack.database
import spack.directory_layout
import spack.environment as ev
+import spack.package
import spack.package_prefs
import spack.paths
import spack.platforms.test
@@ -38,7 +39,6 @@ import spack.util.gpg
from spack.util.pattern import Bunch
from spack.dependency import Dependency
-from spack.package import PackageBase
from spack.fetch_strategy import FetchStrategyComposite, URLFetchStrategy
from spack.fetch_strategy import FetchError
from spack.spec import Spec
@@ -329,8 +329,18 @@ def mock_repo_path():
yield spack.repo.RepoPath(spack.paths.mock_packages_path)
+@pytest.fixture
+def mock_pkg_install(monkeypatch):
+ def _pkg_install_fn(pkg, spec, prefix):
+ # sanity_check_prefix requires something in the install directory
+ mkdirp(prefix.bin)
+
+ monkeypatch.setattr(spack.package.PackageBase, 'install', _pkg_install_fn,
+ raising=False)
+
+
@pytest.fixture(scope='function')
-def mock_packages(mock_repo_path):
+def mock_packages(mock_repo_path, mock_pkg_install):
"""Use the 'builtin.mock' repository instead of 'builtin'"""
with use_repo(mock_repo_path):
yield mock_repo_path
@@ -599,10 +609,10 @@ def mock_fetch(mock_archive):
def fake_fn(self):
return fetcher
- orig_fn = PackageBase.fetcher
- PackageBase.fetcher = fake_fn
+ orig_fn = spack.package.PackageBase.fetcher
+ spack.package.PackageBase.fetcher = fake_fn
yield
- PackageBase.fetcher = orig_fn
+ spack.package.PackageBase.fetcher = orig_fn
class MockLayout(object):
diff --git a/lib/spack/spack/test/database.py b/lib/spack/spack/test/database.py
index 1af125a723..8f0f4dff6a 100644
--- a/lib/spack/spack/test/database.py
+++ b/lib/spack/spack/test/database.py
@@ -14,6 +14,7 @@ import os
import pytest
import json
+import llnl.util.lock as lk
from llnl.util.tty.colify import colify
import spack.repo
@@ -749,3 +750,118 @@ def test_query_spec_with_non_conditional_virtual_dependency(database):
# dependency that are not conditional on variants
results = spack.store.db.query_local('mpileaks ^mpich')
assert len(results) == 1
+
+
+def test_failed_spec_path_error(database):
+ """Ensure spec not concrete check is covered."""
+ s = spack.spec.Spec('a')
+    with pytest.raises(ValueError, match='Concrete spec required'):
+ spack.store.db._failed_spec_path(s)
+
+
+@pytest.mark.db
+def test_clear_failure_keep(mutable_database, monkeypatch, capfd):
+ """Add test coverage for clear_failure operation when to be retained."""
+ def _is(db, spec):
+ return True
+
+ # Pretend the spec has been failure locked
+ monkeypatch.setattr(spack.database.Database, 'prefix_failure_locked', _is)
+
+ s = spack.spec.Spec('a')
+ spack.store.db.clear_failure(s)
+ out = capfd.readouterr()[0]
+ assert 'Retaining failure marking' in out
+
+
+@pytest.mark.db
+def test_clear_failure_forced(mutable_database, monkeypatch, capfd):
+ """Add test coverage for clear_failure operation when force."""
+ def _is(db, spec):
+ return True
+
+ # Pretend the spec has been failure locked
+ monkeypatch.setattr(spack.database.Database, 'prefix_failure_locked', _is)
+    # Ensure OSError is raised when removing the non-existent marking
+ monkeypatch.setattr(spack.database.Database, 'prefix_failure_marked', _is)
+
+ s = spack.spec.Spec('a').concretized()
+ spack.store.db.clear_failure(s, force=True)
+ out = capfd.readouterr()[1]
+ assert 'Removing failure marking despite lock' in out
+ assert 'Unable to remove failure marking' in out
+
+
+@pytest.mark.db
+def test_mark_failed(mutable_database, monkeypatch, tmpdir, capsys):
+ """Add coverage to mark_failed."""
+ def _raise_exc(lock):
+ raise lk.LockTimeoutError('Mock acquire_write failure')
+
+ # Ensure attempt to acquire write lock on the mark raises the exception
+ monkeypatch.setattr(lk.Lock, 'acquire_write', _raise_exc)
+
+ with tmpdir.as_cwd():
+ s = spack.spec.Spec('a').concretized()
+ spack.store.db.mark_failed(s)
+
+ out = str(capsys.readouterr()[1])
+ assert 'Unable to mark a as failed' in out
+
+ # Clean up the failure mark to ensure it does not interfere with other
+ # tests using the same spec.
+ del spack.store.db._prefix_failures[s.prefix]
+
+
+@pytest.mark.db
+def test_prefix_failed(mutable_database, monkeypatch):
+ """Add coverage to prefix_failed operation."""
+ def _is(db, spec):
+ return True
+
+ s = spack.spec.Spec('a').concretized()
+
+ # Confirm the spec is not already marked as failed
+ assert not spack.store.db.prefix_failed(s)
+
+ # Check that a failure entry is sufficient
+ spack.store.db._prefix_failures[s.prefix] = None
+ assert spack.store.db.prefix_failed(s)
+
+ # Remove the entry and check again
+ del spack.store.db._prefix_failures[s.prefix]
+ assert not spack.store.db.prefix_failed(s)
+
+ # Now pretend that the prefix failure is locked
+ monkeypatch.setattr(spack.database.Database, 'prefix_failure_locked', _is)
+ assert spack.store.db.prefix_failed(s)
+
+
+def test_prefix_read_lock_error(mutable_database, monkeypatch):
+ """Cover the prefix read lock exception."""
+ def _raise(db, spec):
+ raise lk.LockError('Mock lock error')
+
+ s = spack.spec.Spec('a').concretized()
+
+ # Ensure subsequent lock operations fail
+ monkeypatch.setattr(lk.Lock, 'acquire_read', _raise)
+
+ with pytest.raises(Exception):
+ with spack.store.db.prefix_read_lock(s):
+ assert False
+
+
+def test_prefix_write_lock_error(mutable_database, monkeypatch):
+ """Cover the prefix write lock exception."""
+ def _raise(db, spec):
+ raise lk.LockError('Mock lock error')
+
+ s = spack.spec.Spec('a').concretized()
+
+ # Ensure subsequent lock operations fail
+ monkeypatch.setattr(lk.Lock, 'acquire_write', _raise)
+
+ with pytest.raises(Exception):
+ with spack.store.db.prefix_write_lock(s):
+ assert False
diff --git a/lib/spack/spack/test/install.py b/lib/spack/spack/test/install.py
index 914ae527a0..f53a760f70 100644
--- a/lib/spack/spack/test/install.py
+++ b/lib/spack/spack/test/install.py
@@ -100,6 +100,9 @@ def test_partial_install_delete_prefix_and_stage(install_mockery, mock_fetch):
rm_prefix_checker = RemovePrefixChecker(instance_rm_prefix)
spack.package.Package.remove_prefix = rm_prefix_checker.remove_prefix
+ # must clear failure markings for the package before re-installing it
+ spack.store.db.clear_failure(spec, True)
+
pkg.succeed = True
pkg.stage = MockStage(pkg.stage)
@@ -264,6 +267,9 @@ def test_partial_install_keep_prefix(install_mockery, mock_fetch):
pkg.do_install(keep_prefix=True)
assert os.path.exists(pkg.prefix)
+ # must clear failure markings for the package before re-installing it
+ spack.store.db.clear_failure(spec, True)
+
pkg.succeed = True # make the build succeed
pkg.stage = MockStage(pkg.stage)
pkg.do_install(keep_prefix=True)
@@ -300,12 +306,13 @@ def test_store(install_mockery, mock_fetch):
@pytest.mark.disable_clean_stage_check
-def test_failing_build(install_mockery, mock_fetch):
+def test_failing_build(install_mockery, mock_fetch, capfd):
spec = Spec('failing-build').concretized()
pkg = spec.package
with pytest.raises(spack.build_environment.ChildError):
pkg.do_install()
+ assert 'InstallError: Expected Failure' in capfd.readouterr()[0]
class MockInstallError(spack.error.SpackError):
@@ -432,7 +439,7 @@ def test_pkg_install_log(install_mockery):
# Attempt installing log without the build log file
with pytest.raises(IOError, match="No such file or directory"):
- spec.package.log()
+ spack.installer.log(spec.package)
# Set up mock build files and try again
log_path = spec.package.log_path
@@ -445,7 +452,7 @@ def test_pkg_install_log(install_mockery):
install_path = os.path.dirname(spec.package.install_log_path)
mkdirp(install_path)
- spec.package.log()
+ spack.installer.log(spec.package)
assert os.path.exists(spec.package.install_log_path)
assert os.path.exists(spec.package.install_env_path)
@@ -469,3 +476,14 @@ def test_unconcretized_install(install_mockery, mock_fetch, mock_packages):
with pytest.raises(ValueError, match="only patch concrete packages"):
spec.package.do_patch()
+
+
+def test_install_error():
+ try:
+ msg = 'test install error'
+ long_msg = 'this is the long version of test install error'
+ raise InstallError(msg, long_msg=long_msg)
+ except Exception as exc:
+ assert exc.__class__.__name__ == 'InstallError'
+ assert exc.message == msg
+ assert exc.long_message == long_msg
diff --git a/lib/spack/spack/test/installer.py b/lib/spack/spack/test/installer.py
new file mode 100644
index 0000000000..02d10fce41
--- /dev/null
+++ b/lib/spack/spack/test/installer.py
@@ -0,0 +1,579 @@
+# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
+# Spack Project Developers. See the top-level COPYRIGHT file for details.
+#
+# SPDX-License-Identifier: (Apache-2.0 OR MIT)
+
+import os
+import pytest
+
+import llnl.util.tty as tty
+
+import spack.binary_distribution
+import spack.compilers
+import spack.directory_layout as dl
+import spack.installer as inst
+import spack.util.lock as lk
+import spack.repo
+import spack.spec
+
+
+def _noop(*args, **kwargs):
+ """Generic monkeypatch no-op routine."""
+ pass
+
+
+def _none(*args, **kwargs):
+ """Generic monkeypatch function that always returns None."""
+ return None
+
+
+def _true(*args, **kwargs):
+ """Generic monkeypatch function that always returns True."""
+ return True
+
+
+def create_build_task(pkg):
+ """
+    Create a build task for the given (concretized) package
+
+ Args:
+ pkg (PackageBase): concretized package associated with the task
+
+ Return:
+ (BuildTask) A basic package build task
+ """
+ return inst.BuildTask(pkg, False, 0, 0, inst.STATUS_ADDED, [])
+
+
+def create_installer(spec_name):
+ """
+ Create an installer for the named spec
+
+ Args:
+ spec_name (str): Name of the explicit install spec
+
+ Return:
+ spec (Spec): concretized spec
+ installer (PackageInstaller): the associated package installer
+ """
+ spec = spack.spec.Spec(spec_name)
+ spec.concretize()
+ assert spec.concrete
+ return spec, inst.PackageInstaller(spec.package)
+
+
+@pytest.mark.parametrize('sec,result', [
+ (86400, "24h"),
+ (3600, "1h"),
+ (60, "1m"),
+ (1.802, "1.80s"),
+ (3723.456, "1h 2m 3.46s")])
+def test_hms(sec, result):
+ assert inst._hms(sec) == result
+
+
+def test_install_msg():
+ name = 'some-package'
+ pid = 123456
+ expected = "{0}: Installing {1}".format(pid, name)
+ assert inst.install_msg(name, pid) == expected
+
+
+def test_install_from_cache_errors(install_mockery, capsys):
+ """Test to ensure cover _install_from_cache errors."""
+ spec = spack.spec.Spec('trivial-install-test-package')
+ spec.concretize()
+ assert spec.concrete
+
+ # Check with cache-only
+ with pytest.raises(SystemExit):
+ inst._install_from_cache(spec.package, True, True)
+
+ captured = str(capsys.readouterr())
+ assert 'No binary' in captured
+ assert 'found when cache-only specified' in captured
+ assert not spec.package.installed_from_binary_cache
+
+    # Check when we don't expect to install only from the binary cache
+ assert not inst._install_from_cache(spec.package, False, True)
+ assert not spec.package.installed_from_binary_cache
+
+
+def test_install_from_cache_ok(install_mockery, monkeypatch):
+ """Test to ensure cover _install_from_cache to the return."""
+ spec = spack.spec.Spec('trivial-install-test-package')
+ spec.concretize()
+ monkeypatch.setattr(inst, '_try_install_from_binary_cache', _true)
+ monkeypatch.setattr(spack.hooks, 'post_install', _noop)
+
+ assert inst._install_from_cache(spec.package, True, True)
+
+
+def test_process_external_package_module(install_mockery, monkeypatch, capfd):
+ """Test to simply cover the external module message path."""
+ spec = spack.spec.Spec('trivial-install-test-package')
+ spec.concretize()
+ assert spec.concrete
+
+    # Ensure we take the external module path WITHOUT any database changes
+ monkeypatch.setattr(spack.database.Database, 'get_record', _none)
+
+ spec.external_path = '/actual/external/path/not/checked'
+ spec.external_module = 'unchecked_module'
+ inst._process_external_package(spec.package, False)
+
+ out = capfd.readouterr()[0]
+ assert 'has external module in {0}'.format(spec.external_module) in out
+ assert 'is actually installed in {0}'.format(spec.external_path) in out
+
+
+def test_process_binary_cache_tarball_none(install_mockery, monkeypatch,
+ capfd):
+ """Tests to cover _process_binary_cache_tarball when no tarball."""
+ monkeypatch.setattr(spack.binary_distribution, 'download_tarball', _none)
+
+ pkg = spack.repo.get('trivial-install-test-package')
+ assert not inst._process_binary_cache_tarball(pkg, None, False)
+
+ assert 'exists in binary cache but' in capfd.readouterr()[0]
+
+
+def test_process_binary_cache_tarball_tar(install_mockery, monkeypatch, capfd):
+ """Tests to cover _process_binary_cache_tarball with a tar file."""
+ def _spec(spec):
+ return spec
+
+    # Skip binary distribution functionality; assume it is tested elsewhere
+ monkeypatch.setattr(spack.binary_distribution, 'download_tarball', _spec)
+ monkeypatch.setattr(spack.binary_distribution, 'extract_tarball', _noop)
+
+ # Skip database updates
+ monkeypatch.setattr(spack.database.Database, 'add', _noop)
+
+ spec = spack.spec.Spec('a').concretized()
+ assert inst._process_binary_cache_tarball(spec.package, spec, False)
+
+ assert 'Installing a from binary cache' in capfd.readouterr()[0]
+
+
+def test_installer_init_errors(install_mockery):
+ """Test to ensure cover installer constructor errors."""
+ with pytest.raises(ValueError, match='must be a package'):
+ inst.PackageInstaller('abc')
+
+ pkg = spack.repo.get('trivial-install-test-package')
+ with pytest.raises(ValueError, match='Can only install concrete'):
+ inst.PackageInstaller(pkg)
+
+
+def test_installer_strings(install_mockery):
+ """Tests of installer repr and str for coverage purposes."""
+ spec, installer = create_installer('trivial-install-test-package')
+
+ # Cover __repr__
+ irep = installer.__repr__()
+ assert irep.startswith(installer.__class__.__name__)
+ assert "installed=" in irep
+ assert "failed=" in irep
+
+ # Cover __str__
+ istr = str(installer)
+ assert "#tasks=0" in istr
+ assert "installed (0)" in istr
+ assert "failed (0)" in istr
+
+
+def test_installer_last_phase_error(install_mockery, capsys):
+ """Test to cover last phase error."""
+ spec = spack.spec.Spec('trivial-install-test-package')
+ spec.concretize()
+ assert spec.concrete
+ with pytest.raises(SystemExit):
+ installer = inst.PackageInstaller(spec.package)
+ installer.install(stop_at='badphase')
+
+ captured = capsys.readouterr()
+ assert 'is not an allowed phase' in str(captured)
+
+
+def test_installer_ensure_ready_errors(install_mockery):
+ """Test to cover _ensure_ready errors."""
+ spec, installer = create_installer('trivial-install-test-package')
+
+ fmt = r'cannot be installed locally.*{0}'
+ # Force an external package error
+ path, module = spec.external_path, spec.external_module
+ spec.external_path = '/actual/external/path/not/checked'
+ spec.external_module = 'unchecked_module'
+ msg = fmt.format('is external')
+ with pytest.raises(inst.ExternalPackageError, match=msg):
+ installer._ensure_install_ready(spec.package)
+
+ # Force an upstream package error
+ spec.external_path, spec.external_module = path, module
+ spec.package._installed_upstream = True
+ msg = fmt.format('is upstream')
+ with pytest.raises(inst.UpstreamPackageError, match=msg):
+ installer._ensure_install_ready(spec.package)
+
+ # Force an install lock error, which should occur naturally since
+ # we are calling an internal method prior to any lock-related setup
+ spec.package._installed_upstream = False
+ assert len(installer.locks) == 0
+ with pytest.raises(inst.InstallLockError, match=fmt.format('not locked')):
+ installer._ensure_install_ready(spec.package)
+
+
+def test_ensure_locked_have(install_mockery, tmpdir):
+ """Test to cover _ensure_locked when already have lock."""
+ spec, installer = create_installer('trivial-install-test-package')
+
+ with tmpdir.as_cwd():
+ lock = lk.Lock('./test', default_timeout=1e-9, desc='test')
+ lock_type = 'read'
+ tpl = (lock_type, lock)
+ installer.locks[installer.pkg_id] = tpl
+ assert installer._ensure_locked(lock_type, spec.package) == tpl
+
+
+def test_package_id(install_mockery):
+ """Test to cover package_id functionality."""
+ pkg = spack.repo.get('trivial-install-test-package')
+    with pytest.raises(ValueError, match='spec is not concretized'):
+ inst.package_id(pkg)
+
+ spec = spack.spec.Spec('trivial-install-test-package')
+ spec.concretize()
+ assert spec.concrete
+ pkg = spec.package
+ assert pkg.name in inst.package_id(pkg)
+
+
+def test_fake_install(install_mockery):
+ """Test to cover fake install basics."""
+ spec = spack.spec.Spec('trivial-install-test-package')
+ spec.concretize()
+ assert spec.concrete
+ pkg = spec.package
+ inst._do_fake_install(pkg)
+ assert os.path.isdir(pkg.prefix.lib)
+
+
+def test_packages_needed_to_bootstrap_compiler(install_mockery, monkeypatch):
+ """Test to cover most of _packages_needed_to_boostrap_compiler."""
+ # TODO: More work is needed to go beyond the dependency check
+ def _no_compilers(pkg, arch_spec):
+ return []
+
+ # Test path where no compiler packages returned
+ spec = spack.spec.Spec('trivial-install-test-package')
+ spec.concretize()
+ assert spec.concrete
+ packages = inst._packages_needed_to_bootstrap_compiler(spec.package)
+ assert not packages
+
+ # Test up to the dependency check
+ monkeypatch.setattr(spack.compilers, 'compilers_for_spec', _no_compilers)
+    with pytest.raises(spack.repo.UnknownPackageError, match='not found'):
+ inst._packages_needed_to_bootstrap_compiler(spec.package)
+
+
+def test_dump_packages_deps(install_mockery, tmpdir):
+ """Test to add coverage to dump_packages."""
+ spec = spack.spec.Spec('simple-inheritance').concretized()
+ with tmpdir.as_cwd():
+ inst.dump_packages(spec, '.')
+
+
+def test_add_bootstrap_compilers(install_mockery, monkeypatch):
+ """Test to cover _add_bootstrap_compilers."""
+ def _pkgs(pkg):
+ spec = spack.spec.Spec('mpi').concretized()
+ return [(spec.package, True)]
+
+ spec, installer = create_installer('trivial-install-test-package')
+
+ monkeypatch.setattr(inst, '_packages_needed_to_bootstrap_compiler', _pkgs)
+ installer._add_bootstrap_compilers(spec.package)
+
+ ids = list(installer.build_tasks)
+ assert len(ids) == 1
+ task = installer.build_tasks[ids[0]]
+ assert task.compiler
+
+
+def test_prepare_for_install_on_installed(install_mockery, monkeypatch):
+ """Test of _prepare_for_install's early return for installed task path."""
+ spec, installer = create_installer('dependent-install')
+ task = create_build_task(spec.package)
+ installer.installed.add(task.pkg_id)
+
+ monkeypatch.setattr(inst.PackageInstaller, '_ensure_install_ready', _noop)
+ installer._prepare_for_install(task, True, True, False)
+
+
+def test_installer_init_queue(install_mockery):
+ """Test of installer queue functions."""
+ with spack.config.override('config:install_missing_compilers', True):
+ spec, installer = create_installer('dependent-install')
+ installer._init_queue(True, True)
+
+ ids = list(installer.build_tasks)
+ assert len(ids) == 2
+ assert 'dependency-install' in ids
+ assert 'dependent-install' in ids
+
+
+def test_install_task_use_cache(install_mockery, monkeypatch):
+ """Test _install_task to cover use_cache path."""
+ spec, installer = create_installer('trivial-install-test-package')
+ task = create_build_task(spec.package)
+
+ monkeypatch.setattr(inst, '_install_from_cache', _true)
+ installer._install_task(task)
+ assert spec.package.name in installer.installed
+
+
+def test_release_lock_write_n_exception(install_mockery, tmpdir, capsys):
+ """Test _release_lock for supposed write lock with exception."""
+ spec, installer = create_installer('trivial-install-test-package')
+
+ pkg_id = 'test'
+ with tmpdir.as_cwd():
+ lock = lk.Lock('./test', default_timeout=1e-9, desc='test')
+ installer.locks[pkg_id] = ('write', lock)
+ assert lock._writes == 0
+
+ installer._release_lock(pkg_id)
+ out = str(capsys.readouterr()[1])
+ msg = 'exception when releasing write lock for {0}'.format(pkg_id)
+ assert msg in out
+
+
+def test_requeue_task(install_mockery, capfd):
+ """Test to ensure cover _requeue_task."""
+ spec, installer = create_installer('a')
+ task = create_build_task(spec.package)
+
+ installer._requeue_task(task)
+
+ ids = list(installer.build_tasks)
+ assert len(ids) == 1
+ qtask = installer.build_tasks[ids[0]]
+ assert qtask.status == inst.STATUS_INSTALLING
+
+ out = capfd.readouterr()[0]
+ assert 'Installing a in progress by another process' in out
+
+
+def test_cleanup_all_tasks(install_mockery, monkeypatch):
+ """Test to ensure cover _cleanup_all_tasks."""
+ def _mktask(pkg):
+ return create_build_task(pkg)
+
+ def _rmtask(installer, pkg_id):
+ raise RuntimeError('Raise an exception to test except path')
+
+ spec, installer = create_installer('a')
+
+ # Cover task removal happy path
+ installer.build_tasks['a'] = _mktask(spec.package)
+ installer._cleanup_all_tasks()
+ assert len(installer.build_tasks) == 0
+
+ # Cover task removal exception path
+ installer.build_tasks['a'] = _mktask(spec.package)
+ monkeypatch.setattr(inst.PackageInstaller, '_remove_task', _rmtask)
+ installer._cleanup_all_tasks()
+ assert len(installer.build_tasks) == 1
+
+
+def test_cleanup_failed(install_mockery, tmpdir, monkeypatch, capsys):
+ """Test to increase coverage of _cleanup_failed."""
+ msg = 'Fake release_write exception'
+
+ def _raise_except(lock):
+ raise RuntimeError(msg)
+
+ spec, installer = create_installer('trivial-install-test-package')
+
+ monkeypatch.setattr(lk.Lock, 'release_write', _raise_except)
+ pkg_id = 'test'
+ with tmpdir.as_cwd():
+ lock = lk.Lock('./test', default_timeout=1e-9, desc='test')
+ installer.failed[pkg_id] = lock
+
+ installer._cleanup_failed(pkg_id)
+ out = str(capsys.readouterr()[1])
+ assert 'exception when removing failure mark' in out
+ assert msg in out
+
+
+def test_update_failed_no_mark(install_mockery):
+ """Test of _update_failed sans mark and dependent build tasks."""
+ spec, installer = create_installer('dependent-install')
+ task = create_build_task(spec.package)
+
+ installer._update_failed(task)
+ assert installer.failed['dependent-install'] is None
+
+
+def test_install_uninstalled_deps(install_mockery, monkeypatch, capsys):
+ """Test install with uninstalled dependencies."""
+ spec, installer = create_installer('dependent-install')
+
+ # Skip the actual installation and any status updates
+ monkeypatch.setattr(inst.PackageInstaller, '_install_task', _noop)
+ monkeypatch.setattr(inst.PackageInstaller, '_update_installed', _noop)
+ monkeypatch.setattr(inst.PackageInstaller, '_update_failed', _noop)
+
+ msg = 'Cannot proceed with dependent-install'
+    with pytest.raises(spack.installer.InstallError, match=msg):
+ installer.install()
+
+ out = str(capsys.readouterr())
+ assert 'Detected uninstalled dependencies for' in out
+
+
+def test_install_failed(install_mockery, monkeypatch, capsys):
+ """Test install with failed install."""
+ spec, installer = create_installer('b')
+
+ # Make sure the package is identified as failed
+ monkeypatch.setattr(spack.database.Database, 'prefix_failed', _true)
+
+ # Skip the actual installation though it should never get there
+ monkeypatch.setattr(inst.PackageInstaller, '_install_task', _noop)
+
+ msg = 'Installation of b failed'
+    with pytest.raises(spack.installer.InstallError, match=msg):
+ installer.install()
+
+ out = str(capsys.readouterr())
+ assert 'Warning: b failed to install' in out
+
+
+def test_install_lock_failures(install_mockery, monkeypatch, capfd):
+ """Cover basic install lock failure handling in a single pass."""
+ def _requeued(installer, task):
+        tty.msg('requeued {0}'.format(task.pkg.spec.name))
+
+ def _not_locked(installer, lock_type, pkg):
+        tty.msg('{0} locked {1}'.format(lock_type, pkg.spec.name))
+ return lock_type, None
+
+ spec, installer = create_installer('b')
+
+    # Ensure we never acquire a lock
+ monkeypatch.setattr(inst.PackageInstaller, '_ensure_locked', _not_locked)
+
+    # Ensure we don't continually requeue the task
+ monkeypatch.setattr(inst.PackageInstaller, '_requeue_task', _requeued)
+
+    # Skip the actual installation, though it should never be reached
+ monkeypatch.setattr(inst.PackageInstaller, '_install_task', _noop)
+
+ installer.install()
+ out = capfd.readouterr()[0]
+ expected = ['write locked', 'read locked', 'requeued']
+ for exp, ln in zip(expected, out.split('\n')):
+ assert exp in ln
+
+
+def test_install_lock_installed_requeue(install_mockery, monkeypatch, capfd):
+ """Cover basic install handling for installed package."""
+ def _install(installer, task, **kwargs):
+ tty.msg('{0} installing'.format(task.pkg.spec.name))
+
+ def _not_locked(installer, lock_type, pkg):
+        tty.msg('{0} locked {1}'.format(lock_type, pkg.spec.name))
+ return lock_type, None
+
+ def _prep(installer, task, keep_prefix, keep_stage, restage):
+ installer.installed.add('b')
+        tty.msg('{0} is installed'.format(task.pkg.spec.name))
+
+ # also do not allow the package to be locked again
+ monkeypatch.setattr(inst.PackageInstaller, '_ensure_locked',
+ _not_locked)
+
+ def _requeued(installer, task):
+        tty.msg('requeued {0}'.format(task.pkg.spec.name))
+
+    # Skip the actual installation, though it should never be reached
+ monkeypatch.setattr(inst.PackageInstaller, '_install_task', _install)
+
+ # Flag the package as installed
+ monkeypatch.setattr(inst.PackageInstaller, '_prepare_for_install', _prep)
+
+    # Ensure we don't continually requeue the task
+ monkeypatch.setattr(inst.PackageInstaller, '_requeue_task', _requeued)
+
+ spec, installer = create_installer('b')
+
+ installer.install()
+ assert 'b' not in installer.installed
+
+ out = capfd.readouterr()[0]
+ expected = ['is installed', 'read locked', 'requeued']
+ for exp, ln in zip(expected, out.split('\n')):
+ assert exp in ln
+
+
+def test_install_read_locked_requeue(install_mockery, monkeypatch, capfd):
+ """Cover basic read lock handling for uninstalled package with requeue."""
+ orig_fn = inst.PackageInstaller._ensure_locked
+
+ def _install(installer, task, **kwargs):
+ tty.msg('{0} installing'.format(task.pkg.spec.name))
+
+ def _read(installer, lock_type, pkg):
+        tty.msg('{0}->read locked {1}'.format(lock_type, pkg.spec.name))
+ return orig_fn(installer, 'read', pkg)
+
+ def _prep(installer, task, keep_prefix, keep_stage, restage):
+        tty.msg('preparing {0}'.format(task.pkg.spec.name))
+ assert task.pkg.spec.name not in installer.installed
+
+ def _requeued(installer, task):
+        tty.msg('requeued {0}'.format(task.pkg.spec.name))
+
+ # Force a read lock
+ monkeypatch.setattr(inst.PackageInstaller, '_ensure_locked', _read)
+
+    # Skip the actual installation, though it should never be reached
+ monkeypatch.setattr(inst.PackageInstaller, '_install_task', _install)
+
+ # Flag the package as installed
+ monkeypatch.setattr(inst.PackageInstaller, '_prepare_for_install', _prep)
+
+    # Ensure we don't continually requeue the task
+ monkeypatch.setattr(inst.PackageInstaller, '_requeue_task', _requeued)
+
+ spec, installer = create_installer('b')
+
+ installer.install()
+ assert 'b' not in installer.installed
+
+ out = capfd.readouterr()[0]
+ expected = ['write->read locked', 'preparing', 'requeued']
+ for exp, ln in zip(expected, out.split('\n')):
+ assert exp in ln
+
+
+def test_install_dir_exists(install_mockery, monkeypatch, capfd):
+ """Cover capture of install directory exists error."""
+ err = 'Mock directory exists error'
+
+ def _install(installer, task, **kwargs):
+ raise dl.InstallDirectoryAlreadyExistsError(err)
+
+    # Skip the actual installation, though it should never be reached
+ monkeypatch.setattr(inst.PackageInstaller, '_install_task', _install)
+
+ spec, installer = create_installer('b')
+
+    with pytest.raises(dl.InstallDirectoryAlreadyExistsError, match=err):
+ installer.install()
+
+ assert 'b' in installer.installed
diff --git a/lib/spack/spack/test/llnl/util/lock.py b/lib/spack/spack/test/llnl/util/lock.py
index 63f2b91782..7236e1dbf9 100644
--- a/lib/spack/spack/test/llnl/util/lock.py
+++ b/lib/spack/spack/test/llnl/util/lock.py
@@ -1240,3 +1240,57 @@ def test_lock_in_current_directory(tmpdir):
pass
with lk.WriteTransaction(lock):
pass
+
+
+def test_attempts_str():
+ assert lk._attempts_str(0, 0) == ''
+ assert lk._attempts_str(0.12, 1) == ''
+ assert lk._attempts_str(12.345, 2) == ' after 12.35s and 2 attempts'
+
+
+def test_lock_str():
+ lock = lk.Lock('lockfile')
+ lockstr = str(lock)
+ assert 'lockfile[0:0]' in lockstr
+ assert 'timeout=None' in lockstr
+ assert '#reads=0, #writes=0' in lockstr
+
+
+def test_downgrade_write_okay(tmpdir):
+ """Test the lock write-to-read downgrade operation."""
+ with tmpdir.as_cwd():
+ lock = lk.Lock('lockfile')
+ lock.acquire_write()
+ lock.downgrade_write_to_read()
+ assert lock._reads == 1
+ assert lock._writes == 0
+
+
+def test_downgrade_write_fails(tmpdir):
+ """Test failing the lock write-to-read downgrade operation."""
+ with tmpdir.as_cwd():
+ lock = lk.Lock('lockfile')
+ lock.acquire_read()
+ msg = 'Cannot downgrade lock from write to read on file: lockfile'
+        with pytest.raises(lk.LockDowngradeError, match=msg):
+ lock.downgrade_write_to_read()
+
+
+def test_upgrade_read_okay(tmpdir):
+ """Test the lock read-to-write upgrade operation."""
+ with tmpdir.as_cwd():
+ lock = lk.Lock('lockfile')
+ lock.acquire_read()
+ lock.upgrade_read_to_write()
+ assert lock._reads == 0
+ assert lock._writes == 1
+
+
+def test_upgrade_read_fails(tmpdir):
+ """Test failing the lock read-to-write upgrade operation."""
+ with tmpdir.as_cwd():
+ lock = lk.Lock('lockfile')
+ lock.acquire_write()
+ msg = 'Cannot upgrade lock from read to write on file: lockfile'
+        with pytest.raises(lk.LockUpgradeError, match=msg):
+ lock.upgrade_read_to_write()
diff --git a/var/spack/repos/builtin.mock/packages/a/package.py b/var/spack/repos/builtin.mock/packages/a/package.py
index 3eae25366a..04e69dcd91 100644
--- a/var/spack/repos/builtin.mock/packages/a/package.py
+++ b/var/spack/repos/builtin.mock/packages/a/package.py
@@ -49,4 +49,6 @@ class A(AutotoolsPackage):
pass
def install(self, spec, prefix):
- pass
+ # sanity_check_prefix requires something in the install directory
+ # Test requires overriding the one provided by `AutotoolsPackage`
+ mkdirp(prefix.bin)
diff --git a/var/spack/repos/builtin.mock/packages/b/package.py b/var/spack/repos/builtin.mock/packages/b/package.py
index 5fabc274d2..0dd6556e82 100644
--- a/var/spack/repos/builtin.mock/packages/b/package.py
+++ b/var/spack/repos/builtin.mock/packages/b/package.py
@@ -13,6 +13,3 @@ class B(Package):
url = "http://www.example.com/b-1.0.tar.gz"
version('1.0', '0123456789abcdef0123456789abcdef')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/boost/package.py b/var/spack/repos/builtin.mock/packages/boost/package.py
index f946191f47..6d2cea3da9 100644
--- a/var/spack/repos/builtin.mock/packages/boost/package.py
+++ b/var/spack/repos/builtin.mock/packages/boost/package.py
@@ -59,6 +59,3 @@ class Boost(Package):
description="Build the Boost Graph library")
variant('taggedlayout', default=False,
description="Augment library names with build options")
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/c/package.py b/var/spack/repos/builtin.mock/packages/c/package.py
index 9942a297eb..835f9d408e 100644
--- a/var/spack/repos/builtin.mock/packages/c/package.py
+++ b/var/spack/repos/builtin.mock/packages/c/package.py
@@ -13,6 +13,3 @@ class C(Package):
url = "http://www.example.com/c-1.0.tar.gz"
version('1.0', '0123456789abcdef0123456789abcdef')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/conflicting-dependent/package.py b/var/spack/repos/builtin.mock/packages/conflicting-dependent/package.py
index 8fa6c67d6e..3c0f120085 100644
--- a/var/spack/repos/builtin.mock/packages/conflicting-dependent/package.py
+++ b/var/spack/repos/builtin.mock/packages/conflicting-dependent/package.py
@@ -17,6 +17,3 @@ class ConflictingDependent(Package):
version('1.0', '0123456789abcdef0123456789abcdef')
depends_on('dependency-install@:1.0')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/dep-diamond-patch-mid1/package.py b/var/spack/repos/builtin.mock/packages/dep-diamond-patch-mid1/package.py
index 291fd887a6..2246bf0fce 100644
--- a/var/spack/repos/builtin.mock/packages/dep-diamond-patch-mid1/package.py
+++ b/var/spack/repos/builtin.mock/packages/dep-diamond-patch-mid1/package.py
@@ -25,6 +25,3 @@ X Y
# single patch file in repo
depends_on('patch', patches='mid1.patch')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/dep-diamond-patch-mid2/package.py b/var/spack/repos/builtin.mock/packages/dep-diamond-patch-mid2/package.py
index fa53466440..9638872a11 100644
--- a/var/spack/repos/builtin.mock/packages/dep-diamond-patch-mid2/package.py
+++ b/var/spack/repos/builtin.mock/packages/dep-diamond-patch-mid2/package.py
@@ -28,6 +28,3 @@ X Y
patch('http://example.com/urlpatch.patch',
sha256='mid21234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234'), # noqa: E501
])
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/dep-diamond-patch-top/package.py b/var/spack/repos/builtin.mock/packages/dep-diamond-patch-top/package.py
index c966045952..fb86fa3ad3 100644
--- a/var/spack/repos/builtin.mock/packages/dep-diamond-patch-top/package.py
+++ b/var/spack/repos/builtin.mock/packages/dep-diamond-patch-top/package.py
@@ -27,6 +27,3 @@ X Y
depends_on('patch', patches='top.patch')
depends_on('dep-diamond-patch-mid1')
depends_on('dep-diamond-patch-mid2')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/develop-test/package.py b/var/spack/repos/builtin.mock/packages/develop-test/package.py
index 4a2fde701e..5c8820756d 100644
--- a/var/spack/repos/builtin.mock/packages/develop-test/package.py
+++ b/var/spack/repos/builtin.mock/packages/develop-test/package.py
@@ -13,6 +13,3 @@ class DevelopTest(Package):
version('develop', git='https://github.com/dummy/repo.git')
version('0.2.15', 'b1190f3d3471685f17cfd1ec1d252ac9')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/develop-test2/package.py b/var/spack/repos/builtin.mock/packages/develop-test2/package.py
index 464219f603..b3a808206d 100644
--- a/var/spack/repos/builtin.mock/packages/develop-test2/package.py
+++ b/var/spack/repos/builtin.mock/packages/develop-test2/package.py
@@ -13,6 +13,3 @@ class DevelopTest2(Package):
version('0.2.15.develop', git='https://github.com/dummy/repo.git')
version('0.2.15', 'b1190f3d3471685f17cfd1ec1d252ac9')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/direct-mpich/package.py b/var/spack/repos/builtin.mock/packages/direct-mpich/package.py
index 4db94c66c2..940ecde224 100644
--- a/var/spack/repos/builtin.mock/packages/direct-mpich/package.py
+++ b/var/spack/repos/builtin.mock/packages/direct-mpich/package.py
@@ -13,6 +13,3 @@ class DirectMpich(Package):
version('1.0', 'foobarbaz')
depends_on('mpich')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/dt-diamond-bottom/package.py b/var/spack/repos/builtin.mock/packages/dt-diamond-bottom/package.py
index 85f661a485..4ad109a272 100644
--- a/var/spack/repos/builtin.mock/packages/dt-diamond-bottom/package.py
+++ b/var/spack/repos/builtin.mock/packages/dt-diamond-bottom/package.py
@@ -12,6 +12,3 @@ class DtDiamondBottom(Package):
url = "http://www.example.com/dt-diamond-bottom-1.0.tar.gz"
version('1.0', '0123456789abcdef0123456789abcdef')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/dt-diamond-left/package.py b/var/spack/repos/builtin.mock/packages/dt-diamond-left/package.py
index d0ed4efaeb..9e3710c210 100644
--- a/var/spack/repos/builtin.mock/packages/dt-diamond-left/package.py
+++ b/var/spack/repos/builtin.mock/packages/dt-diamond-left/package.py
@@ -14,6 +14,3 @@ class DtDiamondLeft(Package):
version('1.0', '0123456789abcdef0123456789abcdef')
depends_on('dt-diamond-bottom', type='build')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/dt-diamond-right/package.py b/var/spack/repos/builtin.mock/packages/dt-diamond-right/package.py
index c7842ead47..4fd5fa733c 100644
--- a/var/spack/repos/builtin.mock/packages/dt-diamond-right/package.py
+++ b/var/spack/repos/builtin.mock/packages/dt-diamond-right/package.py
@@ -14,6 +14,3 @@ class DtDiamondRight(Package):
version('1.0', '0123456789abcdef0123456789abcdef')
depends_on('dt-diamond-bottom', type=('build', 'link', 'run'))
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/dt-diamond/package.py b/var/spack/repos/builtin.mock/packages/dt-diamond/package.py
index 9a441a953b..90afdd2400 100644
--- a/var/spack/repos/builtin.mock/packages/dt-diamond/package.py
+++ b/var/spack/repos/builtin.mock/packages/dt-diamond/package.py
@@ -15,6 +15,3 @@ class DtDiamond(Package):
depends_on('dt-diamond-left')
depends_on('dt-diamond-right')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/dtbuild1/package.py b/var/spack/repos/builtin.mock/packages/dtbuild1/package.py
index 07e86bb3e0..f1921ad43f 100644
--- a/var/spack/repos/builtin.mock/packages/dtbuild1/package.py
+++ b/var/spack/repos/builtin.mock/packages/dtbuild1/package.py
@@ -18,6 +18,3 @@ class Dtbuild1(Package):
depends_on('dtbuild2', type='build')
depends_on('dtlink2')
depends_on('dtrun2', type='run')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/dtbuild2/package.py b/var/spack/repos/builtin.mock/packages/dtbuild2/package.py
index 752a34c900..cf783ccd9b 100644
--- a/var/spack/repos/builtin.mock/packages/dtbuild2/package.py
+++ b/var/spack/repos/builtin.mock/packages/dtbuild2/package.py
@@ -13,6 +13,3 @@ class Dtbuild2(Package):
url = "http://www.example.com/dtbuild2-1.0.tar.gz"
version('1.0', '0123456789abcdef0123456789abcdef')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/dtbuild3/package.py b/var/spack/repos/builtin.mock/packages/dtbuild3/package.py
index 23fd06bcb9..1b8f89a2da 100644
--- a/var/spack/repos/builtin.mock/packages/dtbuild3/package.py
+++ b/var/spack/repos/builtin.mock/packages/dtbuild3/package.py
@@ -13,6 +13,3 @@ class Dtbuild3(Package):
url = "http://www.example.com/dtbuild3-1.0.tar.gz"
version('1.0', '0123456789abcdef0123456789abcdef')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/dtlink1/package.py b/var/spack/repos/builtin.mock/packages/dtlink1/package.py
index 0235591381..725d3b2061 100644
--- a/var/spack/repos/builtin.mock/packages/dtlink1/package.py
+++ b/var/spack/repos/builtin.mock/packages/dtlink1/package.py
@@ -15,6 +15,3 @@ class Dtlink1(Package):
version('1.0', '0123456789abcdef0123456789abcdef')
depends_on('dtlink3')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/dtlink2/package.py b/var/spack/repos/builtin.mock/packages/dtlink2/package.py
index 1f629d3d6b..b4d871a841 100644
--- a/var/spack/repos/builtin.mock/packages/dtlink2/package.py
+++ b/var/spack/repos/builtin.mock/packages/dtlink2/package.py
@@ -13,6 +13,3 @@ class Dtlink2(Package):
url = "http://www.example.com/dtlink2-1.0.tar.gz"
version('1.0', '0123456789abcdef0123456789abcdef')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/dtlink3/package.py b/var/spack/repos/builtin.mock/packages/dtlink3/package.py
index dc0d2532ec..732b68f867 100644
--- a/var/spack/repos/builtin.mock/packages/dtlink3/package.py
+++ b/var/spack/repos/builtin.mock/packages/dtlink3/package.py
@@ -16,6 +16,3 @@ class Dtlink3(Package):
depends_on('dtbuild2', type='build')
depends_on('dtlink4')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/dtlink4/package.py b/var/spack/repos/builtin.mock/packages/dtlink4/package.py
index 2ec4ddd7e2..d7ac8115ad 100644
--- a/var/spack/repos/builtin.mock/packages/dtlink4/package.py
+++ b/var/spack/repos/builtin.mock/packages/dtlink4/package.py
@@ -13,6 +13,3 @@ class Dtlink4(Package):
url = "http://www.example.com/dtlink4-1.0.tar.gz"
version('1.0', '0123456789abcdef0123456789abcdef')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/dtlink5/package.py b/var/spack/repos/builtin.mock/packages/dtlink5/package.py
index cb191f32f0..faf429bfc1 100644
--- a/var/spack/repos/builtin.mock/packages/dtlink5/package.py
+++ b/var/spack/repos/builtin.mock/packages/dtlink5/package.py
@@ -13,6 +13,3 @@ class Dtlink5(Package):
url = "http://www.example.com/dtlink5-1.0.tar.gz"
version('1.0', '0123456789abcdef0123456789abcdef')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/dtrun1/package.py b/var/spack/repos/builtin.mock/packages/dtrun1/package.py
index fcc26022ca..617d9f3b8a 100644
--- a/var/spack/repos/builtin.mock/packages/dtrun1/package.py
+++ b/var/spack/repos/builtin.mock/packages/dtrun1/package.py
@@ -16,6 +16,3 @@ class Dtrun1(Package):
depends_on('dtlink5')
depends_on('dtrun3', type='run')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/dtrun2/package.py b/var/spack/repos/builtin.mock/packages/dtrun2/package.py
index f455af1bb7..1ba63a2822 100644
--- a/var/spack/repos/builtin.mock/packages/dtrun2/package.py
+++ b/var/spack/repos/builtin.mock/packages/dtrun2/package.py
@@ -13,6 +13,3 @@ class Dtrun2(Package):
url = "http://www.example.com/dtrun2-1.0.tar.gz"
version('1.0', '0123456789abcdef0123456789abcdef')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/dtrun3/package.py b/var/spack/repos/builtin.mock/packages/dtrun3/package.py
index c884ef6b46..c1caea1dde 100644
--- a/var/spack/repos/builtin.mock/packages/dtrun3/package.py
+++ b/var/spack/repos/builtin.mock/packages/dtrun3/package.py
@@ -15,6 +15,3 @@ class Dtrun3(Package):
version('1.0', '0123456789abcdef0123456789abcdef')
depends_on('dtbuild3', type='build')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/dttop/package.py b/var/spack/repos/builtin.mock/packages/dttop/package.py
index 225af9fee4..120e70e40c 100644
--- a/var/spack/repos/builtin.mock/packages/dttop/package.py
+++ b/var/spack/repos/builtin.mock/packages/dttop/package.py
@@ -17,6 +17,3 @@ class Dttop(Package):
depends_on('dtbuild1', type='build')
depends_on('dtlink1')
depends_on('dtrun1', type='run')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/dtuse/package.py b/var/spack/repos/builtin.mock/packages/dtuse/package.py
index 083cb1ba76..0a5836d0f8 100644
--- a/var/spack/repos/builtin.mock/packages/dtuse/package.py
+++ b/var/spack/repos/builtin.mock/packages/dtuse/package.py
@@ -15,6 +15,3 @@ class Dtuse(Package):
version('1.0', '0123456789abcdef0123456789abcdef')
depends_on('dttop')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/e/package.py b/var/spack/repos/builtin.mock/packages/e/package.py
index c85c78e137..d52db11abe 100644
--- a/var/spack/repos/builtin.mock/packages/e/package.py
+++ b/var/spack/repos/builtin.mock/packages/e/package.py
@@ -13,6 +13,3 @@ class E(Package):
url = "http://www.example.com/e-1.0.tar.gz"
version('1.0', '0123456789abcdef0123456789abcdef')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/externalmodule/package.py b/var/spack/repos/builtin.mock/packages/externalmodule/package.py
index f871762c7e..675edbc23d 100644
--- a/var/spack/repos/builtin.mock/packages/externalmodule/package.py
+++ b/var/spack/repos/builtin.mock/packages/externalmodule/package.py
@@ -13,6 +13,3 @@ class Externalmodule(Package):
version('1.0', '1234567890abcdef1234567890abcdef')
depends_on('externalprereq')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/externalprereq/package.py b/var/spack/repos/builtin.mock/packages/externalprereq/package.py
index 866df9f838..595bc8b3b7 100644
--- a/var/spack/repos/builtin.mock/packages/externalprereq/package.py
+++ b/var/spack/repos/builtin.mock/packages/externalprereq/package.py
@@ -11,6 +11,3 @@ class Externalprereq(Package):
url = "http://somewhere.com/prereq-1.0.tar.gz"
version('1.4', 'f1234567890abcdef1234567890abcde')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/externaltool/package.py b/var/spack/repos/builtin.mock/packages/externaltool/package.py
index 2d6caef35d..4677dfeda9 100644
--- a/var/spack/repos/builtin.mock/packages/externaltool/package.py
+++ b/var/spack/repos/builtin.mock/packages/externaltool/package.py
@@ -14,6 +14,3 @@ class Externaltool(Package):
version('0.9', '1234567890abcdef1234567890abcdef')
depends_on('externalprereq')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/externalvirtual/package.py b/var/spack/repos/builtin.mock/packages/externalvirtual/package.py
index e16997dbd8..aace40767d 100644
--- a/var/spack/repos/builtin.mock/packages/externalvirtual/package.py
+++ b/var/spack/repos/builtin.mock/packages/externalvirtual/package.py
@@ -16,6 +16,3 @@ class Externalvirtual(Package):
version('2.2', '4567890abcdef1234567890abcdef123')
provides('stuff', when='@1.0:')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/fake/package.py b/var/spack/repos/builtin.mock/packages/fake/package.py
index e022af1766..16ff3c6dd6 100644
--- a/var/spack/repos/builtin.mock/packages/fake/package.py
+++ b/var/spack/repos/builtin.mock/packages/fake/package.py
@@ -11,6 +11,3 @@ class Fake(Package):
url = "http://www.fake-spack-example.org/downloads/fake-1.0.tar.gz"
version('1.0', 'foobarbaz')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/flake8/package.py b/var/spack/repos/builtin.mock/packages/flake8/package.py
index 2de7bd8e12..c548ca69b2 100644
--- a/var/spack/repos/builtin.mock/packages/flake8/package.py
+++ b/var/spack/repos/builtin.mock/packages/flake8/package.py
@@ -58,7 +58,11 @@ class Flake8(Package):
if 'really-long-if-statement' != 'that-goes-over-the-line-length-limit-and-requires-noqa': # noqa
pass
+ # sanity_check_prefix requires something in the install directory
+ mkdirp(prefix.bin)
+
# '@when' decorated functions are exempt from redefinition errors
@when('@2.0')
def install(self, spec, prefix):
- pass
+ # sanity_check_prefix requires something in the install directory
+ mkdirp(prefix.bin)
diff --git a/var/spack/repos/builtin.mock/packages/git-svn-top-level/package.py b/var/spack/repos/builtin.mock/packages/git-svn-top-level/package.py
index 72e7b1fe89..e9acff3c6d 100644
--- a/var/spack/repos/builtin.mock/packages/git-svn-top-level/package.py
+++ b/var/spack/repos/builtin.mock/packages/git-svn-top-level/package.py
@@ -15,6 +15,3 @@ class GitSvnTopLevel(Package):
svn = 'https://example.com/some/svn/repo'
version('2.0')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/git-test/package.py b/var/spack/repos/builtin.mock/packages/git-test/package.py
index a5aa428664..8430966282 100644
--- a/var/spack/repos/builtin.mock/packages/git-test/package.py
+++ b/var/spack/repos/builtin.mock/packages/git-test/package.py
@@ -11,6 +11,3 @@ class GitTest(Package):
homepage = "http://www.git-fetch-example.com"
version('git', git='to-be-filled-in-by-test')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/git-top-level/package.py b/var/spack/repos/builtin.mock/packages/git-top-level/package.py
index 1e511628f3..ba80263224 100644
--- a/var/spack/repos/builtin.mock/packages/git-top-level/package.py
+++ b/var/spack/repos/builtin.mock/packages/git-top-level/package.py
@@ -12,6 +12,3 @@ class GitTopLevel(Package):
git = 'https://example.com/some/git/repo'
version('1.0')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/git-url-svn-top-level/package.py b/var/spack/repos/builtin.mock/packages/git-url-svn-top-level/package.py
index e1c7476e3b..b998c0f014 100644
--- a/var/spack/repos/builtin.mock/packages/git-url-svn-top-level/package.py
+++ b/var/spack/repos/builtin.mock/packages/git-url-svn-top-level/package.py
@@ -16,6 +16,3 @@ class GitUrlSvnTopLevel(Package):
svn = 'https://example.com/some/svn/repo'
version('2.0')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/git-url-top-level/package.py b/var/spack/repos/builtin.mock/packages/git-url-top-level/package.py
index 2eacba15b7..d93ead7bee 100644
--- a/var/spack/repos/builtin.mock/packages/git-url-top-level/package.py
+++ b/var/spack/repos/builtin.mock/packages/git-url-top-level/package.py
@@ -38,6 +38,3 @@ class GitUrlTopLevel(Package):
version('1.2', sha512='abc12', branch='releases/v1.2')
version('1.1', md5='abc11', tag='v1.1')
version('1.0', 'abc11', tag='abc123')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/hash-test1/package.py b/var/spack/repos/builtin.mock/packages/hash-test1/package.py
index db3616145b..c46f2339ba 100644
--- a/var/spack/repos/builtin.mock/packages/hash-test1/package.py
+++ b/var/spack/repos/builtin.mock/packages/hash-test1/package.py
@@ -3,10 +3,10 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
-from spack import *
-
import os
+from spack import *
+
class HashTest1(Package):
"""Used to test package hashing
@@ -37,10 +37,16 @@ class HashTest1(Package):
print("install 1")
os.listdir(os.getcwd())
+ # sanity_check_prefix requires something in the install directory
+ mkdirp(prefix.bin)
+
@when('@1.5:')
def install(self, spec, prefix):
os.listdir(os.getcwd())
+ # sanity_check_prefix requires something in the install directory
+ mkdirp(prefix.bin)
+
@when('@1.5,1.6')
def extra_phase(self, spec, prefix):
pass
diff --git a/var/spack/repos/builtin.mock/packages/hash-test2/package.py b/var/spack/repos/builtin.mock/packages/hash-test2/package.py
index 859eb86c95..2358cd66e2 100644
--- a/var/spack/repos/builtin.mock/packages/hash-test2/package.py
+++ b/var/spack/repos/builtin.mock/packages/hash-test2/package.py
@@ -3,10 +3,10 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
-from spack import *
-
import os
+from spack import *
+
class HashTest2(Package):
"""Used to test package hashing
@@ -31,3 +31,6 @@ class HashTest2(Package):
def install(self, spec, prefix):
print("install 1")
os.listdir(os.getcwd())
+
+ # sanity_check_prefix requires something in the install directory
+ mkdirp(prefix.bin)
diff --git a/var/spack/repos/builtin.mock/packages/hash-test3/package.py b/var/spack/repos/builtin.mock/packages/hash-test3/package.py
index 7c452c3ba1..e943bd46c7 100644
--- a/var/spack/repos/builtin.mock/packages/hash-test3/package.py
+++ b/var/spack/repos/builtin.mock/packages/hash-test3/package.py
@@ -3,10 +3,10 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
-from spack import *
-
import os
+from spack import *
+
class HashTest3(Package):
"""Used to test package hashing
@@ -32,10 +32,16 @@ class HashTest3(Package):
print("install 1")
os.listdir(os.getcwd())
+ # sanity_check_prefix requires something in the install directory
+ mkdirp(prefix.bin)
+
@when('@1.5:')
def install(self, spec, prefix):
os.listdir(os.getcwd())
+ # sanity_check_prefix requires something in the install directory
+ mkdirp(prefix.bin)
+
for _version_constraint in ['@1.5', '@1.6']:
@when(_version_constraint)
def extra_phase(self, spec, prefix):
diff --git a/var/spack/repos/builtin.mock/packages/hg-test/package.py b/var/spack/repos/builtin.mock/packages/hg-test/package.py
index ede967ce0c..67970d4272 100644
--- a/var/spack/repos/builtin.mock/packages/hg-test/package.py
+++ b/var/spack/repos/builtin.mock/packages/hg-test/package.py
@@ -11,6 +11,3 @@ class HgTest(Package):
homepage = "http://www.hg-fetch-example.com"
version('hg', hg='to-be-filled-in-by-test')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/hg-top-level/package.py b/var/spack/repos/builtin.mock/packages/hg-top-level/package.py
index 6f402c801b..0f23972592 100644
--- a/var/spack/repos/builtin.mock/packages/hg-top-level/package.py
+++ b/var/spack/repos/builtin.mock/packages/hg-top-level/package.py
@@ -12,6 +12,3 @@ class HgTopLevel(Package):
hg = 'https://example.com/some/hg/repo'
version('1.0')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/hypre/package.py b/var/spack/repos/builtin.mock/packages/hypre/package.py
index 27c641017a..bf48afd971 100644
--- a/var/spack/repos/builtin.mock/packages/hypre/package.py
+++ b/var/spack/repos/builtin.mock/packages/hypre/package.py
@@ -16,6 +16,3 @@ class Hypre(Package):
depends_on('lapack')
depends_on('blas')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/indirect-mpich/package.py b/var/spack/repos/builtin.mock/packages/indirect-mpich/package.py
index 5a06f77dd3..34bf4c64a0 100644
--- a/var/spack/repos/builtin.mock/packages/indirect-mpich/package.py
+++ b/var/spack/repos/builtin.mock/packages/indirect-mpich/package.py
@@ -18,6 +18,3 @@ class IndirectMpich(Package):
depends_on('mpi')
depends_on('direct-mpich')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/maintainers-1/package.py b/var/spack/repos/builtin.mock/packages/maintainers-1/package.py
index 7942ff3de7..4c223548dd 100644
--- a/var/spack/repos/builtin.mock/packages/maintainers-1/package.py
+++ b/var/spack/repos/builtin.mock/packages/maintainers-1/package.py
@@ -15,6 +15,3 @@ class Maintainers1(Package):
maintainers = ['user1', 'user2']
version('1.0', '0123456789abcdef0123456789abcdef')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/maintainers-2/package.py b/var/spack/repos/builtin.mock/packages/maintainers-2/package.py
index d16f9e30a6..8121461858 100644
--- a/var/spack/repos/builtin.mock/packages/maintainers-2/package.py
+++ b/var/spack/repos/builtin.mock/packages/maintainers-2/package.py
@@ -15,6 +15,3 @@ class Maintainers2(Package):
maintainers = ['user2', 'user3']
version('1.0', '0123456789abcdef0123456789abcdef')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/mixedversions/package.py b/var/spack/repos/builtin.mock/packages/mixedversions/package.py
index 446278403b..e0c61ed089 100644
--- a/var/spack/repos/builtin.mock/packages/mixedversions/package.py
+++ b/var/spack/repos/builtin.mock/packages/mixedversions/package.py
@@ -12,6 +12,3 @@ class Mixedversions(Package):
version('2.0.1', 'hashc')
version('2.0', 'hashb')
version('1.0.1', 'hasha')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/module-path-separator/package.py b/var/spack/repos/builtin.mock/packages/module-path-separator/package.py
index bb65705d1c..3b4e2ca9d0 100644
--- a/var/spack/repos/builtin.mock/packages/module-path-separator/package.py
+++ b/var/spack/repos/builtin.mock/packages/module-path-separator/package.py
@@ -12,9 +12,6 @@ class ModulePathSeparator(Package):
version(1.0, 'foobarbaz')
- def install(self, spec, prefix):
- pass
-
def setup_environment(self, senv, renv):
renv.append_path("COLON", "foo")
renv.prepend_path("COLON", "foo")
diff --git a/var/spack/repos/builtin.mock/packages/multi-provider-mpi/package.py b/var/spack/repos/builtin.mock/packages/multi-provider-mpi/package.py
index 93babae83d..69995b7520 100644
--- a/var/spack/repos/builtin.mock/packages/multi-provider-mpi/package.py
+++ b/var/spack/repos/builtin.mock/packages/multi-provider-mpi/package.py
@@ -27,6 +27,3 @@ class MultiProviderMpi(Package):
provides('mpi@3.0', when='@1.10.0')
provides('mpi@3.0', when='@1.8.8')
provides('mpi@2.2', when='@1.6.5')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/multimodule-inheritance/package.py b/var/spack/repos/builtin.mock/packages/multimodule-inheritance/package.py
index 8a3aeb0099..4343b03fe5 100644
--- a/var/spack/repos/builtin.mock/packages/multimodule-inheritance/package.py
+++ b/var/spack/repos/builtin.mock/packages/multimodule-inheritance/package.py
@@ -15,6 +15,3 @@ class MultimoduleInheritance(si.BaseWithDirectives):
version('1.0', '0123456789abcdef0123456789abcdef')
depends_on('openblas', when='+openblas')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/multivalue_variant/package.py b/var/spack/repos/builtin.mock/packages/multivalue_variant/package.py
index 3e3732aa84..22d0ea1d97 100644
--- a/var/spack/repos/builtin.mock/packages/multivalue_variant/package.py
+++ b/var/spack/repos/builtin.mock/packages/multivalue_variant/package.py
@@ -33,6 +33,3 @@ class MultivalueVariant(Package):
depends_on('callpath')
depends_on('a')
depends_on('a@1.0', when='fee=barbaz')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/netlib-blas/package.py b/var/spack/repos/builtin.mock/packages/netlib-blas/package.py
index c4e8231d8f..4a759b6145 100644
--- a/var/spack/repos/builtin.mock/packages/netlib-blas/package.py
+++ b/var/spack/repos/builtin.mock/packages/netlib-blas/package.py
@@ -13,6 +13,3 @@ class NetlibBlas(Package):
version('3.5.0', 'b1d3e3e425b2e44a06760ff173104bdf')
provides('blas')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/netlib-lapack/package.py b/var/spack/repos/builtin.mock/packages/netlib-lapack/package.py
index d1a461d32c..b9c69bf66c 100644
--- a/var/spack/repos/builtin.mock/packages/netlib-lapack/package.py
+++ b/var/spack/repos/builtin.mock/packages/netlib-lapack/package.py
@@ -14,6 +14,3 @@ class NetlibLapack(Package):
provides('lapack')
depends_on('blas')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/nosource-install/package.py b/var/spack/repos/builtin.mock/packages/nosource-install/package.py
index 74e95e75b0..dc4e2755db 100644
--- a/var/spack/repos/builtin.mock/packages/nosource-install/package.py
+++ b/var/spack/repos/builtin.mock/packages/nosource-install/package.py
@@ -3,10 +3,7 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
-
-import os
from spack import *
-from llnl.util.filesystem import touch
class NosourceInstall(BundlePackage):
@@ -24,8 +21,8 @@ class NosourceInstall(BundlePackage):
# The install method must also be present.
def install(self, spec, prefix):
- touch(os.path.join(self.prefix, 'install.txt'))
+ touch(join_path(self.prefix, 'install.txt'))
@run_after('install')
def post_install(self):
- touch(os.path.join(self.prefix, 'post-install.txt'))
+ touch(join_path(self.prefix, 'post-install.txt'))
diff --git a/var/spack/repos/builtin.mock/packages/openblas-with-lapack/package.py b/var/spack/repos/builtin.mock/packages/openblas-with-lapack/package.py
index 7bfea42eff..997049af56 100644
--- a/var/spack/repos/builtin.mock/packages/openblas-with-lapack/package.py
+++ b/var/spack/repos/builtin.mock/packages/openblas-with-lapack/package.py
@@ -15,6 +15,3 @@ class OpenblasWithLapack(Package):
provides('lapack')
provides('blas')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/openblas/package.py b/var/spack/repos/builtin.mock/packages/openblas/package.py
index 6947e3156f..ff4bda9a27 100644
--- a/var/spack/repos/builtin.mock/packages/openblas/package.py
+++ b/var/spack/repos/builtin.mock/packages/openblas/package.py
@@ -14,6 +14,3 @@ class Openblas(Package):
version('0.2.15', 'b1190f3d3471685f17cfd1ec1d252ac9')
provides('blas')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/optional-dep-test-2/package.py b/var/spack/repos/builtin.mock/packages/optional-dep-test-2/package.py
index 585840a5b3..64437ae337 100644
--- a/var/spack/repos/builtin.mock/packages/optional-dep-test-2/package.py
+++ b/var/spack/repos/builtin.mock/packages/optional-dep-test-2/package.py
@@ -19,6 +19,3 @@ class OptionalDepTest2(Package):
depends_on('optional-dep-test', when='+odt')
depends_on('optional-dep-test+mpi', when='+mpi')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/optional-dep-test-3/package.py b/var/spack/repos/builtin.mock/packages/optional-dep-test-3/package.py
index 3529e9df5f..1261a117a9 100644
--- a/var/spack/repos/builtin.mock/packages/optional-dep-test-3/package.py
+++ b/var/spack/repos/builtin.mock/packages/optional-dep-test-3/package.py
@@ -18,6 +18,3 @@ class OptionalDepTest3(Package):
depends_on('a', when='~var')
depends_on('b', when='+var')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/optional-dep-test/package.py b/var/spack/repos/builtin.mock/packages/optional-dep-test/package.py
index 363512c072..9a39e1a624 100644
--- a/var/spack/repos/builtin.mock/packages/optional-dep-test/package.py
+++ b/var/spack/repos/builtin.mock/packages/optional-dep-test/package.py
@@ -30,6 +30,3 @@ class OptionalDepTest(Package):
depends_on('mpi', when='^g')
depends_on('mpi', when='+mpi')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/othervirtual/package.py b/var/spack/repos/builtin.mock/packages/othervirtual/package.py
index d4cd1f4a2c..c2c091af5c 100644
--- a/var/spack/repos/builtin.mock/packages/othervirtual/package.py
+++ b/var/spack/repos/builtin.mock/packages/othervirtual/package.py
@@ -13,6 +13,3 @@ class Othervirtual(Package):
version('1.0', '67890abcdef1234567890abcdef12345')
provides('stuff')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/override-context-templates/package.py b/var/spack/repos/builtin.mock/packages/override-context-templates/package.py
index d67eabdeb8..08f4d08828 100644
--- a/var/spack/repos/builtin.mock/packages/override-context-templates/package.py
+++ b/var/spack/repos/builtin.mock/packages/override-context-templates/package.py
@@ -18,6 +18,3 @@ class OverrideContextTemplates(Package):
tcl_template = 'extension.tcl'
tcl_context = {'sentence': "sentence from package"}
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/override-module-templates/package.py b/var/spack/repos/builtin.mock/packages/override-module-templates/package.py
index 90061399e8..2ed7d0e1e0 100644
--- a/var/spack/repos/builtin.mock/packages/override-module-templates/package.py
+++ b/var/spack/repos/builtin.mock/packages/override-module-templates/package.py
@@ -14,6 +14,3 @@ class OverrideModuleTemplates(Package):
tcl_template = 'override.txt'
lmod_template = 'override.txt'
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/patch-a-dependency/package.py b/var/spack/repos/builtin.mock/packages/patch-a-dependency/package.py
index aff2bf9722..cd5c54ad5f 100644
--- a/var/spack/repos/builtin.mock/packages/patch-a-dependency/package.py
+++ b/var/spack/repos/builtin.mock/packages/patch-a-dependency/package.py
@@ -15,6 +15,3 @@ class PatchADependency(Package):
version('1.0', '0123456789abcdef0123456789abcdef')
depends_on('libelf', patches=patch('libelf.patch'))
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/patch-several-dependencies/package.py b/var/spack/repos/builtin.mock/packages/patch-several-dependencies/package.py
index 9d190c7658..3cda48479e 100644
--- a/var/spack/repos/builtin.mock/packages/patch-several-dependencies/package.py
+++ b/var/spack/repos/builtin.mock/packages/patch-several-dependencies/package.py
@@ -36,6 +36,3 @@ class PatchSeveralDependencies(Package):
archive_sha256='abcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcd',
sha256='1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd')
])
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/patch/package.py b/var/spack/repos/builtin.mock/packages/patch/package.py
index 8219a4fbb6..d0283cc33c 100644
--- a/var/spack/repos/builtin.mock/packages/patch/package.py
+++ b/var/spack/repos/builtin.mock/packages/patch/package.py
@@ -21,6 +21,3 @@ class Patch(Package):
patch('bar.patch', when='@2:')
patch('baz.patch')
patch('biz.patch', when='@1.0.1:1.0.2')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/perl/package.py b/var/spack/repos/builtin.mock/packages/perl/package.py
index b9f75270bf..2f495c1a97 100644
--- a/var/spack/repos/builtin.mock/packages/perl/package.py
+++ b/var/spack/repos/builtin.mock/packages/perl/package.py
@@ -13,6 +13,3 @@ class Perl(Package):
extendable = True
version('0.0.0', 'hash')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/preferred-test/package.py b/var/spack/repos/builtin.mock/packages/preferred-test/package.py
index 09e5aa034a..55121c76a6 100644
--- a/var/spack/repos/builtin.mock/packages/preferred-test/package.py
+++ b/var/spack/repos/builtin.mock/packages/preferred-test/package.py
@@ -15,6 +15,3 @@ class PreferredTest(Package):
version('0.2.16', 'b1190f3d3471685f17cfd1ec1d252ac9')
version('0.2.15', 'b1190f3d3471685f17cfd1ec1d252ac9', preferred=True)
version('0.2.14', 'b1190f3d3471685f17cfd1ec1d252ac9')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/python/package.py b/var/spack/repos/builtin.mock/packages/python/package.py
index b15a9fd38a..e846572121 100644
--- a/var/spack/repos/builtin.mock/packages/python/package.py
+++ b/var/spack/repos/builtin.mock/packages/python/package.py
@@ -19,6 +19,3 @@ class Python(Package):
version('2.7.10', 'd7547558fd673bd9d38e2108c6b42521')
version('2.7.9', '5eebcaa0030dc4061156d3429657fb83')
version('2.7.8', 'd4bca0159acb0b44a781292b5231936f')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/simple-inheritance/package.py b/var/spack/repos/builtin.mock/packages/simple-inheritance/package.py
index a6f29f5ed5..8d65570899 100644
--- a/var/spack/repos/builtin.mock/packages/simple-inheritance/package.py
+++ b/var/spack/repos/builtin.mock/packages/simple-inheritance/package.py
@@ -30,6 +30,3 @@ class SimpleInheritance(BaseWithDirectives):
depends_on('openblas', when='+openblas')
provides('lapack', when='+openblas')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/singlevalue-variant-dependent/package.py b/var/spack/repos/builtin.mock/packages/singlevalue-variant-dependent/package.py
index 640594f0ed..5507fbdc21 100644
--- a/var/spack/repos/builtin.mock/packages/singlevalue-variant-dependent/package.py
+++ b/var/spack/repos/builtin.mock/packages/singlevalue-variant-dependent/package.py
@@ -15,6 +15,3 @@ class SinglevalueVariantDependent(Package):
version('1.0', '0123456789abcdef0123456789abcdef')
depends_on('multivalue_variant fee=baz')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/svn-test/package.py b/var/spack/repos/builtin.mock/packages/svn-test/package.py
index c982bb0649..dec6d4a413 100644
--- a/var/spack/repos/builtin.mock/packages/svn-test/package.py
+++ b/var/spack/repos/builtin.mock/packages/svn-test/package.py
@@ -11,6 +11,3 @@ class SvnTest(Package):
url = "http://www.example.com/svn-test-1.0.tar.gz"
version('svn', svn='to-be-filled-in-by-test')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/svn-top-level/package.py b/var/spack/repos/builtin.mock/packages/svn-top-level/package.py
index 83588e97b3..0da7f9656d 100644
--- a/var/spack/repos/builtin.mock/packages/svn-top-level/package.py
+++ b/var/spack/repos/builtin.mock/packages/svn-top-level/package.py
@@ -11,6 +11,3 @@ class SvnTopLevel(Package):
svn = 'https://example.com/some/svn/repo'
version('1.0')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/url-list-test/package.py b/var/spack/repos/builtin.mock/packages/url-list-test/package.py
index 4a9bd1fc56..02e1f4747f 100644
--- a/var/spack/repos/builtin.mock/packages/url-list-test/package.py
+++ b/var/spack/repos/builtin.mock/packages/url-list-test/package.py
@@ -5,7 +5,6 @@
from spack import *
-import os
import spack.paths
@@ -13,7 +12,7 @@ class UrlListTest(Package):
"""Mock package with url_list."""
homepage = "http://www.url-list-example.com"
- web_data_path = os.path.join(spack.paths.test_path, 'data', 'web')
+ web_data_path = join_path(spack.paths.test_path, 'data', 'web')
url = 'file://' + web_data_path + '/foo-0.0.0.tar.gz'
list_url = 'file://' + web_data_path + '/index.html'
list_depth = 3
@@ -25,6 +24,3 @@ class UrlListTest(Package):
version('2.0.0b2', 'abc200b2')
version('3.0a1', 'abc30a1')
version('4.5-rc5', 'abc45rc5')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/url-test/package.py b/var/spack/repos/builtin.mock/packages/url-test/package.py
index 6a7d2b9416..1deee5515d 100644
--- a/var/spack/repos/builtin.mock/packages/url-test/package.py
+++ b/var/spack/repos/builtin.mock/packages/url-test/package.py
@@ -11,6 +11,3 @@ class UrlTest(Package):
homepage = "http://www.url-fetch-example.com"
version('test', url='to-be-filled-in-by-test')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/when-directives-false/package.py b/var/spack/repos/builtin.mock/packages/when-directives-false/package.py
index 105caeccdf..f701e753de 100644
--- a/var/spack/repos/builtin.mock/packages/when-directives-false/package.py
+++ b/var/spack/repos/builtin.mock/packages/when-directives-false/package.py
@@ -23,6 +23,3 @@ class WhenDirectivesFalse(Package):
resource(url="http://www.example.com/example-1.0-resource.tar.gz",
md5='0123456789abcdef0123456789abcdef',
when=False)
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/when-directives-true/package.py b/var/spack/repos/builtin.mock/packages/when-directives-true/package.py
index 50dd5a00f4..6f8f7ed7c1 100644
--- a/var/spack/repos/builtin.mock/packages/when-directives-true/package.py
+++ b/var/spack/repos/builtin.mock/packages/when-directives-true/package.py
@@ -23,6 +23,3 @@ class WhenDirectivesTrue(Package):
resource(url="http://www.example.com/example-1.0-resource.tar.gz",
md5='0123456789abcdef0123456789abcdef',
when=True)
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin.mock/packages/zmpi/package.py b/var/spack/repos/builtin.mock/packages/zmpi/package.py
index f0060b319e..c9caa9a20f 100644
--- a/var/spack/repos/builtin.mock/packages/zmpi/package.py
+++ b/var/spack/repos/builtin.mock/packages/zmpi/package.py
@@ -16,6 +16,3 @@ class Zmpi(Package):
provides('mpi@:10.0')
depends_on('fake')
-
- def install(self, spec, prefix):
- pass
diff --git a/var/spack/repos/builtin/packages/apple-libunwind/package.py b/var/spack/repos/builtin/packages/apple-libunwind/package.py
index 0b7d8cc1cb..9d1db5cca8 100644
--- a/var/spack/repos/builtin/packages/apple-libunwind/package.py
+++ b/var/spack/repos/builtin/packages/apple-libunwind/package.py
@@ -43,7 +43,8 @@ class AppleLibunwind(Package):
raise InstallError(msg)
def install(self, spec, prefix):
- pass
+ # sanity_check_prefix requires something in the install directory
+ mkdirp(prefix.lib)
@property
def libs(self):