author    Nathan Hanford <nhanford@llnl.gov>  2024-10-10 15:48:58 -0700
committer GitHub <noreply@github.com>         2024-10-10 15:48:58 -0700
commit    af62a062cc18626ccf039ee5b4bef5a7d04cc118 (patch)
tree      9750eab2a94f1a9c72a2f83353a64a6ffd74e531 /lib
parent    e6114f544d484762160f463625b526d2dc53cab8 (diff)
Installer: rewire spliced specs via RewireTask (#39136)
This PR allows users to configure the concretizer to explicitly splice in a replacement for an abstract spec.
    concretizer:
      splice:
        explicit:
        - target: mpi
          replacement: mpich/abcdef
          transitive: true
This config block means "for any spec that concretizes to use mpi, splice in mpich/abcdef in place of the mpi provider it would naturally concretize to use." See #20262, #26873, #27919, and #46382 for the PRs that enabled splicing in the Spec object. This PR is the first place the splice method is used in a user-facing manner. See https://spack.readthedocs.io/en/latest/spack.html#spack.spec.Spec.splice for more information on splicing.
This will allow users to reuse generic public binaries while splicing in the performant local mpi implementation on their system.
In the config file, the target may be any abstract spec. The replacement must be a spec that includes an abstract hash `/abcdef`. The transitive key is optional, defaulting to true if left out.
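The rules in the paragraph above can be sketched as a small validation helper. This is illustrative only — `normalize_splice_entry` is a hypothetical name, not part of Spack's schema code:

```python
import re

def normalize_splice_entry(entry: dict) -> dict:
    """Validate one concretizer:splice:explicit entry and apply defaults.

    Hypothetical helper modeling the rules above: 'target' may be any
    abstract spec, 'replacement' must carry an abstract hash suffix like
    '/abcdef', and 'transitive' defaults to true when left out.
    """
    if "target" not in entry or "replacement" not in entry:
        raise ValueError("both 'target' and 'replacement' are required")
    if not re.search(r"/[a-z0-9]+$", entry["replacement"]):
        raise ValueError("replacement must include an abstract hash, e.g. mpich/abcdef")
    return {
        "target": entry["target"],
        "replacement": entry["replacement"],
        "transitive": entry.get("transitive", True),  # optional, defaults to true
    }

# 'transitive' omitted, so it defaults to True:
entry = normalize_splice_entry({"target": "mpi", "replacement": "mpich/abcdef"})
```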
Two important items to note:
1. When writing explicit splice config, the user is in charge of ensuring that the replacement specs they use are binary compatible with whatever targets they replace. In practice, this will likely require either specific knowledge of what packages will be installed by the user's workflow, or somewhat more specific abstract "target" specs for splicing, to ensure binary compatibility.
2. Explicit splices can cause the output of the concretizer not to satisfy the input. For example, using the config above, consider a package `hdf5/xyzabc` in a binary cache that depends on mvapich2. The command `spack install hdf5/xyzabc` will instead install the result of splicing `mpich/abcdef` into `hdf5/xyzabc` in place of whatever mvapich2 spec it previously depended on. When this occurs, a warning message is printed: `Warning: explicit splice configuration has caused the concretized spec {concrete_spec} not to satisfy the input spec {input_spec}`.
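Item 2 can be modeled with plain sets. This is a toy illustration, not Spack's actual `satisfies` logic, and `splice_warning` is a hypothetical name:

```python
from typing import Optional, Set

def splice_warning(input_spec: str, input_deps: Set[str], concrete_deps: Set[str]) -> Optional[str]:
    """Toy model: the concretized spec 'satisfies' the input only if every
    dependency named in the input survives in the concrete DAG."""
    if input_deps - concrete_deps:
        return (
            "Warning: explicit splice configuration has caused the concretized "
            f"spec not to satisfy the input spec {input_spec}"
        )
    return None

# hdf5 ^mvapich2 concretizes, then mpich is spliced in for mvapich2,
# so the input's mvapich2 requirement is no longer satisfied:
msg = splice_warning("hdf5 ^mvapich2", {"mvapich2"}, {"mpich"})
```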
Highlighted technical details of implementation:
1. This PR required modifying the installer to have two separate types of Tasks, `RewireTask` and `BuildTask`. Spliced specs are queued as `RewireTask` and standard specs are queued as `BuildTask`. Each spliced spec retains a pointer to its build_spec for provenance. If a RewireTask is dequeued and the associated `build_spec` is neither available in the install_tree nor from a binary cache, the RewireTask is requeued with a new dependency on a BuildTask for the build_spec, and BuildTasks are queued for the build spec and its dependencies.
2. Relocation is modified so that a spack binary can be simultaneously installed and rewired. This ensures that installing the build_spec is not necessary when splicing from a binary cache.
3. The splicing model is modified to more accurately represent build dependencies -- that is, spliced specs do not have build dependencies, as spliced specs are never built. Their build_specs retain the build dependencies, as they may be built as part of installing the spliced spec.
4. There were vestiges of the compiler bootstrapping logic that were not removed in #46237 because I asked alalazo to leave them in to avoid making the rebase for this PR harder than it needed to be. Those last remains are removed in this PR.
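The requeue flow in item 1 can be sketched with a toy priority queue. All names here are illustrative, not the installer's real API:

```python
import heapq
import itertools

installed = set()      # ids already in the install tree
binary_cache = set()   # ids available from a binary cache

class TaskQueue:
    def __init__(self):
        self._heap, self._seq, self.tasks = [], itertools.count(), {}

    def push(self, task):
        # priority: number of uninstalled dependencies, then insertion order
        self.tasks[task["id"]] = task
        heapq.heappush(self._heap, (len(task["deps"]), next(self._seq), task["id"]))

def dequeue_rewire(queue, rewire_task):
    """If the build_spec is neither installed nor available as a binary,
    requeue the RewireTask with a new dependency on a BuildTask that will
    install the build_spec."""
    build_spec = rewire_task["build_spec"]
    if build_spec in installed or build_spec in binary_cache:
        return "rewired"
    queue.push({"id": build_spec, "kind": "build", "deps": set()})
    rewire_task["deps"].add(build_spec)
    queue.push(rewire_task)
    return "requeued"

queue = TaskQueue()
spliced = {"id": "hdf5-spliced", "kind": "rewire", "deps": set(), "build_spec": "hdf5-xyzabc"}
result = dequeue_rewire(queue, spliced)  # build_spec unavailable, so requeued
```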
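Point 3 above — spliced specs drop their build edges because they are never built — can be illustrated with a minimal edge-type model (not Spack's real `deptypes` machinery):

```python
# Each dependency edge carries a set of dependency types.
build_spec_edges = [
    ("hdf5", "mpich", {"build", "link"}),
    ("hdf5", "cmake", {"build"}),
    ("hdf5", "zlib", {"link", "run"}),
]

def spliced_edges(edges):
    """A spliced spec is never built, so strip the 'build' type from every
    edge and discard edges that were build-only. The build_spec keeps the
    original edge list."""
    out = []
    for parent, dep, types in edges:
        kept = types - {"build"}
        if kept:
            out.append((parent, dep, kept))
    return out

edges = spliced_edges(build_spec_edges)
# cmake (build-only) disappears; mpich keeps only its link edge
```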
Co-authored-by: Nathan Hanford <hanford1@llnl.gov>
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
Diffstat (limited to 'lib')
lib/spack/docs/build_settings.rst          |  71
lib/spack/docs/images/splices.png          | bin 0 -> 366972 bytes
lib/spack/spack/binary_distribution.py     |  75
lib/spack/spack/environment/environment.py |  13
lib/spack/spack/installer.py               | 443
lib/spack/spack/rewiring.py                |  19
lib/spack/spack/schema/concretizer.py      |  20
lib/spack/spack/solver/asp.py              |  35
lib/spack/spack/spec.py                    | 117
lib/spack/spack/test/bindist.py            |  49
lib/spack/spack/test/buildtask.py          |   4
lib/spack/spack/test/cmd/env.py            |  33
lib/spack/spack/test/concretize.py         |  24
lib/spack/spack/test/conftest.py           |  35
lib/spack/spack/test/installer.py          | 201
lib/spack/spack/test/rewiring.py           |  45
lib/spack/spack/test/spec_semantics.py     |  59
17 files changed, 872 insertions, 371 deletions
diff --git a/lib/spack/docs/build_settings.rst b/lib/spack/docs/build_settings.rst
index d545c70d18..97c81bf17a 100644
--- a/lib/spack/docs/build_settings.rst
+++ b/lib/spack/docs/build_settings.rst
@@ -166,3 +166,74 @@ while `py-numpy` still needs an older version:
 Up to Spack v0.20 ``duplicates:strategy:none`` was the default (and only)
 behavior. From Spack v0.21 the default behavior is
 ``duplicates:strategy:minimal``.
+
+--------
+Splicing
+--------
+
+The ``splice`` key covers config attributes for splicing specs in the solver.
+
+"Splicing" is a method for replacing a dependency with another spec
+that provides the same package or virtual. There are two types of
+splices, referring to different behaviors for shared dependencies
+between the root spec and the new spec replacing a dependency:
+"transitive" and "intransitive". A "transitive" splice is one that
+resolves all conflicts by taking the dependency from the new node. An
+"intransitive" splice is one that resolves all conflicts by taking the
+dependency from the original root. From a theory perspective, hybrid
+splices are possible but are not modeled by Spack.
+
+All spliced specs retain a ``build_spec`` attribute that points to the
+original Spec before any splice occurred. The ``build_spec`` for a
+non-spliced spec is itself.
+
+The figure below shows examples of transitive and intransitive splices:
+
+.. figure:: images/splices.png
+   :align: center
+
+The concretizer can be configured to explicitly splice particular
+replacements for a target spec. Splicing will allow the user to make
+use of generically built public binary caches, while swapping in
+highly optimized local builds for performance critical components
+and/or components that interact closely with the specific hardware
+details of the system. The most prominent candidate for splicing is
+MPI providers.
MPI packages have relatively well-understood ABI
+characteristics, and most High Performance Computing facilities deploy
+highly optimized MPI packages tailored to their particular
+hardware. The following config block configures Spack to replace
+whatever MPI provider each spec was concretized to use with the
+particular package of ``mpich`` with the hash that begins ``abcdef``.
+
+.. code-block:: yaml
+
+   concretizer:
+     splice:
+       explicit:
+       - target: mpi
+         replacement: mpich/abcdef
+         transitive: false
+
+.. warning::
+
+   When configuring an explicit splice, you as the user take on the
+   responsibility for ensuring ABI compatibility between the specs
+   matched by the target and the replacement you provide. If they are
+   not compatible, Spack will not warn you and your application will
+   fail to run.
+
+The ``target`` field of an explicit splice can be any abstract
+spec. The ``replacement`` field must be a spec that includes the hash
+of a concrete spec, and the replacement must either be the same
+package as the target, provide the virtual that is the target, or
+provide a virtual that the target provides. The ``transitive`` field
+is optional -- by default, splices will be transitive.
+
+.. note::
+
+   With explicit splices configured, it is possible for Spack to
+   concretize to a spec that does not satisfy the input. For example,
+   with the config above ``hdf5 ^mvapich2`` will concretize to use
+   ``mpich/abcdef`` instead of ``mvapich2`` as the MPI provider. Spack
+   will warn the user in this case, but will not fail the
+   concretization.
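The transitive/intransitive distinction documented above boils down to which side wins when the root and the replacement share a dependency. A toy resolution over flat name-to-spec maps (a hypothetical helper, not `Spec.splice` itself):

```python
def resolve_shared_deps(root_deps, replacement_deps, transitive):
    """Toy model of conflict resolution during a splice.

    root_deps / replacement_deps map dependency name -> concrete spec string.
    A transitive splice takes conflicting deps from the new (replacement)
    node; an intransitive splice takes them from the original root.
    """
    merged = dict(root_deps)
    for name, spec in replacement_deps.items():
        if name not in merged or transitive:
            merged[name] = spec
    return merged

root = {"zlib": "zlib@1.2", "libfabric": "libfabric@1.13"}
repl = {"zlib": "zlib@1.3"}
transitive_result = resolve_shared_deps(root, repl, transitive=True)     # zlib from the new node
intransitive_result = resolve_shared_deps(root, repl, transitive=False)  # zlib from the root
```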
diff --git a/lib/spack/docs/images/splices.png b/lib/spack/docs/images/splices.png Binary files differnew file mode 100644 index 0000000000..b2a3e99837 --- /dev/null +++ b/lib/spack/docs/images/splices.png diff --git a/lib/spack/spack/binary_distribution.py b/lib/spack/spack/binary_distribution.py index baa04f9df6..1f0d33f00e 100644 --- a/lib/spack/spack/binary_distribution.py +++ b/lib/spack/spack/binary_distribution.py @@ -35,6 +35,7 @@ from llnl.util.symlink import readlink import spack.caches import spack.config as config import spack.database as spack_db +import spack.deptypes as dt import spack.error import spack.hash_types as ht import spack.hooks @@ -712,15 +713,32 @@ def get_buildfile_manifest(spec): return data -def hashes_to_prefixes(spec): - """Return a dictionary of hashes to prefixes for a spec and its deps, excluding externals""" - return { - s.dag_hash(): str(s.prefix) +def deps_to_relocate(spec): + """Return the transitive link and direct run dependencies of the spec. + + This is a special traversal for dependencies we need to consider when relocating a package. + + Package binaries, scripts, and other files may refer to the prefixes of dependencies, so + we need to rewrite those locations when dependencies are in a different place at install time + than they were at build time. + + This traversal covers transitive link dependencies and direct run dependencies because: + + 1. Spack adds RPATHs for transitive link dependencies so that packages can find needed + dependency libraries. + 2. Packages may call any of their *direct* run dependencies (and may bake their paths into + binaries or scripts), so we also need to search for run dependency prefixes when relocating. + + This returns a deduplicated list of transitive link dependencies and direct run dependencies. 
+ """ + deps = [ + s for s in itertools.chain( spec.traverse(root=True, deptype="link"), spec.dependencies(deptype="run") ) if not s.external - } + ] + return llnl.util.lang.dedupe(deps, key=lambda s: s.dag_hash()) def get_buildinfo_dict(spec): @@ -736,7 +754,7 @@ def get_buildinfo_dict(spec): "relocate_binaries": manifest["binary_to_relocate"], "relocate_links": manifest["link_to_relocate"], "hardlinks_deduped": manifest["hardlinks_deduped"], - "hash_to_prefix": hashes_to_prefixes(spec), + "hash_to_prefix": {d.dag_hash(): str(d.prefix) for d in deps_to_relocate(spec)}, } @@ -1631,7 +1649,6 @@ def _oci_push( Dict[str, spack.oci.oci.Blob], List[Tuple[Spec, BaseException]], ]: - # Spec dag hash -> blob checksums: Dict[str, spack.oci.oci.Blob] = {} @@ -2201,11 +2218,36 @@ def relocate_package(spec): # First match specific prefix paths. Possibly the *local* install prefix # of some dependency is in an upstream, so we cannot assume the original # spack store root can be mapped uniformly to the new spack store root. - for dag_hash, new_dep_prefix in hashes_to_prefixes(spec).items(): - if dag_hash in hash_to_old_prefix: - old_dep_prefix = hash_to_old_prefix[dag_hash] - prefix_to_prefix_bin[old_dep_prefix] = new_dep_prefix - prefix_to_prefix_text[old_dep_prefix] = new_dep_prefix + # + # If the spec is spliced, we need to handle the simultaneous mapping + # from the old install_tree to the new install_tree and from the build_spec + # to the spliced spec. + # Because foo.build_spec is foo for any non-spliced spec, we can simplify + # by checking for spliced-in nodes by checking for nodes not in the build_spec + # without any explicit check for whether the spec is spliced. + # An analog in this algorithm is any spec that shares a name or provides the same virtuals + # in the context of the relevant root spec. This ensures that the analog for a spec s + # is the spec that s replaced when we spliced. 
+ relocation_specs = deps_to_relocate(spec) + build_spec_ids = set(id(s) for s in spec.build_spec.traverse(deptype=dt.ALL & ~dt.BUILD)) + for s in relocation_specs: + analog = s + if id(s) not in build_spec_ids: + analogs = [ + d + for d in spec.build_spec.traverse(deptype=dt.ALL & ~dt.BUILD) + if s._splice_match(d, self_root=spec, other_root=spec.build_spec) + ] + if analogs: + # Prefer same-name analogs and prefer higher versions + # This matches the preferences in Spec.splice, so we will find same node + analog = max(analogs, key=lambda a: (a.name == s.name, a.version)) + + lookup_dag_hash = analog.dag_hash() + if lookup_dag_hash in hash_to_old_prefix: + old_dep_prefix = hash_to_old_prefix[lookup_dag_hash] + prefix_to_prefix_bin[old_dep_prefix] = str(s.prefix) + prefix_to_prefix_text[old_dep_prefix] = str(s.prefix) # Only then add the generic fallback of install prefix -> install prefix. prefix_to_prefix_text[old_prefix] = new_prefix @@ -2543,10 +2585,10 @@ def install_root_node(spec, unsigned=False, force=False, sha256=None): warnings.warn("Package for spec {0} already installed.".format(spec.format())) return - download_result = download_tarball(spec, unsigned) + download_result = download_tarball(spec.build_spec, unsigned) if not download_result: msg = 'download of binary cache file for spec "{0}" failed' - raise RuntimeError(msg.format(spec.format())) + raise RuntimeError(msg.format(spec.build_spec.format())) if sha256: checker = spack.util.crypto.Checker(sha256) @@ -2565,6 +2607,11 @@ def install_root_node(spec, unsigned=False, force=False, sha256=None): with spack.util.path.filter_padding(): tty.msg('Installing "{0}" from a buildcache'.format(spec.format())) extract_tarball(spec, download_result, force) + spec.package.windows_establish_runtime_linkage() + if spec.spliced: # overwrite old metadata with new + spack.store.STORE.layout.write_spec( + spec, spack.store.STORE.layout.spec_file_path(spec) + ) spack.hooks.post_install(spec, False) 
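The analog selection performed above for spliced nodes — prefer same-name analogs, then higher versions, mirroring the preferences in `Spec.splice` — can be reproduced standalone with a stand-in node type (a sketch, not Spack's `Spec` class):

```python
from dataclasses import dataclass, field
from typing import FrozenSet, List

@dataclass(frozen=True)
class Node:
    """Stand-in for a spec node: a name, a comparable version tuple,
    and the set of virtuals it provides."""
    name: str
    version: tuple
    virtuals: FrozenSet[str] = field(default_factory=frozenset)

def pick_analog(spliced_in: Node, build_spec_nodes: List[Node]) -> Node:
    """Among build_spec nodes that share a name or provide a common virtual
    with the spliced-in node, prefer same-name analogs and higher versions."""
    matches = [
        c for c in build_spec_nodes
        if c.name == spliced_in.name or (c.virtuals & spliced_in.virtuals)
    ]
    return max(matches, key=lambda a: (a.name == spliced_in.name, a.version))

mpich = Node("mpich", (4, 2), frozenset({"mpi"}))
candidates = [
    Node("mvapich2", (2, 3), frozenset({"mpi"})),
    Node("mpich", (3, 4), frozenset({"mpi"})),
]
analog = pick_analog(mpich, candidates)  # the same-name mpich node is preferred
```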
spack.store.STORE.db.add(spec) diff --git a/lib/spack/spack/environment/environment.py b/lib/spack/spack/environment/environment.py index 4f5cd14df8..8837e2cecd 100644 --- a/lib/spack/spack/environment/environment.py +++ b/lib/spack/spack/environment/environment.py @@ -2165,6 +2165,13 @@ class Environment: # Assumes no legacy formats, since this was just created. spec_dict[ht.dag_hash.name] = s.dag_hash() concrete_specs[s.dag_hash()] = spec_dict + + if s.build_spec is not s: + for d in s.build_spec.traverse(): + build_spec_dict = d.node_dict_with_hashes(hash=ht.dag_hash) + build_spec_dict[ht.dag_hash.name] = d.dag_hash() + concrete_specs[d.dag_hash()] = build_spec_dict + return concrete_specs def _concrete_roots_dict(self): @@ -2324,7 +2331,7 @@ class Environment: specs_by_hash[lockfile_key] = spec # Second pass: For each spec, get its dependencies from the node dict - # and add them to the spec + # and add them to the spec, including build specs for lockfile_key, node_dict in json_specs_by_hash.items(): name, data = reader.name_and_data(node_dict) for _, dep_hash, deptypes, _, virtuals in reader.dependencies_from_node_dict(data): @@ -2332,6 +2339,10 @@ class Environment: specs_by_hash[dep_hash], depflag=dt.canonicalize(deptypes), virtuals=virtuals ) + if "build_spec" in node_dict: + _, bhash, _ = reader.extract_build_spec_info_from_node_dict(node_dict) + specs_by_hash[lockfile_key]._build_spec = specs_by_hash[bhash] + # Traverse the root specs one at a time in the order they appear. # The first time we see each DAG hash, that's the one we want to # keep. This is only required as long as we support older lockfile diff --git a/lib/spack/spack/installer.py b/lib/spack/spack/installer.py index e4a5c33dd3..f1b33f1660 100644 --- a/lib/spack/spack/installer.py +++ b/lib/spack/spack/installer.py @@ -2,8 +2,7 @@ # Spack Project Developers. See the top-level COPYRIGHT file for details. 
# # SPDX-License-Identifier: (Apache-2.0 OR MIT) -""" -This module encapsulates package installation functionality. +"""This module encapsulates package installation functionality. The PackageInstaller coordinates concurrent builds of packages for the same Spack instance by leveraging the dependency DAG and file system locks. It @@ -17,13 +16,14 @@ of separate packages associated with a spec. File system locks enable coordination such that no two processes attempt to build the same or a failed dependency package. -Failures to install dependency packages result in removal of their dependents' -build tasks from the current process. A failure file is also written (and -locked) so that other processes can detect the failure and adjust their build -tasks accordingly. +If a dependency package fails to install, its dependents' tasks will be +removed from the installing process's queue. A failure file is also written +and locked. Other processes use this file to detect the failure and dequeue +its dependents. This module supports the coordination of local and distributed concurrent installations of packages in a Spack instance. 
+ """ import copy @@ -59,6 +59,7 @@ import spack.mirror import spack.package_base import spack.package_prefs as prefs import spack.repo +import spack.rewiring import spack.spec import spack.store import spack.util.executable @@ -110,13 +111,22 @@ def _write_timer_json(pkg, timer, cache): return -class InstallAction: +class ExecuteResult(enum.Enum): + # Task succeeded + SUCCESS = enum.auto() + # Task failed + FAILED = enum.auto() + # Task is missing build spec and will be requeued + MISSING_BUILD_SPEC = enum.auto() + + +class InstallAction(enum.Enum): #: Don't perform an install - NONE = 0 + NONE = enum.auto() #: Do a standard install - INSTALL = 1 + INSTALL = enum.auto() #: Do an overwrite install - OVERWRITE = 2 + OVERWRITE = enum.auto() class InstallStatus: @@ -440,7 +450,7 @@ def _process_binary_cache_tarball( """ with timer.measure("fetch"): download_result = binary_distribution.download_tarball( - pkg.spec, unsigned, mirrors_for_spec + pkg.spec.build_spec, unsigned, mirrors_for_spec ) if download_result is None: @@ -451,6 +461,11 @@ def _process_binary_cache_tarball( with timer.measure("install"), spack.util.path.filter_padding(): binary_distribution.extract_tarball(pkg.spec, download_result, force=False, timer=timer) + if pkg.spec.spliced: # overwrite old metadata with new + spack.store.STORE.layout.write_spec( + pkg.spec, spack.store.STORE.layout.spec_file_path(pkg.spec) + ) + if hasattr(pkg, "_post_buildcache_install_hook"): pkg._post_buildcache_install_hook() @@ -686,7 +701,7 @@ def log(pkg: "spack.package_base.PackageBase") -> None: def package_id(spec: "spack.spec.Spec") -> str: """A "unique" package identifier for installation purposes - The identifier is used to track build tasks, locks, install, and + The identifier is used to track tasks, locks, install, and failure statuses. 
The identifier needs to distinguish between combinations of compilers @@ -745,14 +760,14 @@ class BuildRequest: ) def __repr__(self) -> str: - """Returns a formal representation of the build request.""" + """Return a formal representation of the build request.""" rep = f"{self.__class__.__name__}(" for attr, value in self.__dict__.items(): rep += f"{attr}={value.__repr__()}, " return f"{rep.strip(', ')})" def __str__(self) -> str: - """Returns a printable version of the build request.""" + """Return a printable version of the build request.""" return f"package={self.pkg.name}, install_args={self.install_args}" def _add_default_args(self) -> None: @@ -849,8 +864,8 @@ class BuildRequest: yield dep -class BuildTask: - """Class for representing the build task for a package.""" +class Task: + """Base class for representing a task for a package.""" def __init__( self, @@ -864,13 +879,11 @@ class BuildTask: installed: Set[str] = set(), ): """ - Instantiate a build task for a package. + Instantiate a task for a package. Args: - pkg: the package to be built and installed and whose spec is - concrete + pkg: the package to be built and installed request: the associated install request - compiler: whether task is for a bootstrap compiler start: the initial start time for the package, in seconds attempts: the number of attempts to install the package, which should be 0 when the task is initially instantiated @@ -909,12 +922,12 @@ class BuildTask: # The initial build task cannot have status "removed". 
if attempts == 0 and status == BuildStatus.REMOVED: raise spack.error.InstallError( - f"Cannot create a build task for {self.pkg_id} with status '{status}'", pkg=pkg + f"Cannot create a task for {self.pkg_id} with status '{status}'", pkg=pkg ) self.status = status - # Package is associated with a bootstrap compiler - self.compiler = compiler + # cache the PID, which is used for distributed build messages in self.execute + self.pid = os.getpid() # The initial start time for processing the spec self.start = start @@ -945,7 +958,7 @@ class BuildTask: ) # List of uninstalled dependencies, which is used to establish - # the priority of the build task. + # the priority of the task. self.uninstalled_deps = set( pkg_id for pkg_id in self.dependencies if pkg_id not in installed ) @@ -954,6 +967,13 @@ class BuildTask: self.attempts = attempts self._update() + def execute(self, install_status: InstallStatus) -> ExecuteResult: + """Execute the work of this task. + + The ``install_status`` is an ``InstallStatus`` object used to format progress reporting for + this task in the context of the full ``BuildRequest``.""" + raise NotImplementedError + def __eq__(self, other): return self.key == other.key @@ -973,14 +993,14 @@ class BuildTask: return self.key != other.key def __repr__(self) -> str: - """Returns a formal representation of the build task.""" + """Returns a formal representation of the task.""" rep = f"{self.__class__.__name__}(" for attr, value in self.__dict__.items(): rep += f"{attr}={value.__repr__()}, " return f"{rep.strip(', ')})" def __str__(self) -> str: - """Returns a printable version of the build task.""" + """Returns a printable version of the task.""" dependencies = f"#dependencies={len(self.dependencies)}" return "priority={0}, status={1}, start={2}, {3}".format( self.priority, self.status, self.start, dependencies @@ -997,8 +1017,7 @@ class BuildTask: def add_dependent(self, pkg_id: str) -> None: """ - Ensure the dependent package id is in the task's list 
so it will be - properly updated when this package is installed. + Ensure the package is in this task's ``dependents`` list. Args: pkg_id: package identifier of the dependent package @@ -1007,6 +1026,20 @@ class BuildTask: tty.debug(f"Adding {pkg_id} as a dependent of {self.pkg_id}") self.dependents.add(pkg_id) + def add_dependency(self, pkg_id, installed=False): + """ + Ensure the package is in this task's ``dependencies`` list. + + Args: + pkg_id (str): package identifier of the dependency package + installed (bool): install status of the dependency package + """ + if pkg_id != self.pkg_id and pkg_id not in self.dependencies: + tty.debug(f"Adding {pkg_id} as a depencency of {self.pkg_id}") + self.dependencies.add(pkg_id) + if not installed: + self.uninstalled_deps.add(pkg_id) + def flag_installed(self, installed: List[str]) -> None: """ Ensure the dependency is not considered to still be uninstalled. @@ -1023,6 +1056,39 @@ class BuildTask: level=2, ) + def _setup_install_dir(self, pkg: "spack.package_base.PackageBase") -> None: + """ + Create and ensure proper access controls for the install directory. + Write a small metadata file with the current spack environment. + + Args: + pkg: the package to be built and installed + """ + # Move to a module level method. + if not os.path.exists(pkg.spec.prefix): + path = spack.util.path.debug_padded_filter(pkg.spec.prefix) + tty.debug(f"Creating the installation directory {path}") + spack.store.STORE.layout.create_install_directory(pkg.spec) + else: + # Set the proper group for the prefix + group = prefs.get_package_group(pkg.spec) + if group: + fs.chgrp(pkg.spec.prefix, group) + + # Set the proper permissions. 
+ # This has to be done after group because changing groups blows + # away the sticky group bit on the directory + mode = os.stat(pkg.spec.prefix).st_mode + perms = prefs.get_package_dir_permissions(pkg.spec) + if mode != perms: + os.chmod(pkg.spec.prefix, perms) + + # Ensure the metadata path exists as well + fs.mkdirp(spack.store.STORE.layout.metadata_path(pkg.spec), mode=perms) + + # Always write host environment - we assume this can change + spack.store.STORE.layout.write_host_environment(pkg.spec) + @property def explicit(self) -> bool: return self.pkg.spec.dag_hash() in self.request.install_args.get("explicit", []) @@ -1053,7 +1119,7 @@ class BuildTask: """The key is the tuple (# uninstalled dependencies, sequence).""" return (self.priority, self.sequence) - def next_attempt(self, installed) -> "BuildTask": + def next_attempt(self, installed) -> "Task": """Create a new, updated task for the next installation attempt.""" task = copy.copy(self) task._update() @@ -1067,6 +1133,100 @@ class BuildTask: return len(self.uninstalled_deps) +class BuildTask(Task): + """Class for representing a build task for a package.""" + + def execute(self, install_status): + """ + Perform the installation of the requested spec and/or dependency + represented by the build task. 
+ """ + install_args = self.request.install_args + tests = install_args.get("tests") + unsigned = install_args.get("unsigned") + + pkg, pkg_id = self.pkg, self.pkg_id + + tty.msg(install_msg(pkg_id, self.pid, install_status)) + self.start = self.start or time.time() + self.status = BuildStatus.INSTALLING + + # Use the binary cache if requested + if self.use_cache: + if _install_from_cache(pkg, self.explicit, unsigned): + return ExecuteResult.SUCCESS + elif self.cache_only: + raise spack.error.InstallError( + "No binary found when cache-only was specified", pkg=pkg + ) + else: + tty.msg(f"No binary for {pkg_id} found: installing from source") + + pkg.run_tests = tests is True or tests and pkg.name in tests + + # hook that allows tests to inspect the Package before installation + # see unit_test_check() docs. + if not pkg.unit_test_check(): + return ExecuteResult.FAILED + + try: + # Create stage object now and let it be serialized for the child process. That + # way monkeypatch in tests works correctly. + pkg.stage + + self._setup_install_dir(pkg) + + # Create a child process to do the actual installation. + # Preserve verbosity settings across installs. + spack.package_base.PackageBase._verbose = spack.build_environment.start_build_process( + pkg, build_process, install_args + ) + + # Note: PARENT of the build process adds the new package to + # the database, so that we don't need to re-read from file. 
+ spack.store.STORE.db.add(pkg.spec, explicit=self.explicit) + except spack.error.StopPhase as e: + # A StopPhase exception means that do_install was asked to + # stop early from clients, and is not an error at this point + pid = f"{self.pid}: " if tty.show_pid() else "" + tty.debug(f"{pid}{str(e)}") + tty.debug(f"Package stage directory: {pkg.stage.source_path}") + return ExecuteResult.SUCCESS + + +class RewireTask(Task): + """Class for representing a rewire task for a package.""" + + def execute(self, install_status): + """Execute rewire task + + Rewire tasks are executed by either rewiring self.package.spec.build_spec that is already + installed or downloading and rewiring a binary for the it. + + If not available installed or as binary, return ExecuteResult.MISSING_BUILD_SPEC. + This will prompt the Installer to requeue the task with a dependency on the BuildTask + to install self.pkg.spec.build_spec + """ + oldstatus = self.status + self.status = BuildStatus.INSTALLING + tty.msg(install_msg(self.pkg_id, self.pid, install_status)) + self.start = self.start or time.time() + if not self.pkg.spec.build_spec.installed: + try: + install_args = self.request.install_args + unsigned = install_args.get("unsigned") + _process_binary_cache_tarball(self.pkg, explicit=self.explicit, unsigned=unsigned) + _print_installed_pkg(self.pkg.prefix) + return ExecuteResult.SUCCESS + except BaseException as e: + tty.error(f"Failed to rewire {self.pkg.spec} from binary. {e}") + self.status = oldstatus + return ExecuteResult.MISSING_BUILD_SPEC + spack.rewiring.rewire_node(self.pkg.spec, self.explicit) + _print_installed_pkg(self.pkg.prefix) + return ExecuteResult.SUCCESS + + class PackageInstaller: """ Class for managing the install process for a Spack instance based on a bottom-up DAG approach. 
@@ -1160,11 +1320,11 @@ class PackageInstaller: # List of build requests self.build_requests = [BuildRequest(pkg, install_args) for pkg in packages] - # Priority queue of build tasks - self.build_pq: List[Tuple[Tuple[int, int], BuildTask]] = [] + # Priority queue of tasks + self.build_pq: List[Tuple[Tuple[int, int], Task]] = [] - # Mapping of unique package ids to build task - self.build_tasks: Dict[str, BuildTask] = {} + # Mapping of unique package ids to task + self.build_tasks: Dict[str, Task] = {} # Cache of package locks for failed packages, keyed on package's ids self.failed: Dict[str, Optional[lk.Lock]] = {} @@ -1185,6 +1345,9 @@ class PackageInstaller: # fast then that option applies to all build requests. self.fail_fast = False + # Initializing all_dependencies to empty. This will be set later in _init_queue. + self.all_dependencies: Dict[str, Set[str]] = {} + def __repr__(self) -> str: """Returns a formal representation of the package installer.""" rep = f"{self.__class__.__name__}(" @@ -1204,25 +1367,18 @@ class PackageInstaller: self, pkg: "spack.package_base.PackageBase", request: BuildRequest, - is_compiler: bool, all_deps: Dict[str, Set[str]], ) -> None: """ - Creates and queus the initial build task for the package. + Creates and queues the initial task for the package. 
Args: pkg: the package to be built and installed request: the associated install request - is_compiler: whether task is for a bootstrap compiler all_deps: dictionary of all dependencies and associated dependents """ - task = BuildTask( - pkg, - request=request, - compiler=is_compiler, - status=BuildStatus.QUEUED, - installed=self.installed, - ) + cls = RewireTask if pkg.spec.spliced else BuildTask + task = cls(pkg, request=request, status=BuildStatus.QUEUED, installed=self.installed) for dep_id in task.dependencies: all_deps[dep_id].add(package_id(pkg.spec)) @@ -1296,7 +1452,7 @@ class PackageInstaller: else: lock.release_read() - def _prepare_for_install(self, task: BuildTask) -> None: + def _prepare_for_install(self, task: Task) -> None: """ Check the database and leftover installation directories/files and prepare for a new install attempt for an uninstalled package. @@ -1304,7 +1460,7 @@ class PackageInstaller: and ensuring the database is up-to-date. Args: - task (BuildTask): the build task whose associated package is + task: the task whose associated package is being checked """ install_args = task.request.install_args @@ -1355,7 +1511,7 @@ class PackageInstaller: spack.store.STORE.db.update_explicit(task.pkg.spec, True) def _cleanup_all_tasks(self) -> None: - """Cleanup all build tasks to include releasing their locks.""" + """Cleanup all tasks to include releasing their locks.""" for pkg_id in self.locks: self._release_lock(pkg_id) @@ -1387,7 +1543,7 @@ class PackageInstaller: def _cleanup_task(self, pkg: "spack.package_base.PackageBase") -> None: """ - Cleanup the build task for the spec + Cleanup the task for the spec Args: pkg: the package being installed @@ -1459,7 +1615,7 @@ class PackageInstaller: if lock_type == "read": # Wait until the other process finishes if there are no more - # build tasks with priority 0 (i.e., with no uninstalled + # tasks with priority 0 (i.e., with no uninstalled # dependencies). 
no_p0 = len(self.build_tasks) == 0 or not self._next_is_pri0() timeout = None if no_p0 else 3.0 @@ -1511,6 +1667,33 @@ class PackageInstaller: self.locks[pkg_id] = (lock_type, lock) return self.locks[pkg_id] + def _requeue_with_build_spec_tasks(self, task): + """Requeue the task and its missing build spec dependencies""" + # Full install of the build_spec is necessary because it didn't already exist somewhere + spec = task.pkg.spec + for dep in spec.build_spec.traverse(): + dep_pkg = dep.package + + dep_id = package_id(dep) + if dep_id not in self.build_tasks: + self._add_init_task(dep_pkg, task.request, self.all_dependencies) + + # Clear any persistent failure markings _unless_ they are + # associated with another process in this parallel build + # of the spec. + spack.store.STORE.failure_tracker.clear(dep, force=False) + + # Queue the build spec. + build_pkg_id = package_id(spec.build_spec) + build_spec_task = self.build_tasks[build_pkg_id] + spec_pkg_id = package_id(spec) + spec_task = task.next_attempt(self.installed) + spec_task.status = BuildStatus.QUEUED + # Convey a build spec as a dependency of a deployed spec. + build_spec_task.add_dependent(spec_pkg_id) + spec_task.add_dependency(build_pkg_id) + self._push_task(spec_task) + def _add_tasks(self, request: BuildRequest, all_deps): """Add tasks to the priority queue for the given build request. 
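The hunks above replace `BuildTask`-specific plumbing with a generic `Task`, selected per spec: a spliced spec gets a `RewireTask`, anything else a `BuildTask`, and both flow through the same priority queue. A toy sketch of that dispatch and queue ordering (the `Spec`/`Task` classes and `push` helper here are simplified stand-ins, not Spack's real installer):

```python
import heapq
import itertools

class Spec:
    def __init__(self, name, spliced=False):
        self.name = name
        self.spliced = spliced  # True when the concrete spec was produced by splicing

class Task:
    def __init__(self, spec):
        self.spec = spec

class BuildTask(Task):
    """Full install from source or binary cache."""

class RewireTask(Task):
    """Relocate an already-installed build_spec into the spliced prefix."""

def make_task(spec):
    # Mirrors the diff's `cls = RewireTask if pkg.spec.spliced else BuildTask`
    cls = RewireTask if spec.spliced else BuildTask
    return cls(spec)

counter = itertools.count()
pq = []  # entries are ((priority, sequence), task), like build_pq in the diff

def push(task, priority):
    heapq.heappush(pq, ((priority, next(counter)), task))

# Priority 0 means "no uninstalled dependencies", so it pops first
push(make_task(Spec("hdf5", spliced=True)), 1)
push(make_task(Spec("mpich")), 0)

_, first = heapq.heappop(pq)   # mpich: ready, plain BuildTask
_, second = heapq.heappop(pq)  # hdf5: spliced, so a RewireTask
```

The `(priority, sequence)` tuple keeps heap comparisons on the key alone, so tasks of different classes never need to compare with each other.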
@@ -1540,7 +1723,7 @@ class PackageInstaller: dep_id = package_id(dep) if dep_id not in self.build_tasks: - self._add_init_task(dep_pkg, request, is_compiler=False, all_deps=all_deps) + self._add_init_task(dep_pkg, request, all_deps=all_deps) # Clear any persistent failure markings _unless_ they are # associated with another process in this parallel build @@ -1558,80 +1741,29 @@ class PackageInstaller: self._check_deps_status(request) # Now add the package itself, if appropriate - self._add_init_task(request.pkg, request, is_compiler=False, all_deps=all_deps) + self._add_init_task(request.pkg, request, all_deps=all_deps) # Ensure if one request is to fail fast then all requests will. fail_fast = bool(request.install_args.get("fail_fast")) self.fail_fast = self.fail_fast or fail_fast - def _install_task(self, task: BuildTask, install_status: InstallStatus) -> None: + def _install_task(self, task: Task, install_status: InstallStatus) -> None: """ Perform the installation of the requested spec and/or dependency - represented by the build task. + represented by the task. Args: - task: the installation build task for a package + task: the installation task for a package install_status: the installation status for the package""" - - explicit = task.explicit - install_args = task.request.install_args - cache_only = task.cache_only - use_cache = task.use_cache - tests = install_args.get("tests", False) - assert isinstance(tests, (bool, list)) # make mypy happy. 
- unsigned: Optional[bool] = install_args.get("unsigned") - - pkg, pkg_id = task.pkg, task.pkg_id - - tty.msg(install_msg(pkg_id, self.pid, install_status)) - task.start = task.start or time.time() - task.status = BuildStatus.INSTALLING - - # Use the binary cache if requested - if use_cache: - if _install_from_cache(pkg, explicit, unsigned): - self._update_installed(task) - return - elif cache_only: - raise spack.error.InstallError( - "No binary found when cache-only was specified", pkg=pkg - ) - else: - tty.msg(f"No binary for {pkg_id} found: installing from source") - - pkg.run_tests = tests if isinstance(tests, bool) else pkg.name in tests - - # hook that allows tests to inspect the Package before installation - # see unit_test_check() docs. - if not pkg.unit_test_check(): - return - - try: - self._setup_install_dir(pkg) - - # Create stage object now and let it be serialized for the child process. That - # way monkeypatch in tests works correctly. - pkg.stage - - # Create a child process to do the actual installation. - # Preserve verbosity settings across installs. - spack.package_base.PackageBase._verbose = spack.build_environment.start_build_process( - pkg, build_process, install_args - ) - # Note: PARENT of the build process adds the new package to - # the database, so that we don't need to re-read from file. 
- spack.store.STORE.db.add(pkg.spec, explicit=explicit) - - except spack.error.StopPhase as e: - # A StopPhase exception means that the installer was asked to stop early from clients, - # and is not an error at this point - pid = f"{self.pid}: " if tty.show_pid() else "" - tty.debug(f"{pid}{str(e)}") - tty.debug(f"Package stage directory: {pkg.stage.source_path}") + rc = task.execute(install_status) + if rc == ExecuteResult.MISSING_BUILD_SPEC: + self._requeue_with_build_spec_tasks(task) + else: # if rc == ExecuteResult.SUCCESS or rc == ExecuteResult.FAILED + self._update_installed(task) def _next_is_pri0(self) -> bool: """ - Determine if the next build task has priority 0 + Determine if the next task has priority 0 Return: True if it does, False otherwise @@ -1641,9 +1773,9 @@ class PackageInstaller: task = self.build_pq[0][1] return task.priority == 0 - def _pop_task(self) -> Optional[BuildTask]: + def _pop_task(self) -> Optional[Task]: """ - Remove and return the lowest priority build task. + Remove and return the lowest priority task. Source: Variant of function at docs.python.org/2/library/heapq.html """ @@ -1655,17 +1787,17 @@ class PackageInstaller: return task return None - def _push_task(self, task: BuildTask) -> None: + def _push_task(self, task: Task) -> None: """ - Push (or queue) the specified build task for the package. + Push (or queue) the specified task for the package. 
Source: Customization of "add_task" function at docs.python.org/2/library/heapq.html Args: - task: the installation build task for a package + task: the installation task for a package """ - msg = "{0} a build task for {1} with status '{2}'" + msg = "{0} a task for {1} with status '{2}'" skip = "Skipping requeue of task for {0}: {1}" # Ensure do not (re-)queue installed or failed packages whose status @@ -1678,7 +1810,7 @@ class PackageInstaller: tty.debug(skip.format(task.pkg_id, "failed")) return - # Remove any associated build task since its sequence will change + # Remove any associated task since its sequence will change self._remove_task(task.pkg_id) desc = ( "Queueing" if task.attempts == 1 else f"Requeueing ({ordinal(task.attempts)} attempt)" @@ -1713,9 +1845,9 @@ class PackageInstaller: except Exception as exc: tty.warn(err.format(exc.__class__.__name__, ltype, pkg_id, str(exc))) - def _remove_task(self, pkg_id: str) -> Optional[BuildTask]: + def _remove_task(self, pkg_id: str) -> Optional[Task]: """ - Mark the existing package build task as being removed and return it. + Mark the existing package task as being removed and return it. Raises KeyError if not found. Source: Variant of function at docs.python.org/2/library/heapq.html @@ -1724,19 +1856,19 @@ class PackageInstaller: pkg_id: identifier for the package to be removed """ if pkg_id in self.build_tasks: - tty.debug(f"Removing build task for {pkg_id} from list") + tty.debug(f"Removing task for {pkg_id} from list") task = self.build_tasks.pop(pkg_id) task.status = BuildStatus.REMOVED return task else: return None - def _requeue_task(self, task: BuildTask, install_status: InstallStatus) -> None: + def _requeue_task(self, task: Task, install_status: InstallStatus) -> None: """ Requeues a task that appears to be in progress by another process. 
Args: - task (BuildTask): the installation build task for a package + task (Task): the installation task for a package """ if task.status not in [BuildStatus.INSTALLED, BuildStatus.INSTALLING]: tty.debug( @@ -1748,47 +1880,15 @@ class PackageInstaller: new_task.status = BuildStatus.INSTALLING self._push_task(new_task) - def _setup_install_dir(self, pkg: "spack.package_base.PackageBase") -> None: - """ - Create and ensure proper access controls for the install directory. - Write a small metadata file with the current spack environment. - - Args: - pkg: the package to be built and installed - """ - if not os.path.exists(pkg.spec.prefix): - path = spack.util.path.debug_padded_filter(pkg.spec.prefix) - tty.debug(f"Creating the installation directory {path}") - spack.store.STORE.layout.create_install_directory(pkg.spec) - else: - # Set the proper group for the prefix - group = prefs.get_package_group(pkg.spec) - if group: - fs.chgrp(pkg.spec.prefix, group) - - # Set the proper permissions. - # This has to be done after group because changing groups blows - # away the sticky group bit on the directory - mode = os.stat(pkg.spec.prefix).st_mode - perms = prefs.get_package_dir_permissions(pkg.spec) - if mode != perms: - os.chmod(pkg.spec.prefix, perms) - - # Ensure the metadata path exists as well - fs.mkdirp(spack.store.STORE.layout.metadata_path(pkg.spec), mode=perms) - - # Always write host environment - we assume this can change - spack.store.STORE.layout.write_host_environment(pkg.spec) - def _update_failed( - self, task: BuildTask, mark: bool = False, exc: Optional[BaseException] = None + self, task: Task, mark: bool = False, exc: Optional[BaseException] = None ) -> None: """ Update the task and transitive dependents as failed; optionally mark - externally as failed; and remove associated build tasks. + externally as failed; and remove associated tasks. 
Args: - task: the build task for the failed package + task: the task for the failed package mark: ``True`` if the package and its dependencies are to be marked as "failed", otherwise, ``False`` exc: optional exception if associated with the failure @@ -1806,19 +1906,19 @@ class PackageInstaller: if dep_id in self.build_tasks: tty.warn(f"Skipping build of {dep_id} since {pkg_id} failed") # Ensure the dependent's uninstalled dependents are - # up-to-date and their build tasks removed. + # up-to-date and their tasks removed. dep_task = self.build_tasks[dep_id] self._update_failed(dep_task, mark) self._remove_task(dep_id) else: - tty.debug(f"No build task for {dep_id} to skip since {pkg_id} failed") + tty.debug(f"No task for {dep_id} to skip since {pkg_id} failed") - def _update_installed(self, task: BuildTask) -> None: + def _update_installed(self, task: Task) -> None: """ - Mark the task as installed and ensure dependent build tasks are aware. + Mark the task as installed and ensure dependent tasks are aware. Args: - task (BuildTask): the build task for the installed package + task: the task for the installed package """ task.status = BuildStatus.INSTALLED self._flag_installed(task.pkg, task.dependents) @@ -1827,7 +1927,7 @@ class PackageInstaller: self, pkg: "spack.package_base.PackageBase", dependent_ids: Optional[Set[str]] = None ) -> None: """ - Flag the package as installed and ensure known by all build tasks of + Flag the package as installed and ensure known by all tasks of known dependents. 
Args: @@ -1855,7 +1955,7 @@ class PackageInstaller: dep_task = self.build_tasks[dep_id] self._push_task(dep_task.next_attempt(self.installed)) else: - tty.debug(f"{dep_id} has no build task to update for {pkg_id}'s success") + tty.debug(f"{dep_id} has no task to update for {pkg_id}'s success") def _init_queue(self) -> None: """Initialize the build queue from the list of build requests.""" @@ -1874,8 +1974,9 @@ class PackageInstaller: task = self.build_tasks[dep_id] for dependent_id in dependents.difference(task.dependents): task.add_dependent(dependent_id) + self.all_dependencies = all_dependencies - def _install_action(self, task: BuildTask) -> int: + def _install_action(self, task: Task) -> InstallAction: """ Determine whether the installation should be overwritten (if it already exists) or skipped (if has been handled by another process). @@ -2023,7 +2124,6 @@ class PackageInstaller: self._update_installed(task) path = spack.util.path.debug_padded_filter(pkg.prefix) _print_installed_pkg(path) - else: # At this point we've failed to get a write or a read # lock, which means another process has taken a write @@ -2063,8 +2163,6 @@ class PackageInstaller: # wrapper -- silence mypy OverwriteInstall(self, spack.store.STORE.db, task, install_status).install() # type: ignore[arg-type] # noqa: E501 - self._update_installed(task) - # If we installed then we should keep the prefix stop_before_phase = getattr(pkg, "stop_before_phase", None) last_phase = getattr(pkg, "last_phase", None) @@ -2108,7 +2206,9 @@ class PackageInstaller: ) # Terminate if requested to do so on the first failure. if self.fail_fast: - raise spack.error.InstallError(f"{fail_fast_err}: {str(exc)}", pkg=pkg) + raise spack.error.InstallError( + f"{fail_fast_err}: {str(exc)}", pkg=pkg + ) from exc # Terminate when a single build request has failed, or summarize errors later. 
if task.is_build_request: @@ -2124,7 +2224,8 @@ class PackageInstaller: # Perform basic task cleanup for the installed spec to # include downgrading the write to a read lock - self._cleanup_task(pkg) + if pkg.spec.installed: + self._cleanup_task(pkg) # Cleanup, which includes releasing all of the read locks self._cleanup_all_tasks() @@ -2423,7 +2524,7 @@ class OverwriteInstall: self, installer: PackageInstaller, database: spack.database.Database, - task: BuildTask, + task: Task, install_status: InstallStatus, ): self.installer = installer diff --git a/lib/spack/spack/rewiring.py b/lib/spack/spack/rewiring.py index 3b1a9ba451..ae7eb0a8d8 100644 --- a/lib/spack/spack/rewiring.py +++ b/lib/spack/spack/rewiring.py @@ -12,6 +12,7 @@ from collections import OrderedDict from llnl.util.symlink import readlink, symlink import spack.binary_distribution as bindist +import spack.deptypes as dt import spack.error import spack.hooks import spack.platforms @@ -52,6 +53,7 @@ def rewire_node(spec, explicit): its subgraph. Binaries, text, and links are all changed in accordance with the splice. 
The resulting package is then 'installed.'""" tempdir = tempfile.mkdtemp() + # copy anything installed to a temporary directory shutil.copytree(spec.build_spec.prefix, os.path.join(tempdir, spec.dag_hash())) @@ -59,8 +61,21 @@ def rewire_node(spec, explicit): # compute prefix-to-prefix for every node from the build spec to the spliced # spec prefix_to_prefix = OrderedDict({spec.build_spec.prefix: spec.prefix}) - for build_dep in spec.build_spec.traverse(root=False): - prefix_to_prefix[build_dep.prefix] = spec[build_dep.name].prefix + build_spec_ids = set(id(s) for s in spec.build_spec.traverse(deptype=dt.ALL & ~dt.BUILD)) + for s in bindist.deps_to_relocate(spec): + analog = s + if id(s) not in build_spec_ids: + analogs = [ + d + for d in spec.build_spec.traverse(deptype=dt.ALL & ~dt.BUILD) + if s._splice_match(d, self_root=spec, other_root=spec.build_spec) + ] + if analogs: + # Prefer same-name analogs and prefer higher versions + # This matches the preferences in Spec.splice, so we will find same node + analog = max(analogs, key=lambda a: (a.name == s.name, a.version)) + + prefix_to_prefix[analog.prefix] = s.prefix manifest = bindist.get_buildfile_manifest(spec.build_spec) platform = spack.platforms.by_name(spec.platform) diff --git a/lib/spack/spack/schema/concretizer.py b/lib/spack/spack/schema/concretizer.py index e1c4d64ce1..0b222d923e 100644 --- a/lib/spack/spack/schema/concretizer.py +++ b/lib/spack/spack/schema/concretizer.py @@ -55,6 +55,26 @@ properties: Dict[str, Any] = { "unify": { "oneOf": [{"type": "boolean"}, {"type": "string", "enum": ["when_possible"]}] }, + "splice": { + "type": "object", + "additionalProperties": False, + "properties": { + "explicit": { + "type": "array", + "default": [], + "items": { + "type": "object", + "required": ["target", "replacement"], + "additionalProperties": False, + "properties": { + "target": {"type": "string"}, + "replacement": {"type": "string"}, + "transitive": {"type": "boolean", "default": False}, + }, + }, + 
} + }, + }, "duplicates": { "type": "object", "properties": { diff --git a/lib/spack/spack/solver/asp.py b/lib/spack/spack/solver/asp.py index 3fd8b9cc8d..92734e9afd 100644 --- a/lib/spack/spack/solver/asp.py +++ b/lib/spack/spack/solver/asp.py @@ -523,7 +523,12 @@ class Result: node = SpecBuilder.make_node(pkg=providers[0]) candidate = answer.get(node) - if candidate and candidate.satisfies(input_spec): + if candidate and candidate.build_spec.satisfies(input_spec): + if not candidate.satisfies(input_spec): + tty.warn( + "explicit splice configuration has caused the concretized spec" + f" {candidate} not to satisfy the input spec {input_spec}" + ) self._concrete_specs.append(answer[node]) self._concrete_specs_by_input[input_spec] = answer[node] else: @@ -3814,7 +3819,33 @@ class SpecBuilder: spack.version.git_ref_lookup.GitRefLookup(spec.fullname) ) - return self._specs + specs = self.execute_explicit_splices() + + return specs + + def execute_explicit_splices(self): + splice_config = spack.config.CONFIG.get("concretizer:splice:explicit", []) + splice_triples = [] + for splice_set in splice_config: + target = splice_set["target"] + replacement = spack.spec.Spec(splice_set["replacement"]) + assert replacement.abstract_hash + replacement.replace_hash() + transitive = splice_set.get("transitive", False) + splice_triples.append((target, replacement, transitive)) + + specs = {} + for key, spec in self._specs.items(): + current_spec = spec + for target, replacement, transitive in splice_triples: + if target in current_spec: + # matches root or non-root + # e.g. 
mvapich2%gcc + current_spec = current_spec.splice(replacement, transitive) + new_key = NodeArgument(id=key.id, pkg=current_spec.name) + specs[new_key] = current_spec + + return specs def _develop_specs_from_env(spec, env): diff --git a/lib/spack/spack/spec.py b/lib/spack/spack/spec.py index 030bb50913..d64507a9a1 100644 --- a/lib/spack/spack/spec.py +++ b/lib/spack/spack/spec.py @@ -4183,7 +4183,7 @@ class Spec: """Return set of virtuals provided by self in the context of root""" if root is self: # Could be using any virtual the package can provide - return set(self.package.virtuals_provided) + return set(v.name for v in self.package.virtuals_provided) hashes = [s.dag_hash() for s in root.traverse()] in_edges = set( @@ -4206,7 +4206,7 @@ class Spec: return True return bool( - self._virtuals_provided(self_root) + bool(self._virtuals_provided(self_root)) and self._virtuals_provided(self_root) <= other._virtuals_provided(other_root) ) @@ -4226,29 +4226,24 @@ class Spec: # Only set it if it hasn't been spliced before ancestor._build_spec = ancestor._build_spec or ancestor.copy() ancestor.clear_cached_hashes(ignore=(ht.package_hash.attr,)) + for edge in ancestor.edges_to_dependencies(depflag=dt.BUILD): + if edge.depflag & ~dt.BUILD: + edge.depflag &= ~dt.BUILD + else: + ancestor._dependencies[edge.spec.name].remove(edge) + edge.spec._dependents[ancestor.name].remove(edge) # For each direct dependent in the link/run graph, replace the dependency on # node with one on replacement - # For each build dependent, restrict the edge to build-only for edge in self.edges_from_dependents(): if edge.parent not in ancestors_in_context: continue - build_dep = edge.depflag & dt.BUILD - other_dep = edge.depflag & ~dt.BUILD - if build_dep: - parent_edge = [e for e in edge.parent._dependencies[self.name] if e.spec is self] - assert len(parent_edge) == 1 - - edge.depflag = dt.BUILD - parent_edge[0].depflag = dt.BUILD - else: - edge.parent._dependencies.edges[self.name].remove(edge) - 
self._dependents.edges[edge.parent.name].remove(edge) - if other_dep: - edge.parent._add_dependency(replacement, depflag=other_dep, virtuals=edge.virtuals) + edge.parent._dependencies.edges[self.name].remove(edge) + self._dependents.edges[edge.parent.name].remove(edge) + edge.parent._add_dependency(replacement, depflag=edge.depflag, virtuals=edge.virtuals) - def _splice_helper(self, replacement, self_root, other_root): + def _splice_helper(self, replacement): """Main loop of a transitive splice. The while loop around a traversal of self ensures that changes to self from previous @@ -4276,8 +4271,7 @@ class Spec: replacements_by_name[node.name].append(node) virtuals = node._virtuals_provided(root=replacement) for virtual in virtuals: - # Virtual may be spec or str, get name or return str - replacements_by_name[getattr(virtual, "name", virtual)].append(node) + replacements_by_name[virtual].append(node) changed = True while changed: @@ -4298,8 +4292,8 @@ class Spec: for virtual in node._virtuals_provided(root=self): analogs += [ r - for r in replacements_by_name[getattr(virtual, "name", virtual)] - if r._splice_match(node, self_root=self_root, other_root=other_root) + for r in replacements_by_name[virtual] + if node._splice_match(r, self_root=self, other_root=replacement) ] # No match, keep iterating over self @@ -4313,34 +4307,56 @@ class Spec: # No splice needed here, keep checking if analog == node: continue + node._splice_detach_and_add_dependents(analog, context=self) changed = True break - def splice(self, other, transitive): - """Splices dependency "other" into this ("target") Spec, and return the - result as a concrete Spec. - If transitive, then other and its dependencies will be extrapolated to - a list of Specs and spliced in accordingly. - For example, let there exist a dependency graph as follows: - T - | \ - Z<-H - In this example, Spec T depends on H and Z, and H also depends on Z. - Suppose, however, that we wish to use a different H, known as H'. 
This - function will splice in the new H' in one of two ways: - 1. transitively, where H' depends on the Z' it was built with, and the - new T* also directly depends on this new Z', or - 2. intransitively, where the new T* and H' both depend on the original - Z. - Since the Spec returned by this splicing function is no longer deployed - the same way it was built, any such changes are tracked by setting the - build_spec to point to the corresponding dependency from the original - Spec. - """ + def splice(self, other: "Spec", transitive: bool = True) -> "Spec": + """Returns a new, spliced concrete Spec with the "other" dependency and, + optionally, its dependencies. + + Args: + other: alternate dependency + transitive: include other's dependencies + + Returns: a concrete, spliced version of the current Spec + + When transitive is "True", use the dependencies from "other" to reconcile + conflicting dependencies. When transitive is "False", use dependencies from self. + + For example, suppose we have the following dependency graph: + + T + | \ + Z<-H + + Spec T depends on H and Z, and H also depends on Z. Now we want to use + a different H, called H'. This function can be used to splice in H' to + create a new spec, called T*. If H' was built with Z', then transitive + "True" will ensure H' and T* both depend on Z': + + T* + | \ + Z'<-H' + + If transitive is "False", then H' and T* will both depend on + the original Z, resulting in a new H'* + + T* + | \ + Z<-H'* + + Provenance of the build is tracked through the "build_spec" property + of the spliced spec and any correspondingly modified dependency specs. 
+ The build specs are set to that of the original spec, so the original + spec's provenance is preserved unchanged.""" assert self.concrete assert other.concrete + if self._splice_match(other, self_root=self, other_root=other): + return other.copy() + if not any( node._splice_match(other, self_root=self, other_root=other) for node in self.traverse(root=False, deptype=dt.LINK | dt.RUN) @@ -4379,12 +4395,12 @@ class Spec: # Transitively splice any relevant nodes from new into base # This handles all shared dependencies between self and other - spec._splice_helper(replacement, self_root=self, other_root=other) + spec._splice_helper(replacement) else: # Do the same thing as the transitive splice, but reversed node_pairs = make_node_pairs(other, replacement) mask_build_deps(replacement) - replacement._splice_helper(spec, self_root=other, other_root=self) + replacement._splice_helper(spec) # Intransitively splice replacement into spec # This is very simple now that all shared dependencies have been handled @@ -4392,13 +4408,14 @@ class Spec: if node._splice_match(other, self_root=spec, other_root=other): node._splice_detach_and_add_dependents(replacement, context=spec) - # Set up build dependencies for modified nodes - # Also modify build_spec because the existing ones had build deps removed + # For nodes that were spliced, modify the build spec to ensure build deps are preserved + # For nodes that were not spliced, replace the build deps on the spec itself for orig, copy in node_pairs: - for edge in orig.edges_to_dependencies(depflag=dt.BUILD): - copy._add_dependency(edge.spec, depflag=dt.BUILD, virtuals=edge.virtuals) if copy._build_spec: copy._build_spec = orig.build_spec.copy() + else: + for edge in orig.edges_to_dependencies(depflag=dt.BUILD): + copy._add_dependency(edge.spec, depflag=dt.BUILD, virtuals=edge.virtuals) return spec @@ -4797,7 +4814,7 @@ class SpecfileReaderBase: virtuals=virtuals, ) if "build_spec" in node.keys(): - _, bhash, _ = 
cls.build_spec_from_node_dict(node, hash_type=hash_type) + _, bhash, _ = cls.extract_build_spec_info_from_node_dict(node, hash_type=hash_type) node_spec._build_spec = hash_dict[bhash]["node_spec"] return hash_dict[root_spec_hash]["node_spec"] @@ -4925,7 +4942,7 @@ class SpecfileV2(SpecfileReaderBase): return dep_hash, deptypes, hash_type, virtuals @classmethod - def build_spec_from_node_dict(cls, node, hash_type=ht.dag_hash.name): + def extract_build_spec_info_from_node_dict(cls, node, hash_type=ht.dag_hash.name): build_spec_dict = node["build_spec"] return build_spec_dict["name"], build_spec_dict[hash_type], hash_type diff --git a/lib/spack/spack/test/bindist.py b/lib/spack/spack/test/bindist.py index f2be8fd004..77b11ce98f 100644 --- a/lib/spack/spack/test/bindist.py +++ b/lib/spack/spack/test/bindist.py @@ -69,22 +69,6 @@ def cache_directory(tmpdir): @pytest.fixture(scope="module") -def mirror_dir(tmpdir_factory): - dir = tmpdir_factory.mktemp("mirror") - dir.ensure("build_cache", dir=True) - yield str(dir) - dir.join("build_cache").remove() - - -@pytest.fixture(scope="function") -def test_mirror(mirror_dir): - mirror_url = url_util.path_to_file_url(mirror_dir) - mirror_cmd("add", "--scope", "site", "test-mirror-func", mirror_url) - yield mirror_dir - mirror_cmd("rm", "--scope=site", "test-mirror-func") - - -@pytest.fixture(scope="module") def config_directory(tmp_path_factory): # Copy defaults to a temporary "site" scope defaults_dir = tmp_path_factory.mktemp("test_configs") @@ -222,9 +206,9 @@ else: @pytest.mark.requires_executables(*args) @pytest.mark.maybeslow @pytest.mark.usefixtures( - "default_config", "cache_directory", "install_dir_default_layout", "test_mirror" + "default_config", "cache_directory", "install_dir_default_layout", "temporary_mirror" ) -def test_default_rpaths_create_install_default_layout(mirror_dir): +def test_default_rpaths_create_install_default_layout(temporary_mirror_dir): """ Test the creation and installation of buildcaches with 
default rpaths into the default directory layout scheme. @@ -237,13 +221,12 @@ def test_default_rpaths_create_install_default_layout(mirror_dir): install_cmd("--no-cache", sy_spec.name) # Create a buildache - buildcache_cmd("push", "-u", mirror_dir, cspec.name, sy_spec.name) - + buildcache_cmd("push", "-u", temporary_mirror_dir, cspec.name, sy_spec.name) # Test force overwrite create buildcache (-f option) - buildcache_cmd("push", "-uf", mirror_dir, cspec.name) + buildcache_cmd("push", "-uf", temporary_mirror_dir, cspec.name) # Create mirror index - buildcache_cmd("update-index", mirror_dir) + buildcache_cmd("update-index", temporary_mirror_dir) # List the buildcaches in the mirror buildcache_cmd("list", "-alv") @@ -271,9 +254,9 @@ def test_default_rpaths_create_install_default_layout(mirror_dir): @pytest.mark.maybeslow @pytest.mark.nomockstage @pytest.mark.usefixtures( - "default_config", "cache_directory", "install_dir_non_default_layout", "test_mirror" + "default_config", "cache_directory", "install_dir_non_default_layout", "temporary_mirror" ) -def test_default_rpaths_install_nondefault_layout(mirror_dir): +def test_default_rpaths_install_nondefault_layout(temporary_mirror_dir): """ Test the creation and installation of buildcaches with default rpaths into the non-default directory layout scheme. @@ -294,9 +277,9 @@ def test_default_rpaths_install_nondefault_layout(mirror_dir): @pytest.mark.maybeslow @pytest.mark.nomockstage @pytest.mark.usefixtures( - "default_config", "cache_directory", "install_dir_default_layout", "test_mirror" + "default_config", "cache_directory", "install_dir_default_layout", "temporary_mirror" ) -def test_relative_rpaths_install_default_layout(mirror_dir): +def test_relative_rpaths_install_default_layout(temporary_mirror_dir): """ Test the creation and installation of buildcaches with relative rpaths into the default directory layout scheme. 
@@ -323,9 +306,9 @@ def test_relative_rpaths_install_default_layout(mirror_dir): @pytest.mark.maybeslow @pytest.mark.nomockstage @pytest.mark.usefixtures( - "default_config", "cache_directory", "install_dir_non_default_layout", "test_mirror" + "default_config", "cache_directory", "install_dir_non_default_layout", "temporary_mirror" ) -def test_relative_rpaths_install_nondefault(mirror_dir): +def test_relative_rpaths_install_nondefault(temporary_mirror_dir): """ Test the installation of buildcaches with relativized rpaths into the non-default directory layout scheme. @@ -374,9 +357,9 @@ def test_push_and_fetch_keys(mock_gnupghome, tmp_path): @pytest.mark.maybeslow @pytest.mark.nomockstage @pytest.mark.usefixtures( - "default_config", "cache_directory", "install_dir_non_default_layout", "test_mirror" + "default_config", "cache_directory", "install_dir_non_default_layout", "temporary_mirror" ) -def test_built_spec_cache(mirror_dir): +def test_built_spec_cache(temporary_mirror_dir): """Because the buildcache list command fetches the buildcache index and uses it to populate the binary_distribution built spec cache, when this test calls get_mirrors_for_spec, it is testing the popluation of @@ -397,7 +380,7 @@ def fake_dag_hash(spec, length=None): return "tal4c7h4z0gqmixb1eqa92mjoybxn5l6"[:length] -@pytest.mark.usefixtures("install_mockery", "mock_packages", "mock_fetch", "test_mirror") +@pytest.mark.usefixtures("install_mockery", "mock_packages", "mock_fetch", "temporary_mirror") def test_spec_needs_rebuild(monkeypatch, tmpdir): """Make sure needs_rebuild properly compares remote hash against locally computed one, avoiding unnecessary rebuilds""" @@ -518,7 +501,7 @@ def test_generate_indices_exception(monkeypatch, tmp_path, capfd): @pytest.mark.usefixtures("mock_fetch", "install_mockery") -def test_update_sbang(tmpdir, test_mirror): +def test_update_sbang(tmpdir, temporary_mirror): """Test the creation and installation of buildcaches with default rpaths into the 
non-default directory layout scheme, triggering an update of the sbang. @@ -529,7 +512,7 @@ def test_update_sbang(tmpdir, test_mirror): old_spec_hash_str = "/{0}".format(old_spec.dag_hash()) # Need a fake mirror with *function* scope. - mirror_dir = test_mirror + mirror_dir = temporary_mirror # Assume all commands will concretize old_spec the same way. install_cmd("--no-cache", old_spec.name) diff --git a/lib/spack/spack/test/buildtask.py b/lib/spack/spack/test/buildtask.py index 4f018eb53e..3e90c9ad7e 100644 --- a/lib/spack/spack/test/buildtask.py +++ b/lib/spack/spack/test/buildtask.py @@ -27,17 +27,19 @@ def test_build_task_errors(install_mockery): # Using a concretized package now means the request argument is checked. spec.concretize() assert spec.concrete + with pytest.raises(TypeError, match="is not a valid build request"): inst.BuildTask(spec.package, None) # Using a valid package and spec, the next check is the status argument. request = inst.BuildRequest(spec.package, {}) + with pytest.raises(TypeError, match="is not a valid build status"): inst.BuildTask(spec.package, request, status="queued") # Now we can check that build tasks cannot be create when the status # indicates the task is/should've been removed. - with pytest.raises(spack.error.InstallError, match="Cannot create a build task"): + with pytest.raises(spack.error.InstallError, match="Cannot create a task"): inst.BuildTask(spec.package, request, status=inst.BuildStatus.REMOVED) # Also make sure to not accept an incompatible installed argument value. 
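The env.py test below exercises the new `concretizer:splice:explicit` config. Based on `SpecBuilder.execute_explicit_splices` in the asp.py hunk above, each entry is reduced to a `(target, replacement, transitive)` triple, where the replacement must carry an abstract hash and `transitive` falls back to the schema default of `False`. A standalone sketch of that config handling (`read_splice_triples` and the `zlib-ng` entry are hypothetical; only the shape of the entries comes from the diff):

```python
def read_splice_triples(splice_config):
    """Turn concretizer:splice:explicit entries into (target, replacement, transitive)."""
    triples = []
    for entry in splice_config:
        target = entry["target"]
        replacement = entry["replacement"]
        # The real code asserts replacement.abstract_hash; a "/hash" suffix stands in here
        assert "/" in replacement, "replacement must include an abstract hash, e.g. mpich/abcdef"
        transitive = entry.get("transitive", False)  # schema default in this diff
        triples.append((target, replacement, transitive))
    return triples

config = [
    {"target": "mpi", "replacement": "mpich/abcdef", "transitive": True},
    {"target": "zlib", "replacement": "zlib-ng/123456"},
]
triples = read_splice_triples(config)
```

The real implementation then walks every concretized spec and applies `current_spec.splice(replacement, transitive)` whenever `target in current_spec`, matching either the root or a dependency.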
diff --git a/lib/spack/spack/test/cmd/env.py b/lib/spack/spack/test/cmd/env.py index 1dbc808ad8..014151688a 100644 --- a/lib/spack/spack/test/cmd/env.py +++ b/lib/spack/spack/test/cmd/env.py @@ -38,6 +38,7 @@ import spack.util.environment import spack.util.spack_json as sjson import spack.util.spack_yaml from spack.cmd.env import _env_create +from spack.installer import PackageInstaller from spack.main import SpackCommand, SpackCommandError from spack.spec import Spec from spack.stage import stage_prefix @@ -803,6 +804,38 @@ spack: assert not any(x.name == "hypre" for x in env_specs) +def test_lockfile_spliced_specs(environment_from_manifest, install_mockery): + """Test that an environment can round-trip a spliced spec.""" + # Create a local install for zmpi to splice in + # Default concretization is not using zmpi + zmpi = spack.spec.Spec("zmpi").concretized() + PackageInstaller([zmpi.package], fake=True).install() + + e1 = environment_from_manifest( + f""" +spack: + specs: + - mpileaks + concretizer: + splice: + explicit: + - target: mpi + replacement: zmpi/{zmpi.dag_hash()} +""" + ) + with e1: + e1.concretize() + e1.write() + + # By reading into a second environment, we force a round trip to json + e2 = _env_create("test2", init_file=e1.lock_path) + + # The one spec is mpileaks + for _, spec in e2.concretized_specs(): + assert spec.spliced + assert spec["mpi"].satisfies(zmpi) + + def test_init_from_lockfile(environment_from_manifest): """Test that an environment can be instantiated from a lockfile.""" e1 = environment_from_manifest( diff --git a/lib/spack/spack/test/concretize.py b/lib/spack/spack/test/concretize.py index 9f4d411aec..04807d6cde 100644 --- a/lib/spack/spack/test/concretize.py +++ b/lib/spack/spack/test/concretize.py @@ -2281,6 +2281,30 @@ class TestConcretize: edges = spec.edges_to_dependencies(name="callpath") assert len(edges) == 1 and edges[0].virtuals == () + @pytest.mark.parametrize("transitive", [True, False]) + def test_explicit_splices( + 
self, mutable_config, database_mutable_config, mock_packages, transitive, capfd + ): + mpich_spec = database_mutable_config.query("mpich")[0] + splice_info = { + "target": "mpi", + "replacement": f"/{mpich_spec.dag_hash()}", + "transitive": transitive, + } + spack.config.CONFIG.set("concretizer", {"splice": {"explicit": [splice_info]}}) + + spec = spack.spec.Spec("hdf5 ^zmpi").concretized() + + assert spec.satisfies(f"^mpich/{mpich_spec.dag_hash()}") + assert spec.build_spec.dependencies(name="zmpi", deptype="link") + assert not spec.build_spec.satisfies(f"^mpich/{mpich_spec.dag_hash()}") + assert not spec.dependencies(name="zmpi", deptype="link") + + captured = capfd.readouterr() + assert "Warning: explicit splice configuration has caused" in captured.err + assert "hdf5 ^zmpi" in captured.err + assert str(spec) in captured.err + @pytest.mark.db @pytest.mark.parametrize( "spec_str,mpi_name", diff --git a/lib/spack/spack/test/conftest.py b/lib/spack/spack/test/conftest.py index dc53f50688..613a51162a 100644 --- a/lib/spack/spack/test/conftest.py +++ b/lib/spack/spack/test/conftest.py @@ -62,8 +62,11 @@ import spack.util.web import spack.version from spack.fetch_strategy import URLFetchStrategy from spack.installer import PackageInstaller +from spack.main import SpackCommand from spack.util.pattern import Bunch +mirror_cmd = SpackCommand("mirror") + @pytest.fixture(autouse=True) def check_config_fixture(request): @@ -989,6 +992,38 @@ def install_mockery(temporary_store: spack.store.Store, mutable_config, mock_pac temporary_store.failure_tracker.clear_all() +@pytest.fixture(scope="module") +def temporary_mirror_dir(tmpdir_factory): + dir = tmpdir_factory.mktemp("mirror") + dir.ensure("build_cache", dir=True) + yield str(dir) + dir.join("build_cache").remove() + + +@pytest.fixture(scope="function") +def temporary_mirror(temporary_mirror_dir): + mirror_url = url_util.path_to_file_url(temporary_mirror_dir) + mirror_cmd("add", "--scope", "site", "test-mirror-func", 
mirror_url) + yield temporary_mirror_dir + mirror_cmd("rm", "--scope=site", "test-mirror-func") + + +@pytest.fixture(scope="function") +def mutable_temporary_mirror_dir(tmpdir_factory): + dir = tmpdir_factory.mktemp("mirror") + dir.ensure("build_cache", dir=True) + yield str(dir) + dir.join("build_cache").remove() + + +@pytest.fixture(scope="function") +def mutable_temporary_mirror(mutable_temporary_mirror_dir): + mirror_url = url_util.path_to_file_url(mutable_temporary_mirror_dir) + mirror_cmd("add", "--scope", "site", "test-mirror-func", mirror_url) + yield mutable_temporary_mirror_dir + mirror_cmd("rm", "--scope=site", "test-mirror-func") + + @pytest.fixture(scope="function") def temporary_store(tmpdir, request): """Hooks a temporary empty store for the test function.""" diff --git a/lib/spack/spack/test/installer.py b/lib/spack/spack/test/installer.py index f2d9df672f..bf231c0f2a 100644 --- a/lib/spack/spack/test/installer.py +++ b/lib/spack/spack/test/installer.py @@ -29,6 +29,7 @@ import spack.spec import spack.store import spack.util.lock as lk from spack.installer import PackageInstaller +from spack.main import SpackCommand def _mock_repo(root, namespace): @@ -640,6 +641,88 @@ def test_prepare_for_install_on_installed(install_mockery, monkeypatch): installer._prepare_for_install(task) +def test_installer_init_requests(install_mockery): + """Test of installer initial requests.""" + spec_name = "dependent-install" + with spack.config.override("config:install_missing_compilers", True): + installer = create_installer([spec_name], {}) + + # There is only one explicit request in this case + assert len(installer.build_requests) == 1 + request = installer.build_requests[0] + assert request.pkg.name == spec_name + + +@pytest.mark.parametrize("transitive", [True, False]) +def test_install_spliced(install_mockery, mock_fetch, monkeypatch, capsys, transitive): + """Test installing a spliced spec""" + spec = spack.spec.Spec("splice-t").concretized() + dep = 
spack.spec.Spec("splice-h+foo").concretized() + + # Do the splice. + out = spec.splice(dep, transitive) + installer = create_installer([out], {"verbose": True, "fail_fast": True}) + installer.install() + for node in out.traverse(): + assert node.installed + assert node.build_spec.installed + + +@pytest.mark.parametrize("transitive", [True, False]) +def test_install_spliced_build_spec_installed(install_mockery, capfd, mock_fetch, transitive): + """Test installing a spliced spec with the build spec already installed""" + spec = spack.spec.Spec("splice-t").concretized() + dep = spack.spec.Spec("splice-h+foo").concretized() + + # Do the splice. + out = spec.splice(dep, transitive) + PackageInstaller([out.build_spec.package]).install() + + installer = create_installer([out], {"verbose": True, "fail_fast": True}) + installer._init_queue() + for _, task in installer.build_pq: + assert isinstance(task, inst.RewireTask if task.pkg.spec.spliced else inst.BuildTask) + installer.install() + for node in out.traverse(): + assert node.installed + assert node.build_spec.installed + + +@pytest.mark.not_on_windows("lacking windows support for binary installs") +@pytest.mark.parametrize("transitive", [True, False]) +@pytest.mark.parametrize( + "root_str", ["splice-t^splice-h~foo", "splice-h~foo", "splice-vt^splice-a"] +) +def test_install_splice_root_from_binary( + install_mockery, mock_fetch, mutable_temporary_mirror, transitive, root_str +): + """Test installing a spliced spec with the root available in binary cache""" + # Test splicing and rewiring a spec with the same name, different hash. 
+ original_spec = spack.spec.Spec(root_str).concretized() + spec_to_splice = spack.spec.Spec("splice-h+foo").concretized() + + PackageInstaller([original_spec.package, spec_to_splice.package]).install() + + out = original_spec.splice(spec_to_splice, transitive) + + buildcache = SpackCommand("buildcache") + buildcache( + "push", + "--unsigned", + "--update-index", + mutable_temporary_mirror, + str(original_spec), + str(spec_to_splice), + ) + + uninstall = SpackCommand("uninstall") + uninstall("-ay") + + PackageInstaller([out.package], unsigned=True).install() + + assert len(spack.store.STORE.db.query()) == len(list(out.traverse())) + + def test_install_task_use_cache(install_mockery, monkeypatch): installer = create_installer(["trivial-install-test-package"], {}) request = installer.build_requests[0] @@ -650,6 +733,33 @@ def test_install_task_use_cache(install_mockery, monkeypatch): assert request.pkg_id in installer.installed +def test_install_task_requeue_build_specs(install_mockery, monkeypatch, capfd): + """Check that a missing build_spec spec is added by _install_task.""" + + # This test also ensures coverage of most of the new + # _requeue_with_build_spec_tasks method. + def _missing(*args, **kwargs): + return inst.ExecuteResult.MISSING_BUILD_SPEC + + # Set the configuration to ensure _requeue_with_build_spec_tasks actually + # does something. 
+ with spack.config.override("config:install_missing_compilers", True): + installer = create_installer(["depb"], {}) + installer._init_queue() + request = installer.build_requests[0] + task = create_build_task(request.pkg) + + # Drop one of the specs so its task is missing before _install_task + popped_task = installer._pop_task() + assert inst.package_id(popped_task.pkg.spec) not in installer.build_tasks + + monkeypatch.setattr(task, "execute", _missing) + installer._install_task(task, None) + + # Ensure the dropped task/spec was added back by _install_task + assert inst.package_id(popped_task.pkg.spec) in installer.build_tasks + + def test_release_lock_write_n_exception(install_mockery, tmpdir, capsys): """Test _release_lock for supposed write lock with exception.""" installer = create_installer(["trivial-install-test-package"], {}) @@ -745,8 +855,10 @@ def test_setup_install_dir_grp(install_mockery, monkeypatch, capfd): monkeypatch.setattr(prefs, "get_package_group", _get_group) monkeypatch.setattr(fs, "chgrp", _chgrp) - installer = create_installer(["trivial-install-test-package"], {}) - spec = installer.build_requests[0].pkg.spec + build_task = create_build_task( + spack.spec.Spec("trivial-install-test-package").concretized().package + ) + spec = build_task.request.pkg.spec fs.touchp(spec.prefix) metadatadir = spack.store.STORE.layout.metadata_path(spec) @@ -756,7 +868,7 @@ def test_setup_install_dir_grp(install_mockery, monkeypatch, capfd): metadatadir = None # Should fail with a "not a directory" error with pytest.raises(OSError, match=metadatadir): - installer._setup_install_dir(spec.package) + build_task._setup_install_dir(spec.package) out = str(capfd.readouterr()[0]) @@ -843,79 +955,74 @@ def test_install_failed_not_fast(install_mockery, monkeypatch, capsys): assert "Skipping build of pkg-a" in out -def test_install_fail_on_interrupt(install_mockery, monkeypatch): +def _interrupt(installer, task, install_status, **kwargs): + if task.pkg.name == "pkg-a": 
+ raise KeyboardInterrupt("mock keyboard interrupt for pkg-a") + else: + return installer._real_install_task(task, None) + # installer.installed.add(task.pkg.name) + + +def test_install_fail_on_interrupt(install_mockery, mock_fetch, monkeypatch): """Test ctrl-c interrupted install.""" spec_name = "pkg-a" err_msg = "mock keyboard interrupt for {0}".format(spec_name) - - def _interrupt(installer, task, install_status, **kwargs): - if task.pkg.name == spec_name: - raise KeyboardInterrupt(err_msg) - else: - installer.installed.add(task.pkg.name) - installer = create_installer([spec_name], {}) - + setattr(inst.PackageInstaller, "_real_install_task", inst.PackageInstaller._install_task) # Raise a KeyboardInterrupt error to trigger early termination monkeypatch.setattr(inst.PackageInstaller, "_install_task", _interrupt) with pytest.raises(KeyboardInterrupt, match=err_msg): installer.install() - assert "pkg-b" in installer.installed # ensure dependency of pkg-a is 'installed' - assert spec_name not in installer.installed + assert not any(i.startswith("pkg-a-") for i in installer.installed) + assert any( + i.startswith("pkg-b-") for i in installer.installed + ) # ensure dependency of a is 'installed' -def test_install_fail_single(install_mockery, monkeypatch): - """Test expected results for failure of single package.""" - spec_name = "pkg-a" - err_msg = "mock internal package build error for {0}".format(spec_name) +class MyBuildException(Exception): + pass - class MyBuildException(Exception): - pass - def _install(installer, task, install_status, **kwargs): - if task.pkg.name == spec_name: - raise MyBuildException(err_msg) - else: - installer.installed.add(task.pkg.name) +def _install_fail_my_build_exception(installer, task, install_status, **kwargs): + print(task, task.pkg.name) + if task.pkg.name == "pkg-a": + raise MyBuildException("mock internal package build error for pkg-a") + else: + # No need for more complex logic here because no splices + 
task.execute(install_status) + installer._update_installed(task) - installer = create_installer([spec_name], {}) + +def test_install_fail_single(install_mockery, mock_fetch, monkeypatch): + """Test expected results for failure of single package.""" + installer = create_installer(["pkg-a"], {}) # Raise a KeyboardInterrupt error to trigger early termination - monkeypatch.setattr(inst.PackageInstaller, "_install_task", _install) + monkeypatch.setattr(inst.PackageInstaller, "_install_task", _install_fail_my_build_exception) - with pytest.raises(MyBuildException, match=err_msg): + with pytest.raises(MyBuildException, match="mock internal package build error for pkg-a"): installer.install() - assert "pkg-b" in installer.installed # ensure dependency of a is 'installed' - assert spec_name not in installer.installed + # ensure dependency of a is 'installed' and a is not + assert any(pkg_id.startswith("pkg-b-") for pkg_id in installer.installed) + assert not any(pkg_id.startswith("pkg-a-") for pkg_id in installer.installed) -def test_install_fail_multi(install_mockery, monkeypatch): +def test_install_fail_multi(install_mockery, mock_fetch, monkeypatch): """Test expected results for failure of multiple packages.""" - spec_name = "pkg-c" - err_msg = "mock internal package build error" - - class MyBuildException(Exception): - pass - - def _install(installer, task, install_status, **kwargs): - if task.pkg.name == spec_name: - raise MyBuildException(err_msg) - else: - installer.installed.add(task.pkg.name) - - installer = create_installer([spec_name, "pkg-a"], {}) + installer = create_installer(["pkg-a", "pkg-c"], {}) # Raise a KeyboardInterrupt error to trigger early termination - monkeypatch.setattr(inst.PackageInstaller, "_install_task", _install) + monkeypatch.setattr(inst.PackageInstaller, "_install_task", _install_fail_my_build_exception) with pytest.raises(spack.error.InstallError, match="Installation request failed"): installer.install() - assert "pkg-a" in 
installer.installed # ensure the the second spec installed - assert spec_name not in installer.installed + # ensure the second spec installed but not the first + assert any(pkg_id.startswith("pkg-c-") for pkg_id in installer.installed) + assert not any(pkg_id.startswith("pkg-a-") for pkg_id in installer.installed) def test_install_fail_fast_on_detect(install_mockery, monkeypatch, capsys): diff --git a/lib/spack/spack/test/rewiring.py b/lib/spack/spack/test/rewiring.py index 4c770e1988..9cf16ce6c2 100644 --- a/lib/spack/spack/test/rewiring.py +++ b/lib/spack/spack/test/rewiring.py @@ -9,6 +9,7 @@ import sys import pytest +import spack.deptypes as dt import spack.rewiring import spack.store from spack.installer import PackageInstaller @@ -22,6 +23,18 @@ else: args.extend(["g++", "patchelf"]) +def check_spliced_spec_prefixes(spliced_spec): + """check the file in the prefix has the correct paths""" + for node in spliced_spec.traverse(root=True): + text_file_path = os.path.join(node.prefix, node.name) + with open(text_file_path, "r") as f: + text = f.read() + print(text) + for modded_spec in node.traverse(root=True, deptype=dt.ALL & ~dt.BUILD): + print(modded_spec) + assert modded_spec.prefix in text + + @pytest.mark.requires_executables(*args) @pytest.mark.parametrize("transitive", [True, False]) def test_rewire_db(mock_fetch, install_mockery, transitive): @@ -42,13 +55,8 @@ def test_rewire_db(mock_fetch, install_mockery, transitive): installed_in_db = rec.installed if rec else False assert installed_in_db - # check the file in the prefix has the correct paths - for node in spliced_spec.traverse(root=True): - text_file_path = os.path.join(node.prefix, node.name) - with open(text_file_path, "r") as f: - text = f.read() - for modded_spec in node.traverse(root=True, deptype=("link", "run")): - assert modded_spec.prefix in text + # check for correct prefix paths + check_spliced_spec_prefixes(spliced_spec) @pytest.mark.requires_executables(*args) @@ -150,3 +158,26 @@ 
def test_rewire_not_installed_fails(mock_fetch, install_mockery): match="failed due to missing install of build spec", ): spack.rewiring.rewire(spliced_spec) + + +def test_rewire_virtual(mock_fetch, install_mockery): + """Check installed package can successfully splice an alternate virtual implementation""" + dep = "splice-a" + alt_dep = "splice-h" + + spec = Spec(f"splice-vt^{dep}").concretized() + alt_spec = Spec(alt_dep).concretized() + + PackageInstaller([spec.package, alt_spec.package]).install() + + spliced_spec = spec.splice(alt_spec, True) + spack.rewiring.rewire(spliced_spec) + + # Confirm the original spec still has the original virtual implementation. + assert spec.satisfies(f"^{dep}") + + # Confirm the spliced spec uses the new virtual implementation. + assert spliced_spec.satisfies(f"^{alt_dep}") + + # check for correct prefix paths + check_spliced_spec_prefixes(spliced_spec) diff --git a/lib/spack/spack/test/spec_semantics.py b/lib/spack/spack/test/spec_semantics.py index 88414e7a29..a821c53f2f 100644 --- a/lib/spack/spack/test/spec_semantics.py +++ b/lib/spack/spack/test/spec_semantics.py @@ -1056,12 +1056,11 @@ class TestSpecSemantics: spliced = a_red.splice(c_blue, transitive=False) assert spliced.satisfies( "pkg-a color=red ^pkg-b color=red ^pkg-c color=blue " - "^pkg-d color=red ^pkg-e color=red ^pkg-f color=blue ^pkg-g@3 color=blue" - ) - assert set(spliced.dependencies(deptype=dt.BUILD)) == set( - a_red.dependencies(deptype=dt.BUILD) + "^pkg-d color=red ^pkg-e color=red ^pkg-f color=blue ^pkg-g@2 color=red" ) + assert set(spliced.dependencies(deptype=dt.BUILD)) == set() assert spliced.build_spec == a_red + # We cannot check spliced["b"].build_spec is spliced["b"] because Spec.__getitem__ creates # a new wrapper object on each invocation. So we select once and check on that object # For the rest of the unchanged specs we will just check the s._build_spec is None. 
@@ -1072,11 +1071,9 @@ class TestSpecSemantics: assert spliced["pkg-c"].satisfies( "pkg-c color=blue ^pkg-d color=red ^pkg-e color=red " - "^pkg-f color=blue ^pkg-g@3 color=blue" - ) - assert set(spliced["pkg-c"].dependencies(deptype=dt.BUILD)) == set( - c_blue.dependencies(deptype=dt.BUILD) + "^pkg-f color=blue ^pkg-g@2 color=red" ) + assert set(spliced["pkg-c"].dependencies(deptype=dt.BUILD)) == set() assert spliced["pkg-c"].build_spec == c_blue assert set(spliced["pkg-c"].dependents()) == {spliced} @@ -1101,14 +1098,12 @@ class TestSpecSemantics: # Build dependent edge to f because f originally depended on the e this was copied from assert set(spliced["pkg-e"].dependents(deptype=dt.BUILD)) == {spliced["pkg-b"]} - assert spliced["pkg-f"].satisfies("pkg-f color=blue ^pkg-e color=red ^pkg-g@3 color=blue") - assert set(spliced["pkg-f"].dependencies(deptype=dt.BUILD)) == set( - c_blue["pkg-f"].dependencies(deptype=dt.BUILD) - ) + assert spliced["pkg-f"].satisfies("pkg-f color=blue ^pkg-e color=red ^pkg-g@2 color=red") + assert set(spliced["pkg-f"].dependencies(deptype=dt.BUILD)) == set() assert spliced["pkg-f"].build_spec == c_blue["pkg-f"] assert set(spliced["pkg-f"].dependents()) == {spliced["pkg-c"]} - # spliced["g"] is g3, but spliced["b"]["g"] is g1 + # spliced["pkg-g"] is g2, but spliced["pkg-b"]["pkg-g"] is g1 assert spliced["pkg-g"] == a_red["pkg-g"] assert spliced["pkg-g"]._build_spec is None assert set(spliced["pkg-g"].dependents(deptype=dt.LINK)) == { @@ -1117,7 +1112,6 @@ class TestSpecSemantics: spliced["pkg-f"], a_red["pkg-c"], } - assert set(spliced["pkg-g"].dependents(deptype=dt.BUILD)) == {spliced, a_red["pkg-c"]} assert spliced["pkg-b"]["pkg-g"] == a_red["pkg-b"]["pkg-g"] assert spliced["pkg-b"]["pkg-g"]._build_spec is None @@ -1131,14 +1125,7 @@ class TestSpecSemantics: # traverse_edges creates a synthetic edge with no deptypes to the root if edge.depflag: depflag = dt.LINK - if (edge.parent.name, edge.spec.name) not in [ - ("pkg-a", "pkg-c"), # 
These are the spliced edges - ("pkg-c", "pkg-d"), - ("pkg-f", "pkg-e"), - ("pkg-c", "pkg-g"), - ("pkg-f", "pkg-g"), - ("pkg-c", "pkg-f"), # ancestor to spliced edge - ]: + if not edge.parent.spliced: depflag |= dt.BUILD assert edge.depflag == depflag @@ -1150,21 +1137,17 @@ class TestSpecSemantics: "pkg-a color=red ^pkg-b color=red ^pkg-c color=blue ^pkg-d color=blue " "^pkg-e color=blue ^pkg-f color=blue ^pkg-g@3 color=blue" ) - assert set(spliced.dependencies(deptype=dt.BUILD)) == set( - a_red.dependencies(deptype=dt.BUILD) - ) + assert set(spliced.dependencies(deptype=dt.BUILD)) == set() assert spliced.build_spec == a_red assert spliced["pkg-b"].satisfies( "pkg-b color=red ^pkg-d color=blue ^pkg-e color=blue ^pkg-g@2 color=blue" ) - assert set(spliced["pkg-b"].dependencies(deptype=dt.BUILD)) == set( - a_red["pkg-b"].dependencies(deptype=dt.BUILD) - ) + assert set(spliced["pkg-b"].dependencies(deptype=dt.BUILD)) == set() assert spliced["pkg-b"].build_spec == a_red["pkg-b"] assert set(spliced["pkg-b"].dependents()) == {spliced} - # We cannot check spliced["b"].build_spec is spliced["b"] because Spec.__getitem__ creates + # We cannot check spliced["c"].build_spec is spliced["c"] because Spec.__getitem__ creates # a new wrapper object on each invocation. So we select once and check on that object # For the rest of the unchanged specs we will just check the s._build_spec is None. 
c = spliced["pkg-c"] @@ -1211,17 +1194,7 @@ class TestSpecSemantics: # traverse_edges creates a synthetic edge with no deptypes to the root if edge.depflag: depflag = dt.LINK - if (edge.parent.name, edge.spec.name) not in [ - ("pkg-a", "pkg-c"), # These are the spliced edges - ("pkg-a", "pkg-g"), - ("pkg-b", "pkg-d"), - ("pkg-b", "pkg-e"), - ("pkg-b", "pkg-g"), - ( - "pkg-a", - "pkg-b", - ), # This edge not spliced, but b was spliced invalidating edge - ]: + if not edge.parent.spliced: depflag |= dt.BUILD assert edge.depflag == depflag @@ -1365,10 +1338,10 @@ class TestSpecSemantics: @pytest.mark.parametrize("transitive", [True, False]) def test_splice_swap_names_mismatch_virtuals(self, default_mock_concretization, transitive): - spec = default_mock_concretization("splice-t") - dep = default_mock_concretization("splice-vh+foo") + vt = default_mock_concretization("splice-vt") + vh = default_mock_concretization("splice-vh+foo") with pytest.raises(spack.spec.SpliceError, match="virtual"): - spec.splice(dep, transitive) + vt.splice(vh, transitive) def test_spec_override(self): init_spec = Spec("pkg-a foo=baz foobar=baz cflags=-O3 cxxflags=-O1") |
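The `pkg-g@2` versus `pkg-g@3` assertions in the spec_semantics changes above encode the core splice rule: a transitive splice brings the replacement's own dependency closure along (so the spliced graph sees the replacement's `pkg-g@3`), while an intransitive splice swaps in only the replacement's root node and re-points it at the original graph's dependencies (keeping `pkg-g@2`). A toy sketch of that rule over flat name-to-version maps, purely illustrative and not Spack's real `Spec.splice` implementation (which operates on full DAGs with deptypes):

```python
# Toy model of transitive vs. intransitive splicing semantics.
# Specs are flattened to {package name: version} maps; this is a
# hypothetical illustration, not Spack's Spec.splice.

def splice(original, replacement_root, replacement, transitive):
    """Return the package->version map after splicing `replacement` into `original`.

    The replacement's root node always wins. For dependencies shared by
    both graphs, a transitive splice takes the replacement's versions,
    while an intransitive splice keeps the original graph's versions.
    """
    if transitive:
        # Replacement brings its whole dependency closure along.
        return {**original, **replacement}
    # Intransitive: only the replacement's root is swapped in; its
    # dependencies are re-pointed at the original graph's nodes.
    return {**original, replacement_root: replacement[replacement_root]}


# Mirrors the test scenario: the original graph carries pkg-g@2, the
# replacement pkg-c subtree carries pkg-g@3.
a_red = {"pkg-a": "1", "pkg-c": "1", "pkg-g": "2"}
c_blue = {"pkg-c": "2", "pkg-g": "3"}

transitive_result = splice(a_red, "pkg-c", c_blue, transitive=True)
intransitive_result = splice(a_red, "pkg-c", c_blue, transitive=False)
```

Under this sketch `transitive_result["pkg-g"]` is `"3"` while `intransitive_result["pkg-g"]` stays `"2"`, matching the satisfies-string assertions in the diff.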