author    Harmen Stoppels <me@harmenstoppels.nl>    2023-12-20 11:31:41 +0100
committer GitHub <noreply@github.com>               2023-12-20 11:31:41 +0100
commit    16e27ba4a6552f312b8982be0c32c390f4f8479a (patch)
tree      ed44c7ae6da0ef84a1bc9bdcc15198781b1d30c9 /lib
parent    2fda288cc56e6df5d036419577358dfe17d270d1 (diff)
`spack buildcache push --tag`: create container image with multiple roots (#41077)
This PR adds a flag `--tag/-t` to `buildcache push`, which you can use like
```
$ spack mirror add my-oci-registry oci://example.com/hello/world
$ spack -e my_env buildcache push --base-image ubuntu:22.04 --tag my_custom_tag my-oci-registry
```
and lets users ship a fully installed environment as a minimal container image, where each image layer is one Spack package on top of a base image of choice. The image can then be run as
```
$ docker run -it --rm example.com/hello/world:my_custom_tag
```
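For reference, the image name used by `docker run` above is derived from the `oci://` mirror URL plus the `--tag` value: the registry domain and repository come from the mirror, and the tag is appended. A minimal sketch of that mapping (the helper name is hypothetical, not Spack's actual API):

```python
from urllib.parse import urlparse

def oci_mirror_to_image(url: str, tag: str) -> str:
    # Hypothetical helper, for illustration only: split the mirror URL
    # into registry domain and repository name, then append the tag.
    parts = urlparse(url)
    if parts.scheme != "oci":
        raise ValueError(f"not an OCI mirror URL: {url}")
    domain = parts.netloc          # e.g. "example.com"
    name = parts.path.lstrip("/")  # e.g. "hello/world"
    return f"{domain}/{name}:{tag}"

print(oci_mirror_to_image("oci://example.com/hello/world", "my_custom_tag"))
# → example.com/hello/world:my_custom_tag
```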
Apart from environments, users can also pick arbitrary installed specs from their database, for instance:
```
$ spack buildcache push --base-image ubuntu:22.04 --tag some_specs my-oci-registry gcc@12 cmake
$ docker run -it --rm example.com/hello/world:some_specs
```
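In both cases the tagged image contains the root specs plus their transitive runtime dependencies, deduplicated across roots. A toy sketch of that closure computation (illustrative only; Spack uses its own `traverse` module with `deptype=("link", "run")`):

```python
def runtime_closure(roots, deps):
    # Collect roots and their transitive runtime deps, deduplicated,
    # in topological order (dependencies before dependents).
    seen, order = set(), []

    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for dep in deps.get(node, ()):
            visit(dep)
        order.append(node)

    for root in roots:
        visit(root)
    return order

# Toy dependency data for two roots that share a dependency:
deps = {"gcc": ["gmp", "mpfr"], "mpfr": ["gmp"], "cmake": ["zlib"]}
print(runtime_closure(["gcc", "cmake"], deps))
# → ['gmp', 'mpfr', 'gcc', 'zlib', 'cmake']
```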
It has many advantages over `spack containerize`:
1. No external tools required (`docker`, `buildah`, ...)
2. Creates images from locally installed Spack packages (no need to rebuild inside `docker build`, where troubleshooting build failures is notoriously hard)
3. No need for multistage builds (Spack just tarballs existing installations of runtime deps)
4. Reduced storage size / composability: when pushing multiple environments with common specs, container image layers are shared.
5. Automatic build cache: a later `spack install` of the same environment elsewhere is faster, since the containerized environment doubles as a build cache
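Layer sharing (advantage 4) falls out of OCI registries being content-addressed: a layer is identified by the sha256 digest of its bytes, so the same package tarball pushed from two environments resolves to a single blob. A toy illustration of that deduplication (not Spack code):

```python
import hashlib

def digest(blob: bytes) -> str:
    # Content-address a layer the way OCI registries do.
    return "sha256:" + hashlib.sha256(blob).hexdigest()

zlib_layer = b"tarball of zlib"  # a dependency common to both environments
env_a = [digest(zlib_layer), digest(b"tarball of pkg-a")]
env_b = [digest(zlib_layer), digest(b"tarball of pkg-b")]

# The registry stores the shared layer once; both manifests reference it.
shared = set(env_a) & set(env_b)
print(len(shared))  # → 1
```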
Diffstat (limited to 'lib')
 lib/spack/docs/containers.rst                | 151
 lib/spack/spack/cmd/buildcache.py            | 198
 lib/spack/spack/test/conftest.py             |  23
 lib/spack/spack/test/oci/integration_test.py |  43
4 files changed, 294 insertions, 121 deletions
diff --git a/lib/spack/docs/containers.rst b/lib/spack/docs/containers.rst index acf48e3eae..0033a081d3 100644 --- a/lib/spack/docs/containers.rst +++ b/lib/spack/docs/containers.rst @@ -9,34 +9,95 @@ Container Images ================ -Spack :ref:`environments` are a great tool to create container images, but -preparing one that is suitable for production requires some more boilerplate -than just: +Spack :ref:`environments` can easily be turned into container images. This page +outlines two ways in which this can be done: -.. code-block:: docker +1. By installing the environment on the host system, and copying the installations + into the container image. This approach does not require any tools like Docker + or Singularity to be installed. +2. By generating a Docker or Singularity recipe that can be used to build the + container image. In this approach, Spack builds the software inside the + container runtime, not on the host system. - COPY spack.yaml /environment - RUN spack -e /environment install +The first approach is easiest if you already have an installed environment; +the second gives more control over the container image. + +--------------------------- +From existing installations +--------------------------- + +If you already have a Spack environment installed on your system, you can +share the binaries as an OCI-compatible container image. To get started you +just have to configure an OCI registry and run ``spack buildcache push``. .. code-block:: console  # Create and install an environment in the current directory  spack env create -d .  spack -e . add pkg-a pkg-b  spack -e . install -Additional actions may be needed to minimize the size of the -container, or to update the system software that is installed in the base -image, or to set up a proper entrypoint to run the image. These tasks are -usually both necessary and repetitive, so Spack comes with a command -to generate recipes for container images starting from a ``spack.yaml``.
+ # Configure the registry + spack -e . mirror add --oci-username ... --oci-password ... container-registry oci://example.com/name/image -.. seealso:: + # Push the image + spack -e . buildcache push --base-image ubuntu:22.04 --tag my_env container-registry + +The resulting container image can then be run as follows: + +.. code-block:: console + + $ docker run -it example.com/name/image:my_env + +The image generated by Spack consists of the specified base image with each package from the +environment as a separate layer on top. The image is minimal by construction, it only contains the +environment roots and its runtime dependencies. + +.. note:: - This page is a reference for generating recipes to build container images. - It means that your environment is built from scratch inside the container - runtime. + When using registries like GHCR and Docker Hub, the ``--oci-password`` flag is not + the password for your account, but a personal access token you need to generate separately. + +The specified ``--base-image`` should have a libc that is compatible with the host system. +For example if your host system is Ubuntu 20.04, you can use ``ubuntu:20.04``, ``ubuntu:22.04`` +or newer: the libc in the container image must be at least the version of the host system, +assuming ABI compatibility. It is also perfectly fine to use a completely different +Linux distribution as long as the libc is compatible. + +For convenience, Spack also turns the OCI registry into a :ref:`build cache <binary_caches_oci>`, +so that future ``spack install`` of the environment will simply pull the binaries from the +registry instead of doing source builds. + +.. note:: + + When generating container images in CI, the approach above is recommended when CI jobs + already run in a sandboxed environment. You can simply use ``spack`` directly + in the CI job and push the resulting image to a registry. 
Subsequent CI jobs should + run faster because Spack can install from the same registry instead of rebuilding from + sources. + +--------------------------------------------- +Generating recipes for Docker and Singularity +--------------------------------------------- + +Apart from copying existing installations into container images, Spack can also +generate recipes for container images. This is useful if you want to run Spack +itself in a sandboxed environment instead of on the host system. + +Since recipes need a little bit more boilerplate than + +.. code-block:: docker + + COPY spack.yaml /environment + RUN spack -e /environment install - Since v0.21, Spack can also create container images from existing package installations - on your host system. See :ref:`binary_caches_oci` for more information on - that topic. +Spack provides a command to generate customizable recipes for container images. Customizations +include minimizing the size of the image, installing packages in the base image using the system +package manager, and setting up a proper entrypoint to run the image. --------------------- +~~~~~~~~~~~~~~~~~~~~ A Quick Introduction --------------------- +~~~~~~~~~~~~~~~~~~~~ Consider having a Spack environment like the following: @@ -47,8 +108,8 @@ Consider having a Spack environment like the following: - gromacs+mpi - mpich -Producing a ``Dockerfile`` from it is as simple as moving to the directory -where the ``spack.yaml`` file is stored and giving the following command: +Producing a ``Dockerfile`` from it is as simple as changing directories to +where the ``spack.yaml`` file is stored and running the following command: .. code-block:: console @@ -114,9 +175,9 @@ configuration are discussed in details in the sections below. .. 
_container_spack_images: --------------------------- +~~~~~~~~~~~~~~~~~~~~~~~~~~ Spack Images on Docker Hub --------------------------- +~~~~~~~~~~~~~~~~~~~~~~~~~~ Docker images with Spack preinstalled and ready to be used are built when a release is tagged, or nightly on ``develop``. The images @@ -186,9 +247,9 @@ by Spack use them as default base images for their ``build`` stage, even though handles to use custom base images provided by users are available to accommodate complex use cases. ---------------------------------- -Creating Images From Environments ---------------------------------- +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Configuring the Container Recipe +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Any Spack Environment can be used for the automatic generation of container recipes. Sensible defaults are provided for things like the base image or the @@ -229,18 +290,18 @@ under the ``container`` attribute of environments: A detailed description of the options available can be found in the :ref:`container_config_options` section. -------------------- +~~~~~~~~~~~~~~~~~~~ Setting Base Images -------------------- +~~~~~~~~~~~~~~~~~~~ The ``images`` subsection is used to select both the image where Spack builds the software and the image where the built software is installed. This attribute can be set in different ways and which one to use depends on the use case at hand. -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +"""""""""""""""""""""""""""""""""""""""" Use Official Spack Images From Dockerhub -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +"""""""""""""""""""""""""""""""""""""""" To generate a recipe that uses an official Docker image from the Spack organization to build the software and the corresponding official OS image @@ -445,9 +506,9 @@ responsibility to ensure that: Therefore we don't recommend its use in cases that can be otherwise covered by the simplified mode shown first. 
----------------------------- +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Singularity Definition Files ----------------------------- +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In addition to producing recipes in ``Dockerfile`` format Spack can produce Singularity Definition Files by just changing the value of the ``format`` @@ -468,9 +529,9 @@ attribute: The minimum version of Singularity required to build a SIF (Singularity Image Format) image from the recipes generated by Spack is ``3.5.3``. ------------------------------- +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Extending the Jinja2 Templates ------------------------------- +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The Dockerfile and the Singularity definition file that Spack can generate are based on a few Jinja2 templates that are rendered according to the environment being containerized. @@ -591,9 +652,9 @@ The recipe that gets generated contains the two extra instruction that we added .. _container_config_options: ------------------------ +~~~~~~~~~~~~~~~~~~~~~~~ Configuration Reference ------------------------ +~~~~~~~~~~~~~~~~~~~~~~~ The tables below describe all the configuration options that are currently supported to customize the generation of container recipes: @@ -690,13 +751,13 @@ to customize the generation of container recipes: - Description string - No --------------- +~~~~~~~~~~~~~~ Best Practices --------------- +~~~~~~~~~~~~~~ -^^^ +""" MPI -^^^ +""" Due to the dependency on Fortran for OpenMPI, which is the spack default implementation, consider adding ``gfortran`` to the ``apt-get install`` list. @@ -707,9 +768,9 @@ For execution on HPC clusters, it can be helpful to import the docker image into Singularity in order to start a program with an *external* MPI. Otherwise, also add ``openssh-server`` to the ``apt-get install`` list. -^^^^ +"""" CUDA -^^^^ +"""" Starting from CUDA 9.0, Nvidia provides minimal CUDA images based on Ubuntu. Please see `their instructions <https://hub.docker.com/r/nvidia/cuda/>`_. 
Avoid double-installing CUDA by adding, e.g. @@ -728,9 +789,9 @@ to your ``spack.yaml``. Users will either need ``nvidia-docker`` or e.g. Singularity to *execute* device kernels. -^^^^^^^^^^^^^^^^^^^^^^^^^ +""""""""""""""""""""""""" Docker on Windows and OSX -^^^^^^^^^^^^^^^^^^^^^^^^^ +""""""""""""""""""""""""" On Mac OS and Windows, docker runs on a hypervisor that is not allocated much memory by default, and some spack packages may fail to build due to lack of diff --git a/lib/spack/spack/cmd/buildcache.py b/lib/spack/spack/cmd/buildcache.py index b591636ba2..75011a3e9b 100644 --- a/lib/spack/spack/cmd/buildcache.py +++ b/lib/spack/spack/cmd/buildcache.py @@ -37,6 +37,7 @@ import spack.user_environment import spack.util.crypto import spack.util.url as url_util import spack.util.web as web_util +from spack import traverse from spack.build_environment import determine_number_of_jobs from spack.cmd import display_specs from spack.cmd.common import arguments @@ -122,7 +123,14 @@ def setup_parser(subparser: argparse.ArgumentParser): help="stop pushing on first failure (default is best effort)", ) push.add_argument( - "--base-image", default=None, help="specify the base image for the buildcache. 
" + "--base-image", default=None, help="specify the base image for the buildcache" + ) + push.add_argument( + "--tag", + "-t", + default=None, + help="when pushing to an OCI registry, tag an image containing all root specs and their " + "runtime dependencies", ) arguments.add_common_arguments(push, ["specs", "jobs"]) push.set_defaults(func=push_fn) @@ -331,9 +339,9 @@ def push_fn(args): ) if args.specs or args.spec_file: - specs = _matching_specs(spack.cmd.parse_specs(args.specs or args.spec_file)) + roots = _matching_specs(spack.cmd.parse_specs(args.specs or args.spec_file)) else: - specs = spack.cmd.require_active_env("buildcache push").all_specs() + roots = spack.cmd.require_active_env(cmd_name="buildcache push").concrete_roots() if args.allow_root: tty.warn( @@ -344,9 +352,9 @@ def push_fn(args): # Check if this is an OCI image. try: - image_ref = spack.oci.oci.image_from_mirror(mirror) + target_image = spack.oci.oci.image_from_mirror(mirror) except ValueError: - image_ref = None + target_image = None push_url = mirror.push_url @@ -357,7 +365,7 @@ def push_fn(args): unsigned = not (args.key or args.signed) # For OCI images, we require dependencies to be pushed for now. - if image_ref: + if target_image: if "dependencies" not in args.things_to_install: tty.die("Dependencies must be pushed for OCI images.") if not unsigned: @@ -368,7 +376,7 @@ def push_fn(args): # This is a list of installed, non-external specs. specs = bindist.specs_to_be_packaged( - specs, + roots, root="package" in args.things_to_install, dependencies="dependencies" in args.things_to_install, ) @@ -381,11 +389,35 @@ def push_fn(args): failed = [] # TODO: unify this logic in the future. 
- if image_ref: + if target_image: + base_image = ImageReference.from_string(args.base_image) if args.base_image else None with tempfile.TemporaryDirectory( dir=spack.stage.get_stage_root() ) as tmpdir, _make_pool() as pool: - skipped = _push_oci(args, image_ref, specs, tmpdir, pool) + skipped, base_images, checksums = _push_oci( + target_image=target_image, + base_image=base_image, + installed_specs_with_deps=specs, + force=args.force, + tmpdir=tmpdir, + pool=pool, + ) + + # Apart from creating manifests for each individual spec, we allow users to create a + # separate image tag for all root specs and their runtime dependencies. + if args.tag: + tagged_image = target_image.with_tag(args.tag) + # _push_oci may not populate base_images if binaries were already in the registry + for spec in roots: + _update_base_images( + base_image=base_image, + target_image=target_image, + spec=spec, + base_image_cache=base_images, + ) + _put_manifest(base_images, checksums, tagged_image, tmpdir, None, None, *roots) + tty.info(f"Tagged {tagged_image}") + else: skipped = [] @@ -446,11 +478,11 @@ def push_fn(args): # Update the index if requested # TODO: remove update index logic out of bindist; should be once after all specs are pushed # not once per spec. 
- if image_ref and len(skipped) < len(specs) and args.update_index: + if target_image and len(skipped) < len(specs) and args.update_index: with tempfile.TemporaryDirectory( dir=spack.stage.get_stage_root() ) as tmpdir, _make_pool() as pool: - _update_index_oci(image_ref, tmpdir, pool) + _update_index_oci(target_image, tmpdir, pool) def _get_spack_binary_blob(image_ref: ImageReference) -> Optional[spack.oci.oci.Blob]: @@ -516,17 +548,21 @@ def _archspec_to_gooarch(spec: spack.spec.Spec) -> str: def _put_manifest( base_images: Dict[str, Tuple[dict, dict]], checksums: Dict[str, spack.oci.oci.Blob], - spec: spack.spec.Spec, image_ref: ImageReference, tmpdir: str, + extra_config: Optional[dict], + annotations: Optional[dict], + *specs: spack.spec.Spec, ): - architecture = _archspec_to_gooarch(spec) + architecture = _archspec_to_gooarch(specs[0]) dependencies = list( reversed( list( s - for s in spec.traverse(order="topo", deptype=("link", "run"), root=True) + for s in traverse.traverse_nodes( + specs, order="topo", deptype=("link", "run"), root=True + ) if not s.external ) ) @@ -535,7 +571,7 @@ def _put_manifest( base_manifest, base_config = base_images[architecture] env = _retrieve_env_dict_from_config(base_config) - spack.user_environment.environment_modifications_for_specs(spec).apply_modifications(env) + spack.user_environment.environment_modifications_for_specs(*specs).apply_modifications(env) # Create an oci.image.config file config = copy.deepcopy(base_config) @@ -547,20 +583,14 @@ def _put_manifest( # Set the environment variables config["config"]["Env"] = [f"{k}={v}" for k, v in env.items()] - # From the OCI v1.0 spec: - # > Any extra fields in the Image JSON struct are considered implementation - # > specific and MUST be ignored by any implementations which are unable to - # > interpret them. - # We use this to store the Spack spec, so we can use it to create an index. 
- spec_dict = spec.to_dict(hash=ht.dag_hash) - spec_dict["buildcache_layout_version"] = 1 - spec_dict["binary_cache_checksum"] = { - "hash_algorithm": "sha256", - "hash": checksums[spec.dag_hash()].compressed_digest.digest, - } - config.update(spec_dict) + if extra_config: + # From the OCI v1.0 spec: + # > Any extra fields in the Image JSON struct are considered implementation + # > specific and MUST be ignored by any implementations which are unable to + # > interpret them. + config.update(extra_config) - config_file = os.path.join(tmpdir, f"{spec.dag_hash()}.config.json") + config_file = os.path.join(tmpdir, f"{specs[0].dag_hash()}.config.json") with open(config_file, "w") as f: json.dump(config, f, separators=(",", ":")) @@ -591,48 +621,69 @@ def _put_manifest( for s in dependencies ), ], - "annotations": {"org.opencontainers.image.description": spec.format()}, } - image_ref_for_spec = image_ref.with_tag(default_tag(spec)) + if annotations: + oci_manifest["annotations"] = annotations # Finally upload the manifest - upload_manifest_with_retry(image_ref_for_spec, oci_manifest=oci_manifest) + upload_manifest_with_retry(image_ref, oci_manifest=oci_manifest) # delete the config file os.unlink(config_file) - return image_ref_for_spec + +def _update_base_images( + *, + base_image: Optional[ImageReference], + target_image: ImageReference, + spec: spack.spec.Spec, + base_image_cache: Dict[str, Tuple[dict, dict]], +): + """For a given spec and base image, copy the missing layers of the base image with matching + arch to the registry of the target image. 
If no base image is specified, create a dummy + manifest and config file.""" + architecture = _archspec_to_gooarch(spec) + if architecture in base_image_cache: + return + if base_image is None: + base_image_cache[architecture] = ( + default_manifest(), + default_config(architecture, "linux"), + ) + else: + base_image_cache[architecture] = copy_missing_layers_with_retry( + base_image, target_image, architecture + ) def _push_oci( - args, - image_ref: ImageReference, + *, + target_image: ImageReference, + base_image: Optional[ImageReference], installed_specs_with_deps: List[Spec], tmpdir: str, pool: multiprocessing.pool.Pool, -) -> List[str]: + force: bool = False, +) -> Tuple[List[str], Dict[str, Tuple[dict, dict]], Dict[str, spack.oci.oci.Blob]]: """Push specs to an OCI registry Args: - args: The command line arguments. - image_ref: The image reference. + image_ref: The target OCI image + base_image: Optional base image, which will be copied to the target registry. installed_specs_with_deps: The installed specs to push, excluding externals, including deps, ordered from roots to leaves. + force: Whether to overwrite existing layers and manifests in the buildcache. Returns: - List[str]: The list of skipped specs (already in the buildcache). + A tuple consisting of the list of skipped specs already in the build cache, + a dictionary mapping architectures to base image manifests and configs, + and a dictionary mapping each spec's dag hash to a blob. """ # Reverse the order installed_specs_with_deps = list(reversed(installed_specs_with_deps)) - # The base image to use for the package. When not set, we use - # the OCI registry only for storage, and do not use any base image. 
- base_image_ref: Optional[ImageReference] = ( - ImageReference.from_string(args.base_image) if args.base_image else None - ) - # Spec dag hash -> blob checksums: Dict[str, spack.oci.oci.Blob] = {} @@ -642,11 +693,11 @@ def _push_oci( # Specs not uploaded because they already exist skipped = [] - if not args.force: + if not force: tty.info("Checking for existing specs in the buildcache") to_be_uploaded = [] - tags_to_check = (image_ref.with_tag(default_tag(s)) for s in installed_specs_with_deps) + tags_to_check = (target_image.with_tag(default_tag(s)) for s in installed_specs_with_deps) available_blobs = pool.map(_get_spack_binary_blob, tags_to_check) for spec, maybe_blob in zip(installed_specs_with_deps, available_blobs): @@ -659,46 +710,63 @@ def _push_oci( to_be_uploaded = installed_specs_with_deps if not to_be_uploaded: - return skipped + return skipped, base_images, checksums tty.info( - f"{len(to_be_uploaded)} specs need to be pushed to {image_ref.domain}/{image_ref.name}" + f"{len(to_be_uploaded)} specs need to be pushed to " + f"{target_image.domain}/{target_image.name}" ) # Upload blobs new_blobs = pool.starmap( - _push_single_spack_binary_blob, ((image_ref, spec, tmpdir) for spec in to_be_uploaded) + _push_single_spack_binary_blob, ((target_image, spec, tmpdir) for spec in to_be_uploaded) ) # And update the spec to blob mapping for spec, blob in zip(to_be_uploaded, new_blobs): checksums[spec.dag_hash()] = blob - # Copy base image layers, probably fine to do sequentially. 
+ # Copy base images if necessary for spec in to_be_uploaded: - architecture = _archspec_to_gooarch(spec) - # Get base image details, if we don't have them yet - if architecture in base_images: - continue - if base_image_ref is None: - base_images[architecture] = (default_manifest(), default_config(architecture, "linux")) - else: - base_images[architecture] = copy_missing_layers_with_retry( - base_image_ref, image_ref, architecture - ) + _update_base_images( + base_image=base_image, + target_image=target_image, + spec=spec, + base_image_cache=base_images, + ) + + def extra_config(spec: Spec): + spec_dict = spec.to_dict(hash=ht.dag_hash) + spec_dict["buildcache_layout_version"] = 1 + spec_dict["binary_cache_checksum"] = { + "hash_algorithm": "sha256", + "hash": checksums[spec.dag_hash()].compressed_digest.digest, + } + return spec_dict # Upload manifests tty.info("Uploading manifests") - pushed_image_ref = pool.starmap( + pool.starmap( _put_manifest, - ((base_images, checksums, spec, image_ref, tmpdir) for spec in to_be_uploaded), + ( + ( + base_images, + checksums, + target_image.with_tag(default_tag(spec)), + tmpdir, + extra_config(spec), + {"org.opencontainers.image.description": spec.format()}, + spec, + ) + for spec in to_be_uploaded + ), ) # Print the image names of the top-level specs - for spec, ref in zip(to_be_uploaded, pushed_image_ref): - tty.info(f"Pushed {_format_spec(spec)} to {ref}") + for spec in to_be_uploaded: + tty.info(f"Pushed {_format_spec(spec)} to {target_image.with_tag(default_tag(spec))}") - return skipped + return skipped, base_images, checksums def _config_from_tag(image_ref: ImageReference, tag: str) -> Optional[dict]: diff --git a/lib/spack/spack/test/conftest.py b/lib/spack/spack/test/conftest.py index 9d3ef7652d..785d986018 100644 --- a/lib/spack/spack/test/conftest.py +++ b/lib/spack/spack/test/conftest.py @@ -1949,21 +1949,22 @@ def pytest_runtest_setup(item): pytest.skip(*not_on_windows_marker.args) 
-@pytest.fixture(scope="function") -def disable_parallel_buildcache_push(monkeypatch): - class MockPool: - def map(self, func, args): - return [func(a) for a in args] +class MockPool: + def map(self, func, args): + return [func(a) for a in args] - def starmap(self, func, args): - return [func(*a) for a in args] + def starmap(self, func, args): + return [func(*a) for a in args] - def __enter__(self): - return self + def __enter__(self): + return self - def __exit__(self, *args): - pass + def __exit__(self, *args): + pass + +@pytest.fixture(scope="function") +def disable_parallel_buildcache_push(monkeypatch): monkeypatch.setattr(spack.cmd.buildcache, "_make_pool", MockPool) diff --git a/lib/spack/spack/test/oci/integration_test.py b/lib/spack/spack/test/oci/integration_test.py index b2f9366c3a..5e11132525 100644 --- a/lib/spack/spack/test/oci/integration_test.py +++ b/lib/spack/spack/test/oci/integration_test.py @@ -11,6 +11,7 @@ import json import os from contextlib import contextmanager +import spack.environment as ev import spack.oci.opener from spack.binary_distribution import gzip_compressed_tarfile from spack.main import SpackCommand @@ -20,6 +21,8 @@ from spack.test.oci.mock_registry import DummyServer, InMemoryOCIRegistry, creat buildcache = SpackCommand("buildcache") mirror = SpackCommand("mirror") +env = SpackCommand("env") +install = SpackCommand("install") @contextmanager @@ -53,6 +56,46 @@ def test_buildcache_push_command(mutable_database, disable_parallel_buildcache_p assert os.path.exists(os.path.join(spec.prefix, "bin", "mpileaks")) +def test_buildcache_tag( + install_mockery, mock_fetch, mutable_mock_env_path, disable_parallel_buildcache_push +): + """Tests whether we can create an OCI image from a full environment with multiple roots.""" + env("create", "test") + with ev.read("test"): + install("--add", "libelf") + install("--add", "trivial-install-test-package") + + registry = InMemoryOCIRegistry("example.com") + + with oci_servers(registry): + 
mirror("add", "oci-test", "oci://example.com/image") + + with ev.read("test"): + buildcache("push", "--tag", "full_env", "oci-test") + + name = ImageReference.from_string("example.com/image:full_env") + + with ev.read("test") as e: + specs = e.all_specs() + + manifest, config = get_manifest_and_config(name) + + # without a base image, we should have one layer per spec + assert len(manifest["layers"]) == len(specs) + + # Now create yet another tag, but with just a single selected spec as root. This should + # also test the case where Spack doesn't have to upload any binaries, it just has to create + # a new tag. + libelf = next(s for s in specs if s.name == "libelf") + with ev.read("test"): + # Get libelf spec + buildcache("push", "--tag", "single_spec", "oci-test", libelf.format("libelf{/hash}")) + + name = ImageReference.from_string("example.com/image:single_spec") + manifest, config = get_manifest_and_config(name) + assert len(manifest["layers"]) == 1 + + def test_buildcache_push_with_base_image_command( mutable_database, tmpdir, disable_parallel_buildcache_push ): |
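The `test_buildcache_tag` test above asserts that, without a base image, the tagged manifest contains exactly one layer per spec in the environment. A rough sketch of the manifest shape being checked (field values are placeholders; the exact metadata Spack writes may differ):

```python
import hashlib

def placeholder_layer(spec_name: str) -> dict:
    # Placeholder layer entry: real digests are computed from the
    # gzip-compressed tarball of the package's install prefix.
    return {
        "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
        "digest": "sha256:" + hashlib.sha256(spec_name.encode()).hexdigest(),
        "size": 0,
    }

specs = ["libelf", "trivial-install-test-package"]
manifest = {
    "schemaVersion": 2,
    "mediaType": "application/vnd.oci.image.manifest.v1+json",
    "layers": [placeholder_layer(s) for s in specs],
}

# Without a base image, there is one layer per spec:
print(len(manifest["layers"]))  # → 2
```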