author    kwryankrattiger <80296582+kwryankrattiger@users.noreply.github.com>  2023-03-10 13:25:35 -0600
committer GitHub <noreply@github.com>  2023-03-10 12:25:35 -0700
commit    f3595da600cb79c16bf706320cbc6047239d4605 (patch)
tree      a8acd6618a97b931d3e8dfa6eb00f0cf75af8824 /lib
parent    16c67ff9b4b94c9ae48c60638c94ec0610e6a082 (diff)
CI boilerplate reduction (#34272)
* CI configuration boilerplate reduction and refactor

  Configuration:
  - New notation for list concatenation (prepend/append)
  - New notation for string concatenation (prepend/append)
  - Break out configuration files for: ci.yaml, cdash.yaml, view.yaml
  - Spack CI section refactored to improve self-consistency and composability
  - Scripts are now lists of lists and/or lists of strings
  - Job attributes are now listed under a precedence-ordered list that is composed/merged using Spack config merge rules
  - "service-jobs" are identified explicitly rather than as a batch

  CI:
  - Consolidate common, platform, and architecture configurations for all CI stacks into composable configuration files
  - Make padding consistent across all stacks (256)
  - Merge all package -> runner mappings to be consistent across all stacks

  Unit Test:
  - Refactor CI module unit tests for the refactored configuration

  Docs:
  - Add docs for new notations in configuration.rst
  - Rewrite docs on CI pipelines to be consistent with refactored CI workflow

* Script verbose environ, dev bootstrap

* Port #35409
Diffstat (limited to 'lib')
-rw-r--r--  lib/spack/docs/configuration.rst            |  49
-rw-r--r--  lib/spack/docs/pipelines.rst                | 588
-rw-r--r--  lib/spack/spack/ci.py                       | 449
-rw-r--r--  lib/spack/spack/cmd/ci.py                   |  42
-rw-r--r--  lib/spack/spack/config.py                   | 118
-rw-r--r--  lib/spack/spack/environment/environment.py  |   1
-rw-r--r--  lib/spack/spack/schema/cdash.py             |   3
-rw-r--r--  lib/spack/spack/schema/ci.py                | 181
-rw-r--r--  lib/spack/spack/schema/merged.py            |   4
-rw-r--r--  lib/spack/spack/test/cmd/ci.py              | 389
10 files changed, 1212 insertions, 612 deletions
diff --git a/lib/spack/docs/configuration.rst b/lib/spack/docs/configuration.rst
index 563351dfae..c6355d8373 100644
--- a/lib/spack/docs/configuration.rst
+++ b/lib/spack/docs/configuration.rst
@@ -227,6 +227,9 @@ You can get the name to use for ``<platform>`` by running ``spack arch
--platform``. The system config scope has a ``<platform>`` section for
sites at which ``/etc`` is mounted on multiple heterogeneous machines.
+
+.. _config-scope-precedence:
+
----------------
Scope Precedence
----------------
@@ -239,6 +242,11 @@ lower-precedence settings. Completely ignoring higher-level configuration
options is supported with the ``::`` notation for keys (see
:ref:`config-overrides` below).
+There are also special notations for string concatenation and precedence override.
+The ``+:`` notation can be used to force *prepending* strings or lists. For lists, this is identical
+to the default behavior. The ``-:`` notation works similarly, but forces *appending* values.
+See :ref:`config-prepend-append` for details.
+
^^^^^^^^^^^
Simple keys
^^^^^^^^^^^
@@ -279,6 +287,47 @@ command:
- ~/.spack/stage
+.. _config-prepend-append:
+
+^^^^^^^^^^^^^^^^^^^^
+String Concatenation
+^^^^^^^^^^^^^^^^^^^^
+
+Above, the user ``config.yaml`` *completely* overrides specific settings in the
+default ``config.yaml``. Sometimes, it is useful to add a suffix/prefix
+to a path or name. To do this, you can use the ``-:`` notation for *append*
+string concatenation at the end of a key in a configuration file. For example:
+
+.. code-block:: yaml
+ :emphasize-lines: 1
+ :caption: ~/.spack/config.yaml
+
+ config:
+ install_tree-: /my/custom/suffix/
+
+Spack will then append the given value to the ``install_tree`` setting from the
+lower-precedence configuration:
+
+.. code-block:: console
+
+ $ spack config get config
+ config:
+ install_tree: /some/other/directory/my/custom/suffix
+ build_stage:
+ - $tempdir/$user/spack-stage
+ - ~/.spack/stage
+
+
+Similarly, ``+:`` can be used to *prepend* to a path or name:
+
+.. code-block:: yaml
+ :emphasize-lines: 1
+ :caption: ~/.spack/config.yaml
+
+ config:
+    install_tree+: /my/custom/prefix/
+
+
.. _config-overrides:
^^^^^^^^^^^^^^^^^^^^^^^^^^
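For completeness, a minimal sketch of what the prepend example above would yield, assuming the
same lower-precedence ``install_tree`` (``/some/other/directory``) as in the append example; the
value shown is only illustrative of the ordering, since the exact result follows plain string
concatenation:

.. code-block:: yaml

   config:
     install_tree: /my/custom/prefix/some/other/directory
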
diff --git a/lib/spack/docs/pipelines.rst b/lib/spack/docs/pipelines.rst
index 699fca2d1e..5d06accc4a 100644
--- a/lib/spack/docs/pipelines.rst
+++ b/lib/spack/docs/pipelines.rst
@@ -9,27 +9,32 @@
CI Pipelines
============
-Spack provides commands that support generating and running automated build
-pipelines designed for Gitlab CI. At the highest level it works like this:
-provide a spack environment describing the set of packages you care about,
-and include within that environment file a description of how those packages
-should be mapped to Gitlab runners. Spack can then generate a ``.gitlab-ci.yml``
-file containing job descriptions for all your packages that can be run by a
-properly configured Gitlab CI instance. When run, the generated pipeline will
-build and deploy binaries, and it can optionally report to a CDash instance
+Spack provides commands that support generating and running automated build pipelines in CI instances. At the highest
+level it works like this: provide a spack environment describing the set of packages you care about, and include a
+description of how those packages should be mapped to Gitlab runners. Spack can then generate a ``.gitlab-ci.yml``
+file containing job descriptions for all your packages that can be run by a properly configured CI instance. When
+run, the generated pipeline will build and deploy binaries, and it can optionally report to a CDash instance
regarding the health of the builds as they evolve over time.
------------------------------
Getting started with pipelines
------------------------------
-It is fairly straightforward to get started with automated build pipelines. At
-a minimum, you'll need to set up a Gitlab instance (more about Gitlab CI
-`here <https://about.gitlab.com/product/continuous-integration/>`_) and configure
-at least one `runner <https://docs.gitlab.com/runner/>`_. Then the basic steps
-for setting up a build pipeline are as follows:
+To get started with automated build pipelines, you need a Gitlab instance with version ``>= 12.9``
+(more about Gitlab CI `here <https://about.gitlab.com/product/continuous-integration/>`_)
+and at least one `runner <https://docs.gitlab.com/runner/>`_ configured. This
+can be done quickly by setting up a local Gitlab instance.
-#. Create a repository on your gitlab instance
+It is possible to set up pipelines on gitlab.com, but the builds there are limited to
+60 minutes and generic hardware. It is possible to
+`hook up <https://about.gitlab.com/blog/2018/04/24/getting-started-gitlab-ci-gcp>`_
+Gitlab to Google Kubernetes Engine (`GKE <https://cloud.google.com/kubernetes-engine/>`_)
+or Amazon Elastic Kubernetes Service (`EKS <https://aws.amazon.com/eks>`_), though those
+topics are outside the scope of this document.
+
+After setting up a Gitlab instance for running CI, the basic steps for setting up a build pipeline are as follows:
+
+#. Create a repository in the Gitlab instance with CI and a runner enabled.
#. Add a ``spack.yaml`` at the root containing your pipeline environment
#. Add a ``.gitlab-ci.yml`` at the root containing two jobs (one to generate
the pipeline dynamically, and one to run the generated jobs).
@@ -40,13 +45,6 @@ See the :ref:`functional_example` section for a minimal working example. See al
the :ref:`custom_Workflow` section for a link to an example of a custom workflow
based on spack pipelines.
-While it is possible to set up pipelines on gitlab.com, as illustrated above, the
-builds there are limited to 60 minutes and generic hardware. It is also possible to
-`hook up <https://about.gitlab.com/blog/2018/04/24/getting-started-gitlab-ci-gcp>`_
-Gitlab to Google Kubernetes Engine (`GKE <https://cloud.google.com/kubernetes-engine/>`_)
-or Amazon Elastic Kubernetes Service (`EKS <https://aws.amazon.com/eks>`_), though those
-topics are outside the scope of this document.
-
Spack's pipelines are now making use of the
`trigger <https://docs.gitlab.com/ee/ci/yaml/#trigger>`_ syntax to run
dynamically generated
@@ -132,29 +130,35 @@ And here's the spack environment built by the pipeline represented as a
mirrors: { "mirror": "s3://spack-public/mirror" }
- gitlab-ci:
- before_script:
- - git clone ${SPACK_REPO}
- - pushd spack && git checkout ${SPACK_CHECKOUT_VERSION} && popd
- - . "./spack/share/spack/setup-env.sh"
- script:
- - pushd ${SPACK_CONCRETE_ENV_DIR} && spack env activate --without-view . && popd
- - spack -d ci rebuild
- mappings:
- - match: ["os=ubuntu18.04"]
- runner-attributes:
- image:
- name: ghcr.io/scottwittenburg/ecpe4s-ubuntu18.04-runner-x86_64:2020-09-01
- entrypoint: [""]
- tags:
- - docker
+ ci:
enable-artifacts-buildcache: True
rebuild-index: False
+ pipeline-gen:
+ - any-job:
+ before_script:
+ - git clone ${SPACK_REPO}
+ - pushd spack && git checkout ${SPACK_CHECKOUT_VERSION} && popd
+ - . "./spack/share/spack/setup-env.sh"
+ - build-job:
+ tags: [docker]
+ image:
+ name: ghcr.io/scottwittenburg/ecpe4s-ubuntu18.04-runner-x86_64:2020-09-01
+ entrypoint: [""]
+
The elements of this file important to spack ci pipelines are described in more
detail below, but there are a couple of things to note about the above working
example:
+.. note::
+   There is no ``script`` attribute specified here. The reason for this is that
+   Spack CI will automatically generate reasonable default scripts. More
+   detail on what is in these scripts can be found below.
+
+   Also notice the ``before_script`` section. When using any of the default
+   scripts, a ``before_script`` that sources ``setup-env.sh`` is required so
+   that the default scripts can find the ``spack`` executable.
+
Normally ``enable-artifacts-buildcache`` is not recommended in production as it
results in large binary artifacts getting transferred back and forth between
gitlab and the runners. But in this example on gitlab.com where there is no
@@ -174,7 +178,7 @@ during subsequent pipeline runs.
With the addition of reproducible builds (#22887) a previously working
pipeline will require some changes:
- * In the build jobs (``runner-attributes``), the environment location changed.
+ * In the build-jobs, the environment location changed.
This will typically show as a ``KeyError`` in the failing job. Be sure to
point to ``${SPACK_CONCRETE_ENV_DIR}``.
@@ -196,9 +200,9 @@ ci pipelines. These commands are covered in more detail in this section.
.. _cmd-spack-ci:
-^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^
``spack ci``
-^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^
Super-command for functionality related to generating pipelines and executing
pipeline jobs.
@@ -227,7 +231,7 @@ Using ``--prune-dag`` or ``--no-prune-dag`` configures whether or not jobs are
generated for specs that are already up to date on the mirror. If enabling
DAG pruning using ``--prune-dag``, more information may be required in your
``spack.yaml`` file, see the :ref:`noop_jobs` section below regarding
-``service-job-attributes``.
+``noop-job``.
The optional ``--check-index-only`` argument can be used to speed up pipeline
generation by telling spack to consider only remote buildcache indices when
@@ -263,11 +267,11 @@ generated by jobs in the pipeline.
.. _cmd-spack-ci-rebuild:
-^^^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^^^^
``spack ci rebuild``
-^^^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^^^^
-The purpose of ``spack ci rebuild`` is straightforward: take its assigned
+The purpose of ``spack ci rebuild`` is to take an assigned
spec and ensure a binary of a successful build exists on the target mirror.
If the binary does not already exist, it is built from source and pushed
to the mirror. The associated stand-alone tests are optionally run against
@@ -280,7 +284,7 @@ directory. The script is run in a job to install the spec from source. The
resulting binary package is pushed to the mirror. If ``cdash`` is configured
for the environment, then the build results will be uploaded to the site.
-Environment variables and values in the ``gitlab-ci`` section of the
+Environment variables and values in the ``ci::pipeline-gen`` section of the
``spack.yaml`` environment file provide inputs to this process. The
two main sources of environment variables are variables written into
``.gitlab-ci.yml`` by ``spack ci generate`` and the GitLab CI runtime.
@@ -298,21 +302,23 @@ A snippet from an example ``spack.yaml`` file illustrating use of this
option *and* specification of a package with broken tests is given below.
The inclusion of a spec for building ``gptune`` is not shown here. Note
that ``--tests`` is passed to ``spack ci rebuild`` as part of the
-``gitlab-ci`` script.
+``build-job`` script.
.. code-block:: yaml
- gitlab-ci:
- script:
- - . "./share/spack/setup-env.sh"
- - spack --version
- - cd ${SPACK_CONCRETE_ENV_DIR}
- - spack env activate --without-view .
- - spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
- - mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
- - if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
- - if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
- - spack -d ci rebuild --tests > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)
+ ci:
+ pipeline-gen:
+     - build-job:
+ script:
+ - . "./share/spack/setup-env.sh"
+ - spack --version
+ - cd ${SPACK_CONCRETE_ENV_DIR}
+ - spack env activate --without-view .
+ - spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
+ - mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
+ - if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
+ - if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
+ - spack -d ci rebuild --tests > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)
broken-tests-packages:
- gptune
@@ -354,113 +360,31 @@ arguments you can pass to ``spack ci reproduce-build`` in order to reproduce
a particular build locally.
------------------------------------
-A pipeline-enabled spack environment
+Job Types
------------------------------------
-Here's an example of a spack environment file that has been enhanced with
-sections describing a build pipeline:
+^^^^^^^^^^^^^^^
+Rebuild (build)
+^^^^^^^^^^^^^^^
-.. code-block:: yaml
+Rebuild jobs, denoted as ``build-job`` entries in the ``pipeline-gen`` list, are jobs
+associated with concrete specs that have been marked for rebuild. By default a simple
+rebuild script is generated, but it may be modified as needed.
- spack:
- definitions:
- - pkgs:
- - readline@7.0
- - compilers:
- - '%gcc@5.5.0'
- - oses:
- - os=ubuntu18.04
- - os=centos7
- specs:
- - matrix:
- - [$pkgs]
- - [$compilers]
- - [$oses]
- mirrors:
- cloud_gitlab: https://mirror.spack.io
- gitlab-ci:
- mappings:
- - match:
- - os=ubuntu18.04
- runner-attributes:
- tags:
- - spack-kube
- image: spack/ubuntu-bionic
- - match:
- - os=centos7
- runner-attributes:
- tags:
- - spack-kube
- image: spack/centos7
- cdash:
- build-group: Release Testing
- url: https://cdash.spack.io
- project: Spack
- site: Spack AWS Gitlab Instance
-
-Hopefully, the ``definitions``, ``specs``, ``mirrors``, etc. sections are already
-familiar, as they are part of spack :ref:`environments`. So let's take a more
-in-depth look some of the pipeline-related sections in that environment file
-that might not be as familiar.
-
-The ``gitlab-ci`` section is used to configure how the pipeline workload should be
-generated, mainly how the jobs for building specs should be assigned to the
-configured runners on your instance. Each entry within the list of ``mappings``
-corresponds to a known gitlab runner, where the ``match`` section is used
-in assigning a release spec to one of the runners, and the ``runner-attributes``
-section is used to configure the spec/job for that particular runner.
-
-Both the top-level ``gitlab-ci`` section as well as each ``runner-attributes``
-section can also contain the following keys: ``image``, ``tags``, ``variables``,
-``before_script``, ``script``, and ``after_script``. If any of these keys are
-provided at the ``gitlab-ci`` level, they will be used as the defaults for any
-``runner-attributes``, unless they are overridden in those sections. Specifying
-any of these keys at the ``runner-attributes`` level generally overrides the
-keys specified at the higher level, with a couple exceptions. Any ``variables``
-specified at both levels result in those dictionaries getting merged in the
-resulting generated job, and any duplicate variable names get assigned the value
-provided in the specific ``runner-attributes``. If ``tags`` are specified both
-at the ``gitlab-ci`` level as well as the ``runner-attributes`` level, then the
-lists of tags are combined, and any duplicates are removed.
-
-See the section below on using a custom spack for an example of how these keys
-could be used.
-
-There are other pipeline options you can configure within the ``gitlab-ci`` section
-as well.
+The default script performs three main steps: change directories to the pipeline's concrete
+environment, activate the concrete environment, and run the ``spack ci rebuild`` command:
-The ``bootstrap`` section allows you to specify lists of specs from
-your ``definitions`` that should be staged ahead of the environment's ``specs`` (this
-section is described in more detail below). The ``enable-artifacts-buildcache`` key
-takes a boolean and determines whether the pipeline uses artifacts to store and
-pass along the buildcaches from one stage to the next (the default if you don't
-provide this option is ``False``).
+.. code-block:: bash
-The optional ``broken-specs-url`` key tells Spack to check against a list of
-specs that are known to be currently broken in ``develop``. If any such specs
-are found, the ``spack ci generate`` command will fail with an error message
-informing the user what broken specs were encountered. This allows the pipeline
-to fail early and avoid wasting compute resources attempting to build packages
-that will not succeed.
-
-The optional ``cdash`` section provides information that will be used by the
-``spack ci generate`` command (invoked by ``spack ci start``) for reporting
-to CDash. All the jobs generated from this environment will belong to a
-"build group" within CDash that can be tracked over time. As the release
-progresses, this build group may have jobs added or removed. The url, project,
-and site are used to specify the CDash instance to which build results should
-be reported.
-
-Take a look at the
-`schema <https://github.com/spack/spack/blob/develop/lib/spack/spack/schema/gitlab_ci.py>`_
-for the gitlab-ci section of the spack environment file, to see precisely what
-syntax is allowed there.
+ cd ${concrete_environment_dir}
+ spack env activate --without-view .
+ spack ci rebuild
.. _rebuild_index:
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Note about rebuilding buildcache index
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^^^^^^
+Update Index (reindex)
+^^^^^^^^^^^^^^^^^^^^^^
By default, while a pipeline job may rebuild a package, create a buildcache
entry, and push it to the mirror, it does not automatically re-generate the
@@ -475,21 +399,44 @@ not correctly reflect the mirror's contents at the end of a pipeline.
To make sure the buildcache index is up to date at the end of your pipeline,
spack generates a job to update the buildcache index of the target mirror
at the end of each pipeline by default. You can disable this behavior by
-adding ``rebuild-index: False`` inside the ``gitlab-ci`` section of your
-spack environment. Spack will assign the job any runner attributes found
-on the ``service-job-attributes``, if you have provided that in your
-``spack.yaml``.
+adding ``rebuild-index: False`` inside the ``ci`` section of your
+spack environment.
+
+Reindex jobs do not allow modifying the ``script`` attribute since it is automatically
+generated using the target mirror listed in the ``mirrors::mirror`` configuration.
+
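While the reindex ``script`` cannot be modified, other attributes of the reindex job can still be
set through ``pipeline-gen``. A minimal sketch, where the tag and image names are placeholders
rather than values taken from this commit:

.. code-block:: yaml

   ci:
     pipeline-gen:
     - reindex-job:
         tags: [service]
         image: spack/ubuntu-bionic
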
+^^^^^^^^^^^^^^^^^
+Signing (signing)
+^^^^^^^^^^^^^^^^^
+
+This job is run after all of the rebuild jobs have completed and is intended to be used
+to sign the package binaries built by a protected CI run. Signing jobs are generated
+only if a signing job ``script`` is specified and the Spack CI pipeline type is protected.
+Note that if an ``any-job`` section contains a script, this will not implicitly create a
+``signing`` job; a signing job may only exist if it is explicitly specified in the
+configuration with a ``script`` attribute. Specifying a signing job without a script
+does not create a signing job, and the job configuration attributes will be ignored.
+Signing jobs are always assigned the runner tags ``aws``, ``protected``, and ``notary``.
+
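A minimal sketch of enabling a signing job; the image name and the signing command are
hypothetical, site-specific placeholders (Spack does not provide them):

.. code-block:: yaml

   ci:
     pipeline-gen:
     - signing-job:
         image:
           name: some.image.registry/notary-image:latest
           entrypoint: ['']
         script:
         - ./sign-binaries.sh   # hypothetical site-specific signing helper
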
+^^^^^^^^^^^^^^^^^
+Cleanup (cleanup)
+^^^^^^^^^^^^^^^^^
+
+When using ``temporary-storage-url-prefix``, the cleanup job will destroy the mirror
+created for the associated Gitlab pipeline. Cleanup jobs do not allow modifying the
+script, but do expect that the spack command is in the path and require a
+``before_script`` to be specified that sources the ``setup-env.sh`` script.
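A minimal sketch of satisfying the cleanup job's requirement, assuming a spack checkout is
already present in the job's working directory:

.. code-block:: yaml

   ci:
     pipeline-gen:
     - cleanup-job:
         before_script:
         - . "./share/spack/setup-env.sh"
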
.. _noop_jobs:
-^^^^^^^^^^^^^^^^^^^^^^^
-Note about "no-op" jobs
-^^^^^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^
+No Op (noop)
+^^^^^^^^^^^^
If no specs in an environment need to be rebuilt during a given pipeline run
(meaning all are already up to date on the mirror), a single successful job
(a NO-OP) is still generated to avoid an empty pipeline (which GitLab
-considers to be an error). An optional ``service-job-attributes`` section
+considers to be an error). The ``noop-job*`` sections
can be added to your ``spack.yaml`` where you can provide ``tags`` and
``image`` or ``variables`` for the generated NO-OP job. This section also
supports providing ``before_script``, ``script``, and ``after_script``, in
@@ -499,51 +446,100 @@ Following is an example of this section added to a ``spack.yaml``:
.. code-block:: yaml
- spack:
- specs:
- - openmpi
- mirrors:
- cloud_gitlab: https://mirror.spack.io
- gitlab-ci:
- mappings:
- - match:
- - os=centos8
- runner-attributes:
- tags:
- - custom
- - tag
- image: spack/centos7
- service-job-attributes:
- tags: ['custom', 'tag']
- image:
- name: 'some.image.registry/custom-image:latest'
- entrypoint: ['/bin/bash']
- script:
- - echo "Custom message in a custom script"
+ spack:
+ ci:
+ pipeline-gen:
+ - noop-job:
+ tags: ['custom', 'tag']
+ image:
+ name: 'some.image.registry/custom-image:latest'
+ entrypoint: ['/bin/bash']
+ script::
+ - echo "Custom message in a custom script"
The example above illustrates how you can provide the attributes used to run
the NO-OP job in the case of an empty pipeline. The only field for the NO-OP
job that might be generated for you is ``script``, but that will only happen
-if you do not provide one yourself.
-
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Assignment of specs to runners
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The ``mappings`` section corresponds to a list of runners, and during assignment
-of specs to runners, the list is traversed in order looking for matches, the
-first runner that matches a release spec is assigned to build that spec. The
-``match`` section within each runner mapping section is a list of specs, and
-if any of those specs match the release spec (the ``spec.satisfies()`` method
-is used), then that runner is considered a match.
-
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Configuration of specs/jobs for a runner
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Once a runner has been chosen to build a release spec, the ``runner-attributes``
-section provides information determining details of the job in the context of
-the runner. The ``runner-attributes`` section must have a ``tags`` key, which
+if you do not provide one yourself. Notice in this example the ``script``
+uses the ``::`` notation to prescribe override behavior. Without this, the
+``echo`` command would have been prepended to the automatically generated script
+rather than replacing it.
+
+------------------------------------
+ci.yaml
+------------------------------------
+
+Here's an example of a spack configuration file describing a build pipeline:
+
+.. code-block:: yaml
+
+ ci:
+ target: gitlab
+
+     rebuild-index: True
+
+ broken-specs-url: https://broken.specs.url
+
+ broken-tests-packages:
+ - gptune
+
+ pipeline-gen:
+ - submapping:
+ - match:
+ - os=ubuntu18.04
+ build-job:
+ tags:
+ - spack-kube
+ image: spack/ubuntu-bionic
+ - match:
+ - os=centos7
+ build-job:
+ tags:
+ - spack-kube
+ image: spack/centos7
+
+ cdash:
+ build-group: Release Testing
+ url: https://cdash.spack.io
+ project: Spack
+ site: Spack AWS Gitlab Instance
+
+The ``ci`` config section is used to configure how the pipeline workload should be
+generated, mainly how the jobs for building specs should be assigned to the
+configured runners on your instance. The main section for configuring pipelines
+is ``pipeline-gen``, which is a list of job attribute sections that are merged,
+using the same rules as Spack configs (:ref:`config-scope-precedence`), from the bottom up.
+Sections are applied in this order to be consistent with how Spack orders scope precedence when merging lists.
+There are two main section types, ``<type>-job`` sections and ``submapping``
+sections.
+
+
+^^^^^^^^^^^^^^^^^^^^^^
+Job Attribute Sections
+^^^^^^^^^^^^^^^^^^^^^^
+
+Each type of job may have attributes added or removed via sections in the ``pipeline-gen``
+list. Job-type-specific attributes may be specified using the keys ``<type>-job`` to
+add attributes to all jobs of type ``<type>``, or ``<type>-job-remove`` to remove attributes
+from jobs of type ``<type>``. Each section may only contain one type of job attribute specification, i.e.
+``build-job`` and ``noop-job`` may not coexist, but ``build-job`` and ``build-job-remove`` may.
+
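For illustration, a sketch of a single ``pipeline-gen`` entry that removes one attribute while
adding others; the tag and variable names are placeholders, not values from this commit:

.. code-block:: yaml

   ci:
     pipeline-gen:
     - build-job-remove:
         tags: [docker]
       build-job:
         tags: [x86_64-runner]
         variables:
           MY_CUSTOM_VARIABLE: "1"
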
+.. note::
+   The ``*-remove`` specifications are applied before the additive attribute specifications.
+   For example, if both ``build-job`` and ``build-job-remove`` are listed in the same
+   ``pipeline-gen`` section and both name the same value, the value will still exist in the
+   merged ``build-job`` after the section is applied.
+
+All of the attributes specified are forwarded to the generated CI jobs; however, special
+treatment is applied to the attributes ``tags``, ``image``, ``variables``, ``script``,
+``before_script``, and ``after_script`` as they are components recognized explicitly by the
+Spack CI generator. For the ``tags`` attribute, Spack will remove reserved tags
+(:ref:`reserved_tags`) from all jobs specified in the config. In some cases, such as for
+``signing`` jobs, reserved tags will be added back based on the type of CI that is being run.
+
+Once a runner has been chosen to build a release spec, the ``build-job*``
+sections provide information determining details of the job in the context of
+the runner. At least one of the ``build-job*`` sections must contain a ``tags`` key, which
is a list containing at least one tag used to select the runner from among the
runners known to the gitlab instance. For Docker executor type runners, the
``image`` key is used to specify the Docker image used to build the release spec
@@ -554,7 +550,7 @@ information on to the runner that it needs to do its work (e.g. scheduler
parameters, etc.). Any ``variables`` provided here will be added, verbatim, to
each job.
-The ``runner-attributes`` section also allows users to supply custom ``script``,
+The ``build-job`` section also allows users to supply custom ``script``,
``before_script``, and ``after_script`` sections to be applied to every job
scheduled on that runner. This allows users to do any custom preparation or
cleanup tasks that fit their particular workflow, as well as completely
@@ -565,46 +561,45 @@ environment directory is located within your ``--artifacts_root`` (or if not
provided, within your ``$CI_PROJECT_DIR``), activates that environment for
you, and invokes ``spack ci rebuild``.
-.. _staging_algorithm:
+Sections that specify scripts (``script``, ``before_script``, ``after_script``) are all
+read as lists of commands or lists of lists of commands. It is recommended to write scripts
+as lists of lists if scripts will be composed via merging. The default behavior of merging
+lists will remove duplicate commands and potentially apply unwanted reordering, whereas
+merging lists of lists will preserve the local ordering and never remove duplicate
+commands. When writing commands to the CI target script, all lists are expanded and
+flattened into a single list.
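A sketch of the list-of-lists form, reusing the commands from the working example above; each
inner list stays together as one block when sections are merged:

.. code-block:: yaml

   ci:
     pipeline-gen:
     - any-job:
         before_script:
         - - git clone ${SPACK_REPO}
           - pushd spack && git checkout ${SPACK_CHECKOUT_VERSION} && popd
           - . "./spack/share/spack/setup-env.sh"
         - - spack --version
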
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Summary of ``.gitlab-ci.yml`` generation algorithm
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^^^
+Submapping Sections
+^^^^^^^^^^^^^^^^^^^
-All specs yielded by the matrix (or all the specs in the environment) have their
-dependencies computed, and the entire resulting set of specs are staged together
-before being run through the ``gitlab-ci/mappings`` entries, where each staged
-spec is assigned a runner. "Staging" is the name given to the process of
-figuring out in what order the specs should be built, taking into consideration
-Gitlab CI rules about jobs/stages. In the staging process the goal is to maximize
-the number of jobs in any stage of the pipeline, while ensuring that the jobs in
-any stage only depend on jobs in previous stages (since those jobs are guaranteed
-to have completed already). As a runner is determined for a job, the information
-in the ``runner-attributes`` is used to populate various parts of the job
-description that will be used by Gitlab CI. Once all the jobs have been assigned
-a runner, the ``.gitlab-ci.yml`` is written to disk.
+A special case of attribute specification is the ``submapping`` section which may be used
+to apply job attributes to build jobs based on the package spec associated with the rebuild
+job. Submapping is specified as a list of spec ``match`` lists associated with
+``build-job``/``build-job-remove`` sections. There are two options for ``match_behavior``:
+either ``first`` or ``merge`` may be specified. In either case, the ``submapping`` list is
+processed from the bottom up, and then each ``match`` list is searched for a string that
+satisfies the check ``spec.satisfies({match_item})`` for each concrete spec.
-The short example provided above would result in the ``readline``, ``ncurses``,
-and ``pkgconf`` packages getting staged and built on the runner chosen by the
-``spack-k8s`` tag. In this example, spack assumes the runner is a Docker executor
-type runner, and thus certain jobs will be run in the ``centos7`` container,
-and others in the ``ubuntu-18.04`` container. The resulting ``.gitlab-ci.yml``
-will contain 6 jobs in three stages. Once the jobs have been generated, the
-presence of a ``SPACK_CDASH_AUTH_TOKEN`` environment variable during the
-``spack ci generate`` command would result in all of the jobs being put in a
-build group on CDash called "Release Testing" (that group will be created if
-it didn't already exist).
+In the case of ``match_behavior: first``, the first ``match`` section in the list of
+``submappings`` that contains a string that satisfies the spec will apply its
+``build-job*`` attributes to the rebuild job associated with that spec. This is the
+default behavior and is used if no ``match_behavior`` is specified.
+
+In the case of ``match_behavior: merge``, all of the ``match`` sections in the list of
+``submappings`` that contain a string that satisfies the spec will have the associated
+``build-job*`` attributes applied to the rebuild job associated with that spec. Again,
+the attributes will be merged starting from the bottom match going up to the top match.
+
+In the case that no match is found in a submapping section, no additional attributes will be applied.
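For illustration, a sketch of a submapping entry using ``match_behavior: merge``; the variable
name is a placeholder, and the matches reuse specs from the examples above:

.. code-block:: yaml

   ci:
     pipeline-gen:
     - match_behavior: merge
       submapping:
       - match:
         - os=ubuntu18.04
         build-job:
           tags: [spack-kube]
           image: spack/ubuntu-bionic
       - match:
         - readline
         build-job:
           variables:
             MY_CUSTOM_VARIABLE: "1"
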
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Optional compiler bootstrapping
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^
+Bootstrapping
+^^^^^^^^^^^^^
-Spack pipelines also have support for bootstrapping compilers on systems that
-may not already have the desired compilers installed. The idea here is that
-you can specify a list of things to bootstrap in your ``definitions``, and
-spack will guarantee those will be installed in a phase of the pipeline before
-your release specs, so that you can rely on those packages being available in
-the binary mirror when you need them later on in the pipeline. At the moment
+
+The ``bootstrap`` section allows you to specify lists of specs from
+your ``definitions`` that should be staged ahead of the environment's ``specs``. At the moment
the only viable use-case for bootstrapping is to install compilers.
Here's an example of what bootstrapping some compilers might look like:
@@ -680,6 +675,86 @@ environment/stack file, and in that case no bootstrapping will be done (only the
specs will be staged for building) and the runners will be expected to already
have all needed compilers installed and configured for spack to use.
+^^^^^^^^^^^^^^^^^^^
+Pipeline Buildcache
+^^^^^^^^^^^^^^^^^^^
+
+The ``enable-artifacts-buildcache`` key
+takes a boolean and determines whether the pipeline uses artifacts to store and
+pass along the buildcaches from one stage to the next (the default if you don't
+provide this option is ``False``).
+
+^^^^^^^^^^^^^^^^
+Broken Specs URL
+^^^^^^^^^^^^^^^^
+
+The optional ``broken-specs-url`` key tells Spack to check against a list of
+specs that are known to be currently broken in ``develop``. If any such specs
+are found, the ``spack ci generate`` command will fail with an error message
+informing the user what broken specs were encountered. This allows the pipeline
+to fail early and avoid wasting compute resources attempting to build packages
+that will not succeed.
+
+^^^^^
+CDash
+^^^^^
+
+The optional ``cdash`` section provides information that will be used by the
+``spack ci generate`` command (invoked by ``spack ci start``) for reporting
+to CDash. All the jobs generated from this environment will belong to a
+"build group" within CDash that can be tracked over time. As the release
+progresses, this build group may have jobs added or removed. The url, project,
+and site are used to specify the CDash instance to which build results should
+be reported.
+
+Take a look at the
+`schema <https://github.com/spack/spack/blob/develop/lib/spack/spack/schema/ci.py>`_
+for the ``ci`` section of the spack configuration, to see precisely what
+syntax is allowed there.
+
+.. _reserved_tags:
+
+^^^^^^^^^^^^^
+Reserved Tags
+^^^^^^^^^^^^^
+
+Spack has a subset of tags (``public``, ``protected``, and ``notary``) that it reserves
+for classifying runners that may require special permissions or access. The tags
+``public`` and ``protected`` are used to distinguish between runners that use public
+permissions and runners with protected permissions. The ``notary`` tag is a special tag
+that is used to indicate runners that have access to the highly protected information
+used for signing binaries using the ``signing`` job.
+
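A short sketch of how this plays out: a reserved tag listed on a job in the configuration is
stripped from the generated job (the other tag below is a placeholder):

.. code-block:: yaml

   ci:
     pipeline-gen:
     - build-job:
         tags: [protected, x86_64-runner]   # ``protected`` is reserved and will be removed
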
+.. _staging_algorithm:
+
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Summary of ``.gitlab-ci.yml`` generation algorithm
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+All specs yielded by the matrix (or all the specs in the environment) have their
+dependencies computed, and the entire resulting set of specs are staged together
+before being run through the ``ci/pipeline-gen`` entries, where each staged
+spec is assigned a runner. "Staging" is the name given to the process of
+figuring out in what order the specs should be built, taking into consideration
+Gitlab CI rules about jobs/stages. In the staging process the goal is to maximize
+the number of jobs in any stage of the pipeline, while ensuring that the jobs in
+any stage only depend on jobs in previous stages (since those jobs are guaranteed
+to have completed already). As a runner is determined for a job, the information
+in the merged ``any-job*`` and ``build-job*`` sections is used to populate various parts of the job
+description that will be used by the target CI pipelines. Once all the jobs have been assigned
+a runner, the ``.gitlab-ci.yml`` is written to disk.
+
+The short example provided above would result in the ``readline``, ``ncurses``,
+and ``pkgconf`` packages getting staged and built on the runner chosen by the
+``spack-kube`` tag. In this example, spack assumes the runner is a Docker executor
+type runner, and thus certain jobs will be run in the ``centos7`` container,
+and others in the ``ubuntu-18.04`` container. The resulting ``.gitlab-ci.yml``
+will contain 6 jobs in three stages. Once the jobs have been generated, the
+presence of a ``SPACK_CDASH_AUTH_TOKEN`` environment variable during the
+``spack ci generate`` command would result in all of the jobs being put in a
+build group on CDash called "Release Testing" (that group will be created if
+it didn't already exist).
+
-------------------------------------
Using a custom spack in your pipeline
-------------------------------------
@@ -726,23 +801,21 @@ generated by ``spack ci generate``. You also want your generated rebuild jobs
spack:
...
- gitlab-ci:
- mappings:
- - match:
- - os=ubuntu18.04
- runner-attributes:
- tags:
- - spack-kube
- image: spack/ubuntu-bionic
- before_script:
- - git clone ${SPACK_REPO}
- - pushd spack && git checkout ${SPACK_REF} && popd
- - . "./spack/share/spack/setup-env.sh"
- script:
- - spack env activate --without-view ${SPACK_CONCRETE_ENV_DIR}
- - spack -d ci rebuild
- after_script:
- - rm -rf ./spack
+ ci:
+ pipeline-gen:
+ - build-job:
+ tags:
+ - spack-kube
+ image: spack/ubuntu-bionic
+ before_script:
+ - git clone ${SPACK_REPO}
+ - pushd spack && git checkout ${SPACK_REF} && popd
+ - . "./spack/share/spack/setup-env.sh"
+ script:
+ - spack env activate --without-view ${SPACK_CONCRETE_ENV_DIR}
+ - spack -d ci rebuild
+ after_script:
+ - rm -rf ./spack
Now all of the generated rebuild jobs will use the same shell script to clone
spack before running their actual workload.
@@ -831,3 +904,4 @@ verify binary packages (when installing or creating buildcaches). You could
also have already trusted a key spack knows about, or if no key is present anywhere,
spack will install specs using ``--no-check-signature`` and create buildcaches
using ``-u`` (for unsigned binaries).
+
diff --git a/lib/spack/spack/ci.py b/lib/spack/spack/ci.py
index 8baea51d75..694435f66f 100644
--- a/lib/spack/spack/ci.py
+++ b/lib/spack/spack/ci.py
@@ -364,59 +364,6 @@ def _spec_matches(spec, match_string):
return spec.intersects(match_string)
-def _remove_attributes(src_dict, dest_dict):
- if "tags" in src_dict and "tags" in dest_dict:
- # For 'tags', we remove any tags that are listed for removal
- for tag in src_dict["tags"]:
- while tag in dest_dict["tags"]:
- dest_dict["tags"].remove(tag)
-
-
-def _copy_attributes(attrs_list, src_dict, dest_dict):
- for runner_attr in attrs_list:
- if runner_attr in src_dict:
- if runner_attr in dest_dict and runner_attr == "tags":
- # For 'tags', we combine the lists of tags, while
- # avoiding duplicates
- for tag in src_dict[runner_attr]:
- if tag not in dest_dict[runner_attr]:
- dest_dict[runner_attr].append(tag)
- elif runner_attr in dest_dict and runner_attr == "variables":
- # For 'variables', we merge the dictionaries. Any conflicts
- # (i.e. 'runner-attributes' has same variable key as the
- # higher level) we resolve by keeping the more specific
- # 'runner-attributes' version.
- for src_key, src_val in src_dict[runner_attr].items():
- dest_dict[runner_attr][src_key] = copy.deepcopy(src_dict[runner_attr][src_key])
- else:
- dest_dict[runner_attr] = copy.deepcopy(src_dict[runner_attr])
-
-
-def _find_matching_config(spec, gitlab_ci):
- runner_attributes = {}
- overridable_attrs = ["image", "tags", "variables", "before_script", "script", "after_script"]
-
- _copy_attributes(overridable_attrs, gitlab_ci, runner_attributes)
-
- matched = False
- only_first = gitlab_ci.get("match_behavior", "first") == "first"
- for ci_mapping in gitlab_ci["mappings"]:
- for match_string in ci_mapping["match"]:
- if _spec_matches(spec, match_string):
- matched = True
- if "remove-attributes" in ci_mapping:
- _remove_attributes(ci_mapping["remove-attributes"], runner_attributes)
- if "runner-attributes" in ci_mapping:
- _copy_attributes(
- overridable_attrs, ci_mapping["runner-attributes"], runner_attributes
- )
- break
- if matched and only_first:
- break
-
- return runner_attributes if matched else None
-
-
def _format_job_needs(
phase_name,
strip_compilers,
@@ -536,6 +483,224 @@ def get_spec_filter_list(env, affected_pkgs, dependent_traverse_depth=None):
return affected_specs
+def _build_jobs(phases, staged_phases):
+ for phase in phases:
+ phase_name = phase["name"]
+ spec_labels, dependencies, stages = staged_phases[phase_name]
+
+ for stage_jobs in stages:
+ for spec_label in stage_jobs:
+ spec_record = spec_labels[spec_label]
+ release_spec = spec_record["spec"]
+ release_spec_dag_hash = release_spec.dag_hash()
+ yield release_spec, release_spec_dag_hash
+
+
+def _noop(x):
+ return x
+
+
+def _unpack_script(script_section, op=_noop):
+ script = []
+ for cmd in script_section:
+ if isinstance(cmd, list):
+ for subcmd in cmd:
+ script.append(op(subcmd))
+ else:
+ script.append(op(cmd))
+
+ return script
+
+
+class SpackCI:
+ """Spack CI object used to generate intermediate representation
+ used by the CI generator(s).
+ """
+
+ def __init__(self, ci_config, phases, staged_phases):
+        """Given the information from the ci section of the config
+        and the job phases, set up the metadata needed for generating the
+        Spack CI IR.
+        """
+
+ self.ci_config = ci_config
+ self.named_jobs = ["any", "build", "cleanup", "noop", "reindex", "signing"]
+
+ self.ir = {
+ "jobs": {},
+ "temporary-storage-url-prefix": self.ci_config.get(
+ "temporary-storage-url-prefix", None
+ ),
+ "enable-artifacts-buildcache": self.ci_config.get(
+ "enable-artifacts-buildcache", False
+ ),
+ "bootstrap": self.ci_config.get(
+ "bootstrap", []
+ ), # This is deprecated and should be removed
+ "rebuild-index": self.ci_config.get("rebuild-index", True),
+ "broken-specs-url": self.ci_config.get("broken-specs-url", None),
+ "broken-tests-packages": self.ci_config.get("broken-tests-packages", []),
+ "target": self.ci_config.get("target", "gitlab"),
+ }
+ jobs = self.ir["jobs"]
+
+ for spec, dag_hash in _build_jobs(phases, staged_phases):
+ jobs[dag_hash] = self.__init_job(spec)
+
+ for name in self.named_jobs:
+            # "any" and "build" are special; only initialize the other named jobs
+ if name not in ["any", "build"]:
+ jobs[name] = self.__init_job("")
+
+ def __init_job(self, spec):
+ """Initialize job object"""
+ return {"spec": spec, "attributes": {}}
+
+ def __is_named(self, section):
+ """Check if a pipeline-gen configuration section is for a named job,
+        and if so return the name, otherwise return None.
+ """
+ for _name in self.named_jobs:
+ keys = ["{0}-job".format(_name), "{0}-job-remove".format(_name)]
+ if any([key for key in keys if key in section]):
+ return _name
+
+ return None
+
+ @staticmethod
+ def __job_name(name, suffix=""):
+ """Compute the name of a named job with appropriate suffix.
+ Valid suffixes are either '-remove' or empty string or None
+ """
+ assert type(name) == str
+
+ jname = name
+ if suffix:
+ jname = "{0}-job{1}".format(name, suffix)
+ else:
+ jname = "{0}-job".format(name)
+
+ return jname
+
+ def __apply_submapping(self, dest, spec, section):
+        """Apply submapping section to the IR dict"""
+ matched = False
+ only_first = section.get("match_behavior", "first") == "first"
+
+ for match_attrs in reversed(section["submapping"]):
+ attrs = cfg.InternalConfigScope._process_dict_keyname_overrides(match_attrs)
+ for match_string in match_attrs["match"]:
+ if _spec_matches(spec, match_string):
+ matched = True
+ if "build-job-remove" in match_attrs:
+ spack.config.remove_yaml(dest, attrs["build-job-remove"])
+ if "build-job" in match_attrs:
+ spack.config.merge_yaml(dest, attrs["build-job"])
+ break
+ if matched and only_first:
+ break
+
+ return dest
+
+ # Generate IR from the configs
+ def generate_ir(self):
+ """Generate the IR from the Spack CI configurations."""
+
+ jobs = self.ir["jobs"]
+
+ # Implicit job defaults
+ defaults = [
+ {
+ "build-job": {
+ "script": [
+ "cd {env_dir}",
+ "spack env activate --without-view .",
+ "spack ci rebuild",
+ ]
+ }
+ },
+ {"noop-job": {"script": ['echo "All specs already up to date, nothing to rebuild."']}},
+ ]
+
+ # Job overrides
+ overrides = [
+ # Reindex script
+ {
+ "reindex-job": {
+ "script:": [
+ "spack buildcache update-index --keys --mirror-url {index_target_mirror}"
+ ]
+ }
+ },
+ # Cleanup script
+ {
+ "cleanup-job": {
+ "script:": [
+ "spack -d mirror destroy --mirror-url {mirror_prefix}/$CI_PIPELINE_ID"
+ ]
+ }
+ },
+ # Add signing job tags
+ {"signing-job": {"tags": ["aws", "protected", "notary"]}},
+ # Remove reserved tags
+ {"any-job-remove": {"tags": SPACK_RESERVED_TAGS}},
+ ]
+
+ pipeline_gen = overrides + self.ci_config.get("pipeline-gen", []) + defaults
+
+ for section in reversed(pipeline_gen):
+ name = self.__is_named(section)
+ has_submapping = "submapping" in section
+ section = cfg.InternalConfigScope._process_dict_keyname_overrides(section)
+
+ if name:
+ remove_job_name = self.__job_name(name, suffix="-remove")
+ merge_job_name = self.__job_name(name)
+ do_remove = remove_job_name in section
+ do_merge = merge_job_name in section
+
+ def _apply_section(dest, src):
+ if do_remove:
+ dest = spack.config.remove_yaml(dest, src[remove_job_name])
+ if do_merge:
+ dest = copy.copy(spack.config.merge_yaml(dest, src[merge_job_name]))
+
+ if name == "build":
+ # Apply attributes to all build jobs
+ for _, job in jobs.items():
+ if job["spec"]:
+ _apply_section(job["attributes"], section)
+ elif name == "any":
+                # Apply section attributes to all jobs
+ for _, job in jobs.items():
+ _apply_section(job["attributes"], section)
+ else:
+                # Create a signing job if there is a script and the job hasn't
+ # been initialized yet
+ if name == "signing" and name not in jobs:
+ if "signing-job" in section:
+ if "script" not in section["signing-job"]:
+ continue
+ else:
+ jobs[name] = self.__init_job("")
+ # Apply attributes to named job
+ _apply_section(jobs[name]["attributes"], section)
+
+ elif has_submapping:
+ # Apply section jobs with specs to match
+ for _, job in jobs.items():
+ if job["spec"]:
+ job["attributes"] = self.__apply_submapping(
+ job["attributes"], job["spec"], section
+ )
+
+ for _, job in jobs.items():
+ if job["spec"]:
+ job["spec"] = job["spec"].name
+
+ return self.ir
+
+
def generate_gitlab_ci_yaml(
env,
print_summary,
@@ -585,12 +750,18 @@ def generate_gitlab_ci_yaml(
yaml_root = ev.config_dict(env.yaml)
- if "gitlab-ci" not in yaml_root:
- tty.die('Environment yaml does not have "gitlab-ci" section')
+ # Get the joined "ci" config with all of the current scopes resolved
+ ci_config = cfg.get("ci")
- gitlab_ci = yaml_root["gitlab-ci"]
+ if not ci_config:
+ tty.die('Environment yaml does not have "ci" section')
- cdash_handler = CDashHandler(yaml_root.get("cdash")) if "cdash" in yaml_root else None
+ # Default target is gitlab...and only target is gitlab
+ if "target" in ci_config and ci_config["target"] != "gitlab":
+ tty.die('Spack CI module only generates target "gitlab"')
+
+ cdash_config = cfg.get("cdash")
+ cdash_handler = CDashHandler(cdash_config) if "build-group" in cdash_config else None
build_group = cdash_handler.build_group if cdash_handler else None
dependent_depth = os.environ.get("SPACK_PRUNE_UNTOUCHED_DEPENDENT_DEPTH", None)
@@ -664,25 +835,25 @@ def generate_gitlab_ci_yaml(
# trying to build.
broken_specs_url = ""
known_broken_specs_encountered = []
- if "broken-specs-url" in gitlab_ci:
- broken_specs_url = gitlab_ci["broken-specs-url"]
+ if "broken-specs-url" in ci_config:
+ broken_specs_url = ci_config["broken-specs-url"]
enable_artifacts_buildcache = False
- if "enable-artifacts-buildcache" in gitlab_ci:
- enable_artifacts_buildcache = gitlab_ci["enable-artifacts-buildcache"]
+ if "enable-artifacts-buildcache" in ci_config:
+ enable_artifacts_buildcache = ci_config["enable-artifacts-buildcache"]
rebuild_index_enabled = True
- if "rebuild-index" in gitlab_ci and gitlab_ci["rebuild-index"] is False:
+ if "rebuild-index" in ci_config and ci_config["rebuild-index"] is False:
rebuild_index_enabled = False
temp_storage_url_prefix = None
- if "temporary-storage-url-prefix" in gitlab_ci:
- temp_storage_url_prefix = gitlab_ci["temporary-storage-url-prefix"]
+ if "temporary-storage-url-prefix" in ci_config:
+ temp_storage_url_prefix = ci_config["temporary-storage-url-prefix"]
bootstrap_specs = []
phases = []
- if "bootstrap" in gitlab_ci:
- for phase in gitlab_ci["bootstrap"]:
+ if "bootstrap" in ci_config:
+ for phase in ci_config["bootstrap"]:
try:
phase_name = phase.get("name")
strip_compilers = phase.get("compiler-agnostic")
@@ -747,6 +918,27 @@ def generate_gitlab_ci_yaml(
shutil.copyfile(env.manifest_path, os.path.join(concrete_env_dir, "spack.yaml"))
shutil.copyfile(env.lock_path, os.path.join(concrete_env_dir, "spack.lock"))
+ with open(env.manifest_path, "r") as env_fd:
+ env_yaml_root = syaml.load(env_fd)
+ # Add config scopes to environment
+ env_includes = env_yaml_root["spack"].get("include", [])
+ cli_scopes = [
+ os.path.abspath(s.path)
+ for s in cfg.scopes().values()
+ if type(s) == cfg.ImmutableConfigScope
+ and s.path not in env_includes
+ and os.path.exists(s.path)
+ ]
+ include_scopes = []
+ for scope in cli_scopes:
+ if scope not in include_scopes and scope not in env_includes:
+ include_scopes.insert(0, scope)
+ env_includes.extend(include_scopes)
+ env_yaml_root["spack"]["include"] = env_includes
+
+ with open(os.path.join(concrete_env_dir, "spack.yaml"), "w") as fd:
+ fd.write(syaml.dump_config(env_yaml_root, default_flow_style=False))
+
job_log_dir = os.path.join(pipeline_artifacts_dir, "logs")
job_repro_dir = os.path.join(pipeline_artifacts_dir, "reproduction")
job_test_dir = os.path.join(pipeline_artifacts_dir, "tests")
@@ -758,7 +950,7 @@ def generate_gitlab_ci_yaml(
# generation job and the rebuild jobs. This can happen when gitlab
# checks out the project into a runner-specific directory, for example,
# and different runners are picked for generate and rebuild jobs.
- ci_project_dir = os.environ.get("CI_PROJECT_DIR")
+ ci_project_dir = os.environ.get("CI_PROJECT_DIR", os.getcwd())
rel_artifacts_root = os.path.relpath(pipeline_artifacts_dir, ci_project_dir)
rel_concrete_env_dir = os.path.relpath(concrete_env_dir, ci_project_dir)
rel_job_log_dir = os.path.relpath(job_log_dir, ci_project_dir)
@@ -772,7 +964,7 @@ def generate_gitlab_ci_yaml(
try:
bindist.binary_index.update()
except bindist.FetchCacheError as e:
- tty.error(e)
+ tty.warn(e)
staged_phases = {}
try:
@@ -829,6 +1021,9 @@ def generate_gitlab_ci_yaml(
else:
broken_spec_urls = web_util.list_url(broken_specs_url)
+ spack_ci = SpackCI(ci_config, phases, staged_phases)
+ spack_ci_ir = spack_ci.generate_ir()
+
before_script, after_script = None, None
for phase in phases:
phase_name = phase["name"]
@@ -856,7 +1051,7 @@ def generate_gitlab_ci_yaml(
spec_record["needs_rebuild"] = False
continue
- runner_attribs = _find_matching_config(release_spec, gitlab_ci)
+ runner_attribs = spack_ci_ir["jobs"][release_spec_dag_hash]["attributes"]
if not runner_attribs:
tty.warn("No match found for {0}, skipping it".format(release_spec))
@@ -887,23 +1082,21 @@ def generate_gitlab_ci_yaml(
except AttributeError:
image_name = build_image
- job_script = ["spack env activate --without-view ."]
+ if "script" not in runner_attribs:
+ raise AttributeError
- if artifacts_root:
- job_script.insert(0, "cd {0}".format(concrete_env_dir))
+ def main_script_replacements(cmd):
+ return cmd.replace("{env_dir}", concrete_env_dir)
- job_script.extend(["spack ci rebuild"])
-
- if "script" in runner_attribs:
- job_script = [s for s in runner_attribs["script"]]
+ job_script = _unpack_script(runner_attribs["script"], op=main_script_replacements)
before_script = None
if "before_script" in runner_attribs:
- before_script = [s for s in runner_attribs["before_script"]]
+ before_script = _unpack_script(runner_attribs["before_script"])
after_script = None
if "after_script" in runner_attribs:
- after_script = [s for s in runner_attribs["after_script"]]
+ after_script = _unpack_script(runner_attribs["after_script"])
osname = str(release_spec.architecture)
job_name = get_job_name(
@@ -1147,19 +1340,6 @@ def generate_gitlab_ci_yaml(
else:
tty.warn("Unable to populate buildgroup without CDash credentials")
- service_job_config = None
- if "service-job-attributes" in gitlab_ci:
- service_job_config = gitlab_ci["service-job-attributes"]
-
- default_attrs = [
- "image",
- "tags",
- "variables",
- "before_script",
- # 'script',
- "after_script",
- ]
-
service_job_retries = {
"max": 2,
"when": ["runner_system_failure", "stuck_or_timeout_failure", "script_failure"],
@@ -1171,55 +1351,29 @@ def generate_gitlab_ci_yaml(
# schedule a job to clean up the temporary storage location
# associated with this pipeline.
stage_names.append("cleanup-temp-storage")
- cleanup_job = {}
-
- if service_job_config:
- _copy_attributes(default_attrs, service_job_config, cleanup_job)
-
- if "tags" in cleanup_job:
- service_tags = _remove_reserved_tags(cleanup_job["tags"])
- cleanup_job["tags"] = service_tags
+ cleanup_job = copy.deepcopy(spack_ci_ir["jobs"]["cleanup"]["attributes"])
cleanup_job["stage"] = "cleanup-temp-storage"
- cleanup_job["script"] = [
- "spack -d mirror destroy --mirror-url {0}/$CI_PIPELINE_ID".format(
- temp_storage_url_prefix
- )
- ]
cleanup_job["when"] = "always"
cleanup_job["retry"] = service_job_retries
cleanup_job["interruptible"] = True
+ cleanup_job["script"] = _unpack_script(
+ cleanup_job["script"],
+        op=lambda cmd: cmd.replace("{mirror_prefix}", temp_storage_url_prefix),
+ )
+
output_object["cleanup"] = cleanup_job
if (
- "signing-job-attributes" in gitlab_ci
+ "script" in spack_ci_ir["jobs"]["signing"]["attributes"]
and spack_pipeline_type == "spack_protected_branch"
):
# External signing: generate a job to check and sign binary pkgs
stage_names.append("stage-sign-pkgs")
- signing_job_config = gitlab_ci["signing-job-attributes"]
- signing_job = {}
-
- signing_job_attrs_to_copy = [
- "image",
- "tags",
- "variables",
- "before_script",
- "script",
- "after_script",
- ]
-
- _copy_attributes(signing_job_attrs_to_copy, signing_job_config, signing_job)
+ signing_job = spack_ci_ir["jobs"]["signing"]["attributes"]
- signing_job_tags = []
- if "tags" in signing_job:
- signing_job_tags = _remove_reserved_tags(signing_job["tags"])
-
- for tag in ["aws", "protected", "notary"]:
- if tag not in signing_job_tags:
- signing_job_tags.append(tag)
- signing_job["tags"] = signing_job_tags
+ signing_job["script"] = _unpack_script(signing_job["script"])
signing_job["stage"] = "stage-sign-pkgs"
signing_job["when"] = "always"
@@ -1231,23 +1385,17 @@ def generate_gitlab_ci_yaml(
if rebuild_index_enabled:
# Add a final job to regenerate the index
stage_names.append("stage-rebuild-index")
- final_job = {}
-
- if service_job_config:
- _copy_attributes(default_attrs, service_job_config, final_job)
-
- if "tags" in final_job:
- service_tags = _remove_reserved_tags(final_job["tags"])
- final_job["tags"] = service_tags
+ final_job = spack_ci_ir["jobs"]["reindex"]["attributes"]
index_target_mirror = mirror_urls[0]
if remote_mirror_override:
index_target_mirror = remote_mirror_override
-
final_job["stage"] = "stage-rebuild-index"
- final_job["script"] = [
- "spack buildcache update-index --keys --mirror-url {0}".format(index_target_mirror)
- ]
+ final_job["script"] = _unpack_script(
+ final_job["script"],
+ op=lambda cmd: cmd.replace("{index_target_mirror}", index_target_mirror),
+ )
+
final_job["when"] = "always"
final_job["retry"] = service_job_retries
final_job["interruptible"] = True
@@ -1328,13 +1476,7 @@ def generate_gitlab_ci_yaml(
else:
# No jobs were generated
tty.debug("No specs to rebuild, generating no-op job")
- noop_job = {}
-
- if service_job_config:
- _copy_attributes(default_attrs, service_job_config, noop_job)
-
- if "script" not in noop_job:
- noop_job["script"] = ['echo "All specs already up to date, nothing to rebuild."']
+ noop_job = spack_ci_ir["jobs"]["noop"]["attributes"]
noop_job["retry"] = service_job_retries
@@ -1348,7 +1490,7 @@ def generate_gitlab_ci_yaml(
sys.exit(1)
with open(output_file, "w") as outf:
- outf.write(syaml.dump_config(sorted_output, default_flow_style=True))
+ outf.write(syaml.dump(sorted_output, default_flow_style=True))
def _url_encode_string(input_string):
@@ -1528,7 +1670,10 @@ def copy_files_to_artifacts(src, artifacts_dir):
try:
fs.copy(src, artifacts_dir)
except Exception as err:
- tty.warn(f"Unable to copy files ({src}) to artifacts {artifacts_dir} due to: {err}")
+ msg = ("Unable to copy files ({0}) to artifacts {1} due to " "exception: {2}").format(
+ src, artifacts_dir, str(err)
+ )
+ tty.warn(msg)
def copy_stage_logs_to_artifacts(job_spec, job_log_dir):
@@ -1748,6 +1893,7 @@ def reproduce_ci_job(url, work_dir):
function is a set of printed instructions for running docker and then
commands to run to reproduce the build once inside the container.
"""
+ work_dir = os.path.realpath(work_dir)
download_and_extract_artifacts(url, work_dir)
lock_file = fs.find(work_dir, "spack.lock")[0]
@@ -1912,7 +2058,9 @@ def reproduce_ci_job(url, work_dir):
if job_image:
inst_list.append("\nRun the following command:\n\n")
inst_list.append(
- " $ docker run --rm -v {0}:{1} -ti {2}\n".format(work_dir, mount_as_dir, job_image)
+ " $ docker run --rm --name spack_reproducer -v {0}:{1}:Z -ti {2}\n".format(
+ work_dir, mount_as_dir, job_image
+ )
)
inst_list.append("\nOnce inside the container:\n\n")
else:
@@ -1963,13 +2111,16 @@ def process_command(name, commands, repro_dir):
# Create a string [command 1] && [command 2] && ... && [command n] with commands
# quoted using double quotes.
args_to_string = lambda args: " ".join('"{}"'.format(arg) for arg in args)
- full_command = " && ".join(map(args_to_string, commands))
+ full_command = " \n ".join(map(args_to_string, commands))
# Write the command to a shell script
script = "{0}.sh".format(name)
with open(script, "w") as fd:
fd.write("#!/bin/sh\n\n")
fd.write("\n# spack {0} command\n".format(name))
+ fd.write("set -e\n")
+ if os.environ.get("SPACK_VERBOSE_SCRIPT"):
+ fd.write("set -x\n")
fd.write(full_command)
fd.write("\n")
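
For illustration only (the inputs here are assumed, not taken from the patch): with ``set -e`` in place, the joined command string now puts one quoted command per line instead of chaining everything with ``&&``.

    commands = [["spack", "-e", "/work/env", "install"], ["spack", "buildcache", "create"]]
    args_to_string = lambda args: " ".join('"{}"'.format(arg) for arg in args)
    print(" \n ".join(map(args_to_string, commands)))
    # "spack" "-e" "/work/env" "install"
    #  "spack" "buildcache" "create"
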
diff --git a/lib/spack/spack/cmd/ci.py b/lib/spack/spack/cmd/ci.py
index 8e14d7b7ed..62edbaa6c4 100644
--- a/lib/spack/spack/cmd/ci.py
+++ b/lib/spack/spack/cmd/ci.py
@@ -255,10 +255,9 @@ def ci_rebuild(args):
# Make sure the environment is "gitlab-enabled", or else there's nothing
# to do.
- yaml_root = ev.config_dict(env.yaml)
- gitlab_ci = yaml_root["gitlab-ci"] if "gitlab-ci" in yaml_root else None
- if not gitlab_ci:
- tty.die("spack ci rebuild requires an env containing gitlab-ci cfg")
+ ci_config = cfg.get("ci")
+ if not ci_config:
+ tty.die("spack ci rebuild requires an env containing ci cfg")
tty.msg(
"SPACK_BUILDCACHE_DESTINATION={0}".format(
@@ -306,8 +305,10 @@ def ci_rebuild(args):
# Query the environment manifest to find out whether we're reporting to a
# CDash instance, and if so, gather some information from the manifest to
# support that task.
- cdash_handler = spack_ci.CDashHandler(yaml_root.get("cdash")) if "cdash" in yaml_root else None
- if cdash_handler:
+ cdash_config = cfg.get("cdash")
+ cdash_handler = None
+ if "build-group" in cdash_config:
+ cdash_handler = spack_ci.CDashHandler(cdash_config)
tty.debug("cdash url = {0}".format(cdash_handler.url))
tty.debug("cdash project = {0}".format(cdash_handler.project))
tty.debug("cdash project_enc = {0}".format(cdash_handler.project_enc))
@@ -340,13 +341,13 @@ def ci_rebuild(args):
pipeline_mirror_url = None
temp_storage_url_prefix = None
- if "temporary-storage-url-prefix" in gitlab_ci:
- temp_storage_url_prefix = gitlab_ci["temporary-storage-url-prefix"]
+ if "temporary-storage-url-prefix" in ci_config:
+ temp_storage_url_prefix = ci_config["temporary-storage-url-prefix"]
pipeline_mirror_url = url_util.join(temp_storage_url_prefix, ci_pipeline_id)
enable_artifacts_mirror = False
- if "enable-artifacts-buildcache" in gitlab_ci:
- enable_artifacts_mirror = gitlab_ci["enable-artifacts-buildcache"]
+ if "enable-artifacts-buildcache" in ci_config:
+ enable_artifacts_mirror = ci_config["enable-artifacts-buildcache"]
if enable_artifacts_mirror or (
spack_is_pr_pipeline and not enable_artifacts_mirror and not temp_storage_url_prefix
):
@@ -551,7 +552,7 @@ def ci_rebuild(args):
commands = [
# apparently there's a race when spack bootstraps? do it up front once
- [SPACK_COMMAND, "-e", env.path, "bootstrap", "now"],
+ [SPACK_COMMAND, "-e", env.path, "bootstrap", "now", "--dev"],
[
SPACK_COMMAND,
"-e",
@@ -593,8 +594,8 @@ def ci_rebuild(args):
# avoid wasting compute cycles attempting to build those hashes.
if install_exit_code == INSTALL_FAIL_CODE and spack_is_develop_pipeline:
tty.debug("Install failed on develop")
- if "broken-specs-url" in gitlab_ci:
- broken_specs_url = gitlab_ci["broken-specs-url"]
+ if "broken-specs-url" in ci_config:
+ broken_specs_url = ci_config["broken-specs-url"]
dev_fail_hash = job_spec.dag_hash()
broken_spec_path = url_util.join(broken_specs_url, dev_fail_hash)
tty.msg("Reporting broken develop build as: {0}".format(broken_spec_path))
@@ -615,17 +616,14 @@ def ci_rebuild(args):
# the package, run them and copy the output. Failures of any kind should
# *not* terminate the build process or preclude creating the build cache.
broken_tests = (
- "broken-tests-packages" in gitlab_ci
- and job_spec.name in gitlab_ci["broken-tests-packages"]
+ "broken-tests-packages" in ci_config
+ and job_spec.name in ci_config["broken-tests-packages"]
)
reports_dir = fs.join_path(os.getcwd(), "cdash_report")
if args.tests and broken_tests:
- tty.warn(
- "Unable to run stand-alone tests since listed in "
- "gitlab-ci's 'broken-tests-packages'"
- )
+ tty.warn("Unable to run stand-alone tests since listed in " "ci's 'broken-tests-packages'")
if cdash_handler:
- msg = "Package is listed in gitlab-ci's broken-tests-packages"
+ msg = "Package is listed in ci's broken-tests-packages"
cdash_handler.report_skipped(job_spec, reports_dir, reason=msg)
cdash_handler.copy_test_results(reports_dir, job_test_dir)
elif args.tests:
@@ -688,8 +686,8 @@ def ci_rebuild(args):
# If this is a develop pipeline, check if the spec that we just built is
# on the broken-specs list. If so, remove it.
- if spack_is_develop_pipeline and "broken-specs-url" in gitlab_ci:
- broken_specs_url = gitlab_ci["broken-specs-url"]
+ if spack_is_develop_pipeline and "broken-specs-url" in ci_config:
+ broken_specs_url = ci_config["broken-specs-url"]
just_built_hash = job_spec.dag_hash()
broken_spec_path = url_util.join(broken_specs_url, just_built_hash)
if web_util.url_exists(broken_spec_path):
diff --git a/lib/spack/spack/config.py b/lib/spack/spack/config.py
index 1cb060a8b5..d31b1dd533 100644
--- a/lib/spack/spack/config.py
+++ b/lib/spack/spack/config.py
@@ -77,6 +77,8 @@ section_schemas = {
"config": spack.schema.config.schema,
"upstreams": spack.schema.upstreams.schema,
"bootstrap": spack.schema.bootstrap.schema,
+ "ci": spack.schema.ci.schema,
+ "cdash": spack.schema.cdash.schema,
}
# Same as above, but including keys for environments
@@ -360,6 +362,12 @@ class InternalConfigScope(ConfigScope):
if sk.endswith(":"):
key = syaml.syaml_str(sk[:-1])
key.override = True
+ elif sk.endswith("+"):
+ key = syaml.syaml_str(sk[:-1])
+ key.prepend = True
+ elif sk.endswith("-"):
+ key = syaml.syaml_str(sk[:-1])
+ key.append = True
else:
key = sk
@@ -1040,6 +1048,33 @@ def _override(string):
return hasattr(string, "override") and string.override
+def _append(string):
+ """Test if a spack YAML string is an override.
+
+ See ``spack_yaml`` for details. Keys in Spack YAML can end in `+:`,
+ and if they do, their values append lower-precedence
+ configs.
+
+ str, str : concatenate strings.
+ [obj], [obj] : append lists.
+
+ """
+ return getattr(string, "append", False)
+
+
+def _prepend(string):
+ """Test if a spack YAML string is an override.
+
+ See ``spack_yaml`` for details. Keys in Spack YAML can end in `+:`,
+ and if they do, their values prepend lower-precedence
+ configs.
+
+ str, str : concatenate strings.
+ [obj], [obj] : prepend lists. (default behavior)
+ """
+ return getattr(string, "prepend", False)
+
+
def _mark_internal(data, name):
"""Add a simple name mark to raw YAML/JSON data.
@@ -1102,7 +1137,57 @@ def get_valid_type(path):
raise ConfigError("Cannot determine valid type for path '%s'." % path)
-def merge_yaml(dest, source):
+def remove_yaml(dest, source):
+ """UnMerges source from dest; entries in source take precedence over dest.
+
+ This routine may modify dest and should be assigned to dest, in
+ case dest was None to begin with, e.g.:
+
+ dest = remove_yaml(dest, source)
+
+ In the result, elements from lists from ``source`` will not appear
+ as elements of lists from ``dest``. Likewise, when iterating over keys
+ or items in merged ``OrderedDict`` objects, keys from ``source`` will not
+ appear as keys in ``dest``.
+
+ Config file authors can optionally end any attribute in a dict
+ with `::` instead of `:`, and the key will remove the entire section
+    from ``dest``.
+ """
+
+ def they_are(t):
+ return isinstance(dest, t) and isinstance(source, t)
+
+    # If source is None, there is nothing to remove from dest.
+ if source is None:
+ return dest
+
+    # Remove any elements of source from the dest list
+    if they_are(list):
+        # Modify dest in place to preserve ruamel comments
+        dest[:] = [x for x in dest if x not in source]
+ return dest
+
+    # Source dict keys are recursively removed from dest.
+    elif they_are(dict):
+        for sk, sv in source.items():
+            # Pop the key from dest; if it was present and is not an
+            # override (`::`) key, reinsert whatever remains after
+            # unmerging its value.
+ unmerge = sk in dest
+ old_dest_value = dest.pop(sk, None)
+
+ if unmerge and not spack.config._override(sk):
+ dest[sk] = remove_yaml(old_dest_value, sv)
+
+ return dest
+
+    # If we reach here source and dest are either different types or are
+    # not both lists or dicts: leave dest unchanged.
+ return dest
+
+
+def merge_yaml(dest, source, prepend=False, append=False):
"""Merges source into dest; entries in source take precedence over dest.
This routine may modify dest and should be assigned to dest, in
@@ -1118,6 +1203,9 @@ def merge_yaml(dest, source):
Config file authors can optionally end any attribute in a dict
with `::` instead of `:`, and the key will override that of the
parent instead of merging.
+
+ `+:` will extend the default prepend merge strategy to include string concatenation
+    `-:` will change the merge strategy to append; it also enables string concatenation
"""
def they_are(t):
@@ -1129,8 +1217,12 @@ def merge_yaml(dest, source):
# Source list is prepended (for precedence)
if they_are(list):
- # Make sure to copy ruamel comments
- dest[:] = source + [x for x in dest if x not in source]
+ if append:
+ # Make sure to copy ruamel comments
+ dest[:] = [x for x in dest if x not in source] + source
+ else:
+ # Make sure to copy ruamel comments
+ dest[:] = source + [x for x in dest if x not in source]
return dest
# Source dict is merged into dest.
@@ -1147,7 +1239,7 @@ def merge_yaml(dest, source):
old_dest_value = dest.pop(sk, None)
if merge and not _override(sk):
- dest[sk] = merge_yaml(old_dest_value, sv)
+ dest[sk] = merge_yaml(old_dest_value, sv, _prepend(sk), _append(sk))
else:
# if sk ended with ::, or if it's new, completely override
dest[sk] = copy.deepcopy(sv)
@@ -1158,6 +1250,13 @@ def merge_yaml(dest, source):
return dest
+ elif they_are(str):
+ # Concatenate strings in prepend mode
+ if prepend:
+ return source + dest
+ elif append:
+ return dest + source
+
# If we reach here source and dest are either different types or are
# not both lists or dicts: replace with source.
return copy.copy(source)
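
A hedged example of the new merge behavior when calling ``merge_yaml`` directly (in practice ``prepend``/``append`` are derived from the ``+:``/``-:`` key suffixes; the values below are made up):

    import spack.config as cfg

    # Lists: the default merge prepends the higher-precedence source ...
    cfg.merge_yaml(["site"], ["user"])               # -> ["user", "site"]
    # ... while append mode keeps lower-precedence entries first.
    cfg.merge_yaml(["site"], ["user"], append=True)  # -> ["site", "user"]

    # Strings are only concatenated when one of the modes is requested.
    cfg.merge_yaml("-O2", "-g ", prepend=True)       # -> "-g -O2"
    cfg.merge_yaml("-O2", " -g", append=True)        # -> "-O2 -g"
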
@@ -1183,6 +1282,17 @@ def process_config_path(path):
front = syaml.syaml_str(front)
front.override = True
seen_override_in_path = True
+
+ elif front.endswith("+"):
+ front = front.rstrip("+")
+ front = syaml.syaml_str(front)
+ front.prepend = True
+
+ elif front.endswith("-"):
+ front = front.rstrip("-")
+ front = syaml.syaml_str(front)
+ front.append = True
+
result.append(front)
return result
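
A hedged sketch of how the new key suffixes are parsed (the example path is made up; only the rough shape of the result is shown):

    from spack.config import process_config_path

    parts = process_config_path("config:extra_rpaths+")
    # roughly ["config", "extra_rpaths"], with parts[-1].prepend set to True

    parts = process_config_path("config:extra_rpaths-")
    # parts[-1].append is True, selecting the append merge strategy above
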
diff --git a/lib/spack/spack/environment/environment.py b/lib/spack/spack/environment/environment.py
index f91ea2f051..689a357317 100644
--- a/lib/spack/spack/environment/environment.py
+++ b/lib/spack/spack/environment/environment.py
@@ -2193,6 +2193,7 @@ class Environment(object):
view = dict((name, view.to_dict()) for name, view in self.views.items())
else:
view = False
+
yaml_dict["view"] = view
if self.dev_specs:
diff --git a/lib/spack/spack/schema/cdash.py b/lib/spack/spack/schema/cdash.py
index 5634031b74..f0178babc0 100644
--- a/lib/spack/spack/schema/cdash.py
+++ b/lib/spack/spack/schema/cdash.py
@@ -15,7 +15,8 @@ properties = {
"cdash": {
"type": "object",
"additionalProperties": False,
- "required": ["build-group", "url", "project", "site"],
+ # "required": ["build-group", "url", "project", "site"],
+ "required": ["build-group"],
"patternProperties": {
r"build-group": {"type": "string"},
r"url": {"type": "string"},
diff --git a/lib/spack/spack/schema/ci.py b/lib/spack/spack/schema/ci.py
new file mode 100644
index 0000000000..3fcb7fc164
--- /dev/null
+++ b/lib/spack/spack/schema/ci.py
@@ -0,0 +1,181 @@
+# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
+# Spack Project Developers. See the top-level COPYRIGHT file for details.
+#
+# SPDX-License-Identifier: (Apache-2.0 OR MIT)
+
+"""Schema for gitlab-ci.yaml configuration file.
+
+.. literalinclude:: ../spack/schema/ci.py
+ :lines: 13-
+"""
+
+from llnl.util.lang import union_dicts
+
+# Schema for script fields
+# List of lists and/or strings
+# This is similar to what is allowed in
+# the gitlab schema
+script_schema = {
+ "type": "array",
+ "items": {"anyOf": [{"type": "string"}, {"type": "array", "items": {"type": "string"}}]},
+}
+
+# Additional attributes are allowed
+# and will be forwarded directly to the
+# CI target YAML for each job.
+attributes_schema = {
+ "type": "object",
+ "properties": {
+ "image": {
+ "oneOf": [
+ {"type": "string"},
+ {
+ "type": "object",
+ "properties": {
+ "name": {"type": "string"},
+ "entrypoint": {"type": "array", "items": {"type": "string"}},
+ },
+ },
+ ]
+ },
+ "tags": {"type": "array", "items": {"type": "string"}},
+ "variables": {
+ "type": "object",
+ "patternProperties": {r"[\w\d\-_\.]+": {"type": "string"}},
+ },
+ "before_script": script_schema,
+ "script": script_schema,
+ "after_script": script_schema,
+ },
+}
+
+submapping_schema = {
+ "type": "object",
+ "additinoalProperties": False,
+ "required": ["submapping"],
+ "properties": {
+ "match_behavior": {"type": "string", "enum": ["first", "merge"], "default": "first"},
+ "submapping": {
+ "type": "array",
+ "items": {
+ "type": "object",
+ "additionalProperties": False,
+ "required": ["match"],
+ "properties": {
+ "match": {"type": "array", "items": {"type": "string"}},
+ "build-job": attributes_schema,
+ "build-job-remove": attributes_schema,
+ },
+ },
+ },
+ },
+}
+
+named_attributes_schema = {
+ "oneOf": [
+ {
+ "type": "object",
+ "additionalProperties": False,
+ "properties": {"noop-job": attributes_schema, "noop-job-remove": attributes_schema},
+ },
+ {
+ "type": "object",
+ "additionalProperties": False,
+ "properties": {"build-job": attributes_schema, "build-job-remove": attributes_schema},
+ },
+ {
+ "type": "object",
+ "additionalProperties": False,
+ "properties": {
+ "reindex-job": attributes_schema,
+ "reindex-job-remove": attributes_schema,
+ },
+ },
+ {
+ "type": "object",
+ "additionalProperties": False,
+ "properties": {
+ "signing-job": attributes_schema,
+ "signing-job-remove": attributes_schema,
+ },
+ },
+ {
+ "type": "object",
+ "additionalProperties": False,
+ "properties": {
+ "cleanup-job": attributes_schema,
+ "cleanup-job-remove": attributes_schema,
+ },
+ },
+ {
+ "type": "object",
+ "additionalProperties": False,
+ "properties": {"any-job": attributes_schema, "any-job-remove": attributes_schema},
+ },
+ ]
+}
+
+pipeline_gen_schema = {
+ "type": "array",
+ "items": {"oneOf": [submapping_schema, named_attributes_schema]},
+}
+
+core_shared_properties = union_dicts(
+ {
+ "pipeline-gen": pipeline_gen_schema,
+ "bootstrap": {
+ "type": "array",
+ "items": {
+ "anyOf": [
+ {"type": "string"},
+ {
+ "type": "object",
+ "additionalProperties": False,
+ "required": ["name"],
+ "properties": {
+ "name": {"type": "string"},
+ "compiler-agnostic": {"type": "boolean", "default": False},
+ },
+ },
+ ]
+ },
+ },
+ "rebuild-index": {"type": "boolean"},
+ "broken-specs-url": {"type": "string"},
+ "broken-tests-packages": {"type": "array", "items": {"type": "string"}},
+ "target": {"type": "string", "enum": ["gitlab"], "default": "gitlab"},
+ }
+)
+
+ci_properties = {
+ "anyOf": [
+ {
+ "type": "object",
+ "additionalProperties": False,
+ # "required": ["mappings"],
+ "properties": union_dicts(
+ core_shared_properties, {"enable-artifacts-buildcache": {"type": "boolean"}}
+ ),
+ },
+ {
+ "type": "object",
+ "additionalProperties": False,
+ # "required": ["mappings"],
+ "properties": union_dicts(
+ core_shared_properties, {"temporary-storage-url-prefix": {"type": "string"}}
+ ),
+ },
+ ]
+}
+
+#: Properties for inclusion in other schemas
+properties = {"ci": ci_properties}
+
+#: Full schema with metadata
+schema = {
+ "$schema": "http://json-schema.org/draft-07/schema#",
+ "title": "Spack CI configuration file schema",
+ "type": "object",
+ "additionalProperties": False,
+ "properties": properties,
+}
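
A hedged, minimal ``ci`` section that this schema is intended to accept (package, tag, and image names are made up), validated the same way the unit tests below do:

    import jsonschema

    import spack.util.spack_yaml as syaml
    from spack.schema.ci import schema as ci_schema

    ci_yaml = """\
    ci:
      pipeline-gen:
      - submapping:
        - match:
          - archive-files
          build-job:
            tags: [runner-tag]
      - any-job:
          image: some/image:latest
    """
    jsonschema.validate(syaml.load(ci_yaml), ci_schema)
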
diff --git a/lib/spack/spack/schema/merged.py b/lib/spack/spack/schema/merged.py
index a6317ea0fd..b20700a03c 100644
--- a/lib/spack/spack/schema/merged.py
+++ b/lib/spack/spack/schema/merged.py
@@ -12,11 +12,11 @@ from llnl.util.lang import union_dicts
import spack.schema.bootstrap
import spack.schema.cdash
+import spack.schema.ci
import spack.schema.compilers
import spack.schema.concretizer
import spack.schema.config
import spack.schema.container
-import spack.schema.gitlab_ci
import spack.schema.mirrors
import spack.schema.modules
import spack.schema.packages
@@ -31,7 +31,7 @@ properties = union_dicts(
spack.schema.concretizer.properties,
spack.schema.config.properties,
spack.schema.container.properties,
- spack.schema.gitlab_ci.properties,
+ spack.schema.ci.properties,
spack.schema.mirrors.properties,
spack.schema.modules.properties,
spack.schema.packages.properties,
diff --git a/lib/spack/spack/test/cmd/ci.py b/lib/spack/spack/test/cmd/ci.py
index 4c39dc2869..b368b7d72d 100644
--- a/lib/spack/spack/test/cmd/ci.py
+++ b/lib/spack/spack/test/cmd/ci.py
@@ -28,8 +28,8 @@ import spack.util.gpg
import spack.util.spack_yaml as syaml
import spack.util.url as url_util
from spack.schema.buildcache_spec import schema as specfile_schema
+from spack.schema.ci import schema as ci_schema
from spack.schema.database_index import schema as db_idx_schema
-from spack.schema.gitlab_ci import schema as gitlab_ci_schema
from spack.spec import CompilerSpec, Spec
from spack.util.pattern import Bunch
@@ -177,26 +177,29 @@ spack:
- [$old-gcc-pkgs]
mirrors:
some-mirror: {0}
- gitlab-ci:
+ ci:
bootstrap:
- name: bootstrap
compiler-agnostic: true
- mappings:
+ pipeline-gen:
+ - submapping:
- match:
- arch=test-debian6-core2
- runner-attributes:
+ build-job:
tags:
- donotcare
image: donotcare
- match:
- arch=test-debian6-m1
- runner-attributes:
+ build-job:
tags:
- donotcare
image: donotcare
- service-job-attributes:
- image: donotcare
- tags: [donotcare]
+ - cleanup-job:
+ image: donotcare
+ tags: [donotcare]
+ - reindex-job:
+ script:: [hello, world]
cdash:
build-group: Not important
url: https://my.fake.cdash
@@ -239,6 +242,10 @@ spack:
def _validate_needs_graph(yaml_contents, needs_graph, artifacts):
+ """Validate the needs graph in the generate CI"""
+
+ # TODO: Fix the logic to catch errors where expected packages/needs are not
+ # found.
for job_name, job_def in yaml_contents.items():
for needs_def_name, needs_list in needs_graph.items():
if job_name.startswith(needs_def_name):
@@ -269,27 +276,30 @@ def test_ci_generate_bootstrap_gcc(
spack:
definitions:
- bootstrap:
- - gcc@9.5
- - gcc@9.0
+ - gcc@3.0
specs:
- - dyninst%gcc@9.5
+ - dyninst%gcc@3.0
mirrors:
some-mirror: https://my.fake.mirror
- gitlab-ci:
+ ci:
bootstrap:
- name: bootstrap
compiler-agnostic: true
- mappings:
+ pipeline-gen:
+ - submapping:
- match:
- arch=test-debian6-x86_64
- runner-attributes:
+ build-job:
tags:
- donotcare
- match:
- arch=test-debian6-aarch64
- runner-attributes:
+ build-job:
tags:
- donotcare
+ - any-job:
+ tags:
+ - donotcare
"""
)
@@ -326,26 +336,30 @@ def test_ci_generate_bootstrap_artifacts_buildcache(
spack:
definitions:
- bootstrap:
- - gcc@9.5
+ - gcc@3.0
specs:
- - dyninst%gcc@9.5
+ - dyninst%gcc@3.0
mirrors:
some-mirror: https://my.fake.mirror
- gitlab-ci:
+ ci:
bootstrap:
- name: bootstrap
compiler-agnostic: true
- mappings:
+ pipeline-gen:
+ - submapping:
- match:
- arch=test-debian6-x86_64
- runner-attributes:
+ build-job:
tags:
- donotcare
- match:
- arch=test-debian6-aarch64
- runner-attributes:
+ build-job:
tags:
- donotcare
+ - any-job:
+ tags:
+ - donotcare
enable-artifacts-buildcache: True
"""
)
@@ -398,7 +412,7 @@ spack:
"""
)
- expect_out = 'Error: Environment yaml does not have "gitlab-ci" section'
+ expect_out = 'Error: Environment yaml does not have "ci" section'
with tmpdir.as_cwd():
env_cmd("create", "test", "./spack.yaml")
@@ -427,12 +441,13 @@ spack:
- archive-files
mirrors:
some-mirror: https://my.fake.mirror
- gitlab-ci:
+ ci:
enable-artifacts-buildcache: True
- mappings:
+ pipeline-gen:
+ - submapping:
- match:
- archive-files
- runner-attributes:
+ build-job:
tags:
- donotcare
image: donotcare
@@ -485,11 +500,12 @@ spack:
- archive-files
mirrors:
some-mirror: https://my.fake.mirror
- gitlab-ci:
- mappings:
+ ci:
+ pipeline-gen:
+ - submapping:
- match:
- archive-files
- runner-attributes:
+ build-job:
tags:
- donotcare
variables:
@@ -576,17 +592,18 @@ spack:
- flatten-deps
mirrors:
some-mirror: https://my.fake.mirror
- gitlab-ci:
+ ci:
enable-artifacts-buildcache: True
- mappings:
+ pipeline-gen:
+ - submapping:
- match:
- flatten-deps
- runner-attributes:
+ build-job:
tags:
- donotcare
- match:
- dependency-install
- runner-attributes:
+ build-job:
tags:
- donotcare
"""
@@ -642,22 +659,23 @@ spack:
- flatten-deps
mirrors:
some-mirror: https://my.fake.mirror
- gitlab-ci:
+ ci:
enable-artifacts-buildcache: True
- mappings:
+ pipeline-gen:
+ - submapping:
- match:
- flatten-deps
- runner-attributes:
+ build-job:
tags:
- donotcare
- match:
- dependency-install
- runner-attributes:
+ build-job:
tags:
- donotcare
- service-job-attributes:
- image: donotcare
- tags: [donotcare]
+ - cleanup-job:
+ image: donotcare
+ tags: [donotcare]
rebuild-index: False
"""
)
@@ -703,12 +721,13 @@ spack:
- externaltest
mirrors:
some-mirror: https://my.fake.mirror
- gitlab-ci:
- mappings:
+ ci:
+ pipeline-gen:
+ - submapping:
- match:
- archive-files
- externaltest
- runner-attributes:
+ build-job:
tags:
- donotcare
image: donotcare
@@ -744,7 +763,7 @@ spack:
env_cmd("create", "test", "./spack.yaml")
env_cmd("activate", "--without-view", "--sh", "test")
out = ci_cmd("rebuild", fail_on_error=False)
- assert "env containing gitlab-ci" in out
+ assert "env containing ci" in out
env_cmd("deactivate")
@@ -785,17 +804,18 @@ spack:
- $packages
mirrors:
test-mirror: {1}
- gitlab-ci:
+ ci:
broken-specs-url: {2}
broken-tests-packages: {3}
temporary-storage-url-prefix: {4}
- mappings:
- - match:
- - {0}
- runner-attributes:
- tags:
- - donotcare
- image: donotcare
+ pipeline-gen:
+ - submapping:
+ - match:
+ - {0}
+ build-job:
+ tags:
+ - donotcare
+ image: donotcare
cdash:
build-group: Not important
url: https://my.fake.cdash
@@ -875,10 +895,9 @@ def activate_rebuild_env(tmpdir, pkg_name, rebuild_env):
@pytest.mark.parametrize("broken_tests", [True, False])
def test_ci_rebuild_mock_success(
tmpdir,
- config,
working_env,
mutable_mock_env_path,
- install_mockery,
+ install_mockery_mutable_config,
mock_gnupghome,
mock_stage,
mock_fetch,
@@ -914,7 +933,7 @@ def test_ci_rebuild(
tmpdir,
working_env,
mutable_mock_env_path,
- install_mockery,
+ install_mockery_mutable_config,
mock_packages,
monkeypatch,
mock_gnupghome,
@@ -1014,12 +1033,13 @@ spack:
- $packages
mirrors:
test-mirror: {0}
- gitlab-ci:
+ ci:
enable-artifacts-buildcache: True
- mappings:
+ pipeline-gen:
+ - submapping:
- match:
- archive-files
- runner-attributes:
+ build-job:
tags:
- donotcare
image: donotcare
@@ -1101,18 +1121,19 @@ spack:
- $packages
mirrors:
test-mirror: {0}
- gitlab-ci:
- mappings:
+ ci:
+ pipeline-gen:
+ - submapping:
- match:
- patchelf
- runner-attributes:
+ build-job:
tags:
- donotcare
image: donotcare
- service-job-attributes:
- tags:
- - nonbuildtag
- image: basicimage
+ - cleanup-job:
+ tags:
+ - nonbuildtag
+ image: basicimage
""".format(
mirror_url
)
@@ -1183,19 +1204,24 @@ spack:
- $packages
mirrors:
test-mirror: {0}
- gitlab-ci:
+ ci:
enable-artifacts-buildcache: True
- mappings:
+ pipeline-gen:
+ - submapping:
- match:
- patchelf
- runner-attributes:
+ build-job:
tags:
- donotcare
image: donotcare
- service-job-attributes:
- tags:
- - nonbuildtag
- image: basicimage
+ - cleanup-job:
+ tags:
+ - nonbuildtag
+ image: basicimage
+ - any-job:
+ tags:
+ - nonbuildtag
+ image: basicimage
""".format(
mirror_url
)
@@ -1345,56 +1371,58 @@ spack:
- a
mirrors:
some-mirror: https://my.fake.mirror
- gitlab-ci:
- tags:
- - toplevel
- - toplevel2
- variables:
- ONE: toplevelvarone
- TWO: toplevelvartwo
- before_script:
- - pre step one
- - pre step two
- script:
- - main step
- after_script:
- - post step one
- match_behavior: {0}
- mappings:
- - match:
- - flatten-deps
- runner-attributes:
- tags:
- - specific-one
- variables:
- THREE: specificvarthree
- - match:
- - dependency-install
- - match:
- - a
- remove-attributes:
- tags:
- - toplevel2
- runner-attributes:
- tags:
- - specific-a
- variables:
- ONE: specificvarone
- TWO: specificvartwo
- before_script:
- - custom pre step one
- script:
- - custom main step
- after_script:
- - custom post step one
- - match:
- - a
- runner-attributes:
- tags:
- - specific-a-2
- service-job-attributes:
- image: donotcare
- tags: [donotcare]
+ ci:
+ pipeline-gen:
+ - match_behavior: {0}
+ submapping:
+ - match:
+ - flatten-deps
+ build-job:
+ tags:
+ - specific-one
+ variables:
+ THREE: specificvarthree
+ - match:
+ - dependency-install
+ - match:
+ - a
+ build-job:
+ tags:
+ - specific-a-2
+ - match:
+ - a
+ build-job-remove:
+ tags:
+ - toplevel2
+ build-job:
+ tags:
+ - specific-a
+ variables:
+ ONE: specificvarone
+ TWO: specificvartwo
+ before_script::
+ - - custom pre step one
+ script::
+ - - custom main step
+ after_script::
+ - custom post step one
+ - build-job:
+ tags:
+ - toplevel
+ - toplevel2
+ variables:
+ ONE: toplevelvarone
+ TWO: toplevelvartwo
+ before_script:
+ - - pre step one
+ - pre step two
+ script::
+ - - main step
+ after_script:
+ - - post step one
+ - cleanup-job:
+ image: donotcare
+ tags: [donotcare]
""".format(
match_behavior
)
@@ -1420,8 +1448,6 @@ spack:
assert global_vars["SPACK_CHECKOUT_VERSION"] == "12ad69eb1"
for ci_key in yaml_contents.keys():
- if "(specs) b" in ci_key:
- assert False
if "(specs) a" in ci_key:
# Make sure a's attributes override variables, and all the
# scripts. Also, make sure the 'toplevel' tag doesn't
@@ -1495,10 +1521,11 @@ spack:
- callpath%gcc@9.5
mirrors:
some-mirror: https://my.fake.mirror
- gitlab-ci:
- mappings:
+ ci:
+ pipeline-gen:
+ - submapping:
- match: ['%gcc@9.5']
- runner-attributes:
+ build-job:
tags:
- donotcare
image: donotcare
@@ -1550,11 +1577,12 @@ spack:
- callpath
mirrors:
test-mirror: {0}
- gitlab-ci:
- mappings:
+ ci:
+ pipeline-gen:
+ - submapping:
- match:
- patchelf
- runner-attributes:
+ build-job:
tags:
- donotcare
image: donotcare
@@ -1642,29 +1670,30 @@ spack:
- b%gcc@12.2.0
mirrors:
atestm: {0}
- gitlab-ci:
+ ci:
bootstrap:
- name: bootstrap
compiler-agnostic: true
- mappings:
+ pipeline-gen:
+ - submapping:
- match:
- arch=test-debian6-x86_64
- runner-attributes:
+ build-job:
tags:
- donotcare
- match:
- arch=test-debian6-core2
- runner-attributes:
+ build-job:
tags:
- meh
- match:
- arch=test-debian6-aarch64
- runner-attributes:
+ build-job:
tags:
- donotcare
- match:
- arch=test-debian6-m1
- runner-attributes:
+ build-job:
tags:
- meh
""".format(
@@ -1743,14 +1772,12 @@ spack:
- callpath
mirrors:
some-mirror: {0}
- gitlab-ci:
- mappings:
- - match:
- - arch=test-debian6-core2
- runner-attributes:
- tags:
- - donotcare
- image: donotcare
+ ci:
+ pipeline-gen:
+ - build-job:
+ tags:
+ - donotcare
+ image: donotcare
""".format(
mirror_url
)
@@ -1879,11 +1906,12 @@ def test_ci_subcommands_without_mirror(
spack:
specs:
- archive-files
- gitlab-ci:
- mappings:
+ ci:
+ pipeline-gen:
+ - submapping:
- match:
- archive-files
- runner-attributes:
+ build-job:
tags:
- donotcare
image: donotcare
@@ -1912,12 +1940,13 @@ def test_ensure_only_one_temporary_storage():
"""Make sure 'gitlab-ci' section of env does not allow specification of
both 'enable-artifacts-buildcache' and 'temporary-storage-url-prefix'."""
gitlab_ci_template = """
- gitlab-ci:
+ ci:
{0}
- mappings:
+ pipeline-gen:
+ - submapping:
- match:
- notcheckedhere
- runner-attributes:
+ build-job:
tags:
- donotcare
"""
@@ -1933,21 +1962,21 @@ def test_ensure_only_one_temporary_storage():
# User can specify "enable-artifacts-buildcache" (boolean)
yaml_obj = syaml.load(gitlab_ci_template.format(enable_artifacts))
- jsonschema.validate(yaml_obj, gitlab_ci_schema)
+ jsonschema.validate(yaml_obj, ci_schema)
# User can also specify "temporary-storage-url-prefix" (string)
yaml_obj = syaml.load(gitlab_ci_template.format(temp_storage))
- jsonschema.validate(yaml_obj, gitlab_ci_schema)
+ jsonschema.validate(yaml_obj, ci_schema)
# However, specifying both should fail to validate
yaml_obj = syaml.load(gitlab_ci_template.format(specify_both))
with pytest.raises(jsonschema.ValidationError):
- jsonschema.validate(yaml_obj, gitlab_ci_schema)
+ jsonschema.validate(yaml_obj, ci_schema)
# Specifying neither should be fine too, as neither of these properties
# should be required
yaml_obj = syaml.load(gitlab_ci_template.format(specify_neither))
- jsonschema.validate(yaml_obj, gitlab_ci_schema)
+ jsonschema.validate(yaml_obj, ci_schema)
def test_ci_generate_temp_storage_url(
@@ -1969,12 +1998,13 @@ spack:
- archive-files
mirrors:
some-mirror: https://my.fake.mirror
- gitlab-ci:
+ ci:
temporary-storage-url-prefix: file:///work/temp/mirror
- mappings:
+ pipeline-gen:
+ - submapping:
- match:
- archive-files
- runner-attributes:
+ build-job:
tags:
- donotcare
image: donotcare
@@ -2040,15 +2070,16 @@ spack:
- a
mirrors:
some-mirror: https://my.fake.mirror
- gitlab-ci:
+ ci:
broken-specs-url: "{0}"
- mappings:
+ pipeline-gen:
+ - submapping:
- match:
- a
- flatten-deps
- b
- dependency-install
- runner-attributes:
+ build-job:
tags:
- donotcare
image: donotcare
@@ -2089,26 +2120,27 @@ spack:
- archive-files
mirrors:
some-mirror: https://my.fake.mirror
- gitlab-ci:
+ ci:
temporary-storage-url-prefix: file:///work/temp/mirror
- mappings:
+ pipeline-gen:
+ - submapping:
- match:
- archive-files
- runner-attributes:
+ build-job:
tags:
- donotcare
image: donotcare
- signing-job-attributes:
- tags:
- - nonbuildtag
- - secretrunner
- image:
- name: customdockerimage
- entrypoint: []
- variables:
- IMPORTANT_INFO: avalue
- script:
- - echo hello
+ - signing-job:
+ tags:
+ - nonbuildtag
+ - secretrunner
+ image:
+ name: customdockerimage
+ entrypoint: []
+ variables:
+ IMPORTANT_INFO: avalue
+ script::
+ - echo hello
"""
)
@@ -2151,11 +2183,12 @@ spack:
- $packages
mirrors:
test-mirror: file:///some/fake/mirror
- gitlab-ci:
- mappings:
+ ci:
+ pipeline-gen:
+ - submapping:
- match:
- archive-files
- runner-attributes:
+ build-job:
tags:
- donotcare
image: {0}
@@ -2232,7 +2265,9 @@ spack:
working_dir.strpath,
output=str,
)
- expect_out = "docker run --rm -v {0}:{0} -ti {1}".format(working_dir.strpath, image_name)
+ expect_out = "docker run --rm --name spack_reproducer -v {0}:{0}:Z -ti {1}".format(
+ os.path.realpath(working_dir.strpath), image_name
+ )
assert expect_out in rep_out