author     Michael Sternberg <sternberg@anl.gov>  2018-08-29 23:09:34 -0500
committer  scheibelp <scheibel1@llnl.gov>  2018-08-29 21:09:34 -0700
commit     a86f22d755887c66b03b62c8be2fa708b79fad0a (patch)
tree       15f6aa056527ca8abaed0c758d882a9645668f4c /lib
parent     e860307c31567df9e270673a89a4e4974e4bc11f (diff)
Intel prefixes (#7469)
Consolidate prefix calculation logic for intel packages into the IntelPackage class. Add documentation on installing Intel packages with Spack and (alternatively) adding them as external packages in Spack.
Diffstat (limited to 'lib')
-rw-r--r--  lib/spack/docs/basic_usage.rst                     3
-rw-r--r--  lib/spack/docs/build_systems/intelpackage.rst   1054
-rw-r--r--  lib/spack/docs/getting_started.rst                 3
-rw-r--r--  lib/spack/docs/module_file_support.rst             3
-rw-r--r--  lib/spack/docs/packaging_guide.rst                 2
-rw-r--r--  lib/spack/docs/tutorial_configuration.rst          2
-rw-r--r--  lib/spack/spack/build_systems/README-intel.rst   660
-rw-r--r--  lib/spack/spack/build_systems/intel.py          1256
8 files changed, 2885 insertions, 98 deletions
diff --git a/lib/spack/docs/basic_usage.rst b/lib/spack/docs/basic_usage.rst
index ae180c0659..622860b75f 100644
--- a/lib/spack/docs/basic_usage.rst
+++ b/lib/spack/docs/basic_usage.rst
@@ -596,6 +596,9 @@ name or compiler specifier to their left in the spec.
If the compiler spec is omitted, Spack will choose a default compiler
based on site policies.
+
+.. _basic-variants:
+
^^^^^^^^
Variants
^^^^^^^^
diff --git a/lib/spack/docs/build_systems/intelpackage.rst b/lib/spack/docs/build_systems/intelpackage.rst
index a21a0beb32..c03bbc30e0 100644
--- a/lib/spack/docs/build_systems/intelpackage.rst
+++ b/lib/spack/docs/build_systems/intelpackage.rst
@@ -4,10 +4,1052 @@
IntelPackage
------------
-Intel provides many licensed software packages, which all share the
-same basic steps for configuring and installing, as well as license
-management.
+.. contents::
-This build system is a work-in-progress. See
-https://github.com/spack/spack/pull/4300 and
-https://github.com/spack/spack/pull/7469 for more information.
+^^^^^^^^^^^^^^^^^^^^^^^^
+Intel packages in Spack
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+Spack can install and use several software development products offered by Intel.
+Some of these are available under no-cost terms, others require a paid license.
+All share the same basic steps for configuration, installation, and, where
+applicable, license management. The Spack Python class ``IntelPackage`` implements
+these steps.
+
+Spack interacts with Intel tools through several routes, as it does for any
+other package:
+
+.. _`route 1`:
+
+1. Accept system-provided tools after you declare them to Spack as *external packages*.
+
+.. _`route 2`:
+
+2. Install the products for you as *internal packages* in Spack.
+
+.. _`route 3`:
+
+3. *Use* the packages, regardless of installation route, to install what we'll
+ call *client packages* for you, this being Spack's primary purpose.
+
+An auxiliary route follows from route 2, as it would for most Spack
+packages, namely:
+
+.. _`route 4`:
+
+4. Make Spack-installed Intel tools available outside of Spack for ad-hoc use,
+ typically through Spack-managed modulefiles.
+
+This document covers routes 1 through 3.
+
+
+""""""""""""""""""""""""""""""""""
+Packages under no-cost license
+""""""""""""""""""""""""""""""""""
+
+Intel's standalone performance library products, notably MKL and MPI, are
+available for use under a `simplified license
+<https://software.intel.com/en-us/license/intel-simplified-software-license>`_
+since 2017 [fn1]_. They are packaged in Spack as:
+
+* ``intel-mkl`` -- Math Kernel Library (linear algebra and FFT),
+* ``intel-mpi`` -- The Intel-MPI implementation (derived from MPICH),
+* ``intel-ipp`` -- Primitives for image-, signal-, and data-processing,
+* ``intel-daal`` -- Machine learning and data analytics.
+
+Some earlier versions of these libraries were released under a paid license.
+For these older versions, the license must be available both when the products
+are installed and when client packages are compiled.
+
+The library packages work well with the Intel compilers but do not require them
+-- those packages can just as well be used with other compilers. The Intel
+compiler invocation commands offer custom options to simplify linking Intel
+libraries (sometimes considerably), but Spack always uses fairly explicit
+linkage anyway.
+
+
+""""""""""""""""""
+Licensed packages
+""""""""""""""""""
+
+Intel's core software development products that provide compilers, analyzers,
+and optimizers do require a paid license. In Spack, they are packaged as:
+
+* ``intel-parallel-studio`` -- the entire suite of compilers and libraries,
+* ``intel`` -- a subset containing just the compilers and the Intel-MPI runtime [fn2]_.
+
+..
+ TODO: Confirm and possibly change(!) the scope of MPI components (runtime
+ vs. devel) in current (and previous?) *cluster/professional/composer*
+ editions, i.e., presence in downloads, possibly subject to license
+ coverage(!); see `discussion in PR #4300
+ <https://github.com/spack/spack/pull/4300#issuecomment-305582898>`_. [NB:
+ An "mpi" subdirectory is not indicative of the full MPI SDK being present
+ (i.e., ``mpicc``, ..., and header files). The directory may just as well
+ contain only the MPI runtime (``mpirun`` and shared libraries).]
+ See also issue #8632.
+
+The license is needed at installation time and to compile client packages, but
+never to merely run any resulting binaries. The license status for a given
+Spack package is normally specified in the *package code* through directives like
+``license_required`` (see :ref:`Licensed software <license>`).
+For the Intel packages, however, the *class code* provides these directives (in
+exchange for forfeiting a measure of OOP purity) and takes care of idiosyncrasies
+like historical version dependence.
+
+The libraries that are provided in the standalone packages are also included in the
+all-encompassing ``intel-parallel-studio``. To complicate matters a bit, that
+package is sold in 3 "editions", of which only the upper-tier ``cluster``
+edition supports *compiling* MPI applications, and hence only that edition can
+provide the ``mpi`` virtual package. (As mentioned [fn2]_, all editions
+provide support for *running* MPI applications.)
+
+The edition forms the leading part of the version number for Spack's
+``intel*`` packages discussed here. This differs from the primarily numeric
+version numbers seen with most other Spack packages. For example, we have:
+
+
+.. code-block:: console
+
+ $ spack info intel-parallel-studio
+ ...
+ Preferred version:
+ professional.2018.3 http:...
+
+ Safe versions:
+ professional.2018.3 http:...
+ ...
+ composer.2018.3 http:...
+ ...
+ cluster.2018.3 http:...
+ ...
+ ...
+
+The full studio suite, capable of compiling MPI applications, currently
+requires about 12 GB of disk space when installed (see section `Install steps
+for packages with compilers and libraries`_ for detailed instructions).
+If you need to save disk space or installation time, you could install the
+``intel`` compilers-only subset (0.6 GB) and just the library packages you
+need, for example ``intel-mpi`` (0.5 GB) and ``intel-mkl`` (2.5 GB).
+
+
+""""""""""""""""""""
+Unrelated packages
+""""""""""""""""""""
+
+The following packages do not use the Intel installer and are not part of the
+``IntelPackage`` class discussed here:
+
+* ``intel-gpu-tools`` -- Test suite and low-level tools for the Linux `Direct
+ Rendering Manager <https://en.wikipedia.org/wiki/Direct_Rendering_Manager>`_
+* ``intel-mkl-dnn`` -- Math Kernel Library for Deep Neural Networks (``CMakePackage``)
+* ``intel-xed`` -- X86 machine instructions encoder/decoder
+* ``intel-tbb`` -- Standalone version of Intel Threading Building Blocks. Note that
+ a TBB runtime version is included with ``intel-mkl``, and development
+ versions are provided by the packages ``intel-parallel-studio`` (all
+ editions) and its ``intel`` subset.
+
+""""""""""""""""""""""""""""""""""""""""""
+Configuring Spack to use Intel licenses
+""""""""""""""""""""""""""""""""""""""""""
+
+If you wish to integrate licensed Intel products into Spack as external packages
+(`route 1`_ above), we assume that their license configuration is in place and
+is working [fn3]_. In this case, skip to section `Integration of Intel tools
+installed external to Spack`_.
+
+If you plan to have Spack install licensed products for you (`route 2`_ above),
+the Intel product installer that Spack will run underneath must have access to
+a license that is either provided by a *license server* or as a *license file*.
+The installer may be able to locate a license that is already configured on
+your system. If it cannot, you must configure Spack to provide either the
+server location or the license file.
+
+For authoritative information on Intel licensing, see:
+
+* https://software.intel.com/en-us/faq/licensing
+* https://software.intel.com/en-us/articles/how-do-i-manage-my-licenses
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Pointing to an existing license server
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Installing and configuring a license server is outside the scope of Spack. We
+assume that:
+
+* Your system administrator has a license server running.
+* The license server offers valid licenses for the Intel packages of interest.
+* You can access these licenses under the user id running Spack.
+
+Be aware of the difference between (a) installing and configuring a license
+server, and (b) configuring client software to *use* a server's
+so-called floating licenses. We are concerned here with (b) only. The
+process of obtaining a license from a server for temporary use is called
+"checking out a license". For that, a client application such as the Intel
+package installer or a compiler needs to know the host name and port number of
+one or more license servers that it may query [fn4]_.
+
+Follow one of three methods to `point client software to a floating license server
+<https://software.intel.com/en-us/articles/licensing-setting-up-the-client-floating-license>`_.
+Ideally, your license administrator will already have implemented one that can
+be used unchanged in Spack: Look for the environment variable
+``INTEL_LICENSE_FILE`` or for files
+``/opt/intel/licenses/*.lic`` that contain::
+
+ SERVER hostname hostid_or_ANY portnum
+ USE_SERVER
+
+The relevant tokens, among possibly others, are the ``USE_SERVER`` line,
+intended specifically for clients, and one or more ``SERVER`` lines above it
+which give the network address.
+
+If you cannot find pre-existing ``/opt/intel/licenses/*.lic`` files and the
+``INTEL_LICENSE_FILE`` environment variable is not set (even after you loaded
+any relevant modulefiles), ask your license administrator for the server
+address(es) and place them in a "global" license file within your Spack
+directory tree (`as shown below <Spack-managed file_>`_).
+
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Installing a standalone license file
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If you purchased a user-specific license, follow `Intel's instructions
+<https://software.intel.com/en-us/faq/licensing#license-management>`_
+to "activate" it for your serial number, then download the resulting license file.
+If needed, `request to have the file re-sent
+<https://software.intel.com/en-us/articles/resend-license-file>`_ to you.
+
+Intel's license files are text files that contain tokens in the proprietary
+"FLEXlm" format and whose name ends in ``.lic``.
+Intel installers and compilers look for license files in several locations when they run.
+Place your license by one of the following means, in order of decreasing preference:
+
+* Default directory
+
+ Install your license file in the directory ``/opt/intel/licenses/`` if you
+ have write permission to it. This directory is inspected by all Intel tools
+ and is therefore preferred, as no further configuration will be needed.
+ Create the directory if it does not yet exist. For the file name, either
+ keep the downloaded name or use another suitably plain yet descriptive
+ name that ends in ``.lic``. Adjust file permissions for access by licensed
+ users.
+
+
+* Directory given in environment variable
+
+ If you cannot use the default directory, but your system has already set the
+ environment variable ``INTEL_LICENSE_FILE`` independently of Spack [fn5]_,
+ then, provided you have the necessary write permissions, place your license file
+ in one of the directories mentioned in this environment variable. Adjust file
+ permissions for access by licensed users.
+
+ .. tip::
+
+ If your system has not yet set and used the environment variable
+ ``INTEL_LICENSE_FILE``, you could start using it with the ``spack
+ install`` stage of licensed tools and subsequent client packages. You
+ would, however, be in a bind to always set that variable in the same
+ manner, across updates and re-installations, and perhaps accommodate
+ additions to it. As this may be difficult in the long run, we recommend
+ that you do *not* attempt to start using the variable solely for Spack.
+
+.. _`Spack-managed file`:
+
+* Spack-managed file
+
+ The first time Spack encounters an Intel package that requires a license, it
+ will initialize a Spack-global Intel-specific license file for you, as a
+ template with instructional comments, and bring up an editor [fn6]_. Spack
+ will do this *even if you have a working license elsewhere* on the system.
+
+ * To proceed with an externally configured license, leave the newly templated
+ file as is (containing comments only) and close the editor. You do not need
+ to touch the file again.
+
+ * To configure your own standalone license, copy the contents of your
+ downloaded license file into the opened file, save it, and close the editor.
+
+ * To use a license server (i.e., a floating network license) that is not
+ already configured elsewhere on the system, supply your license server
+ address(es) in the form of ``SERVER`` and ``USE_SERVER`` lines at the
+ *beginning of the file* [fn7]_, in the format shown in section `Pointing to
+ an existing license server`_. Save the file and close the editor.
+
+ To revisit and manually edit this file, such as prior to a subsequent
+ installation attempt, find it at
+ ``$SPACK_ROOT/etc/spack/licenses/intel/intel.lic``.
+
+ Spack will place symbolic links to this file in each directory where licensed
+ Intel binaries were installed. If you kept the template unchanged, Intel tools
+ will simply ignore it.
+
+
+.. _integrate-external-intel:
+
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Integration of Intel tools installed *external* to Spack
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This section discusses `route 1`_ from the introduction.
+
+A site that already uses Intel tools, especially licensed ones, will likely
+have some versions installed on the system, particularly at a time when
+Spack is just being introduced. It is useful to make such previously
+installed tools available to Spack as they are. How to do this varies
+depending on the type of tool:
+
+""""""""""""""""""""""""""""""""""
+Integrating external compilers
+""""""""""""""""""""""""""""""""""
+
+For Spack to use external Intel compilers, you must tell it both *where* to
+find them and *when* to use them. The present section documents the "where"
+aspect, involving ``compilers.yaml`` and, in most cases, long absolute paths.
+The "when" aspect actually relates to `route 3`_ and requires explicitly
+stating the compiler as a spec component (in the form ``foo %intel`` or ``foo
+%intel@compilerversion``) when installing client packages or altering Spack's
+compiler default in ``packages.yaml``.
+See section `Selecting Intel compilers <Selecting Intel compilers_>`_ for details.
+
+To integrate a new set of externally installed Intel compilers into Spack,
+follow section
+:ref:`Compiler configuration <compiler-config>`.
+Briefly, prepare your shell environment like you would if you were to use these
+compilers normally, i.e., typically by a ``module load ...`` or a shell
+``source ...`` command, then use ``spack compiler find`` to make Spack aware of
+these compilers. This will create a new entry in a suitably scoped and possibly new
+``compilers.yaml`` file. You could certainly create such a compiler entry
+manually, but this is error-prone due to the indentation and different data
+types involved.
+
+The Intel compilers need and use the system's native GCC compiler (``gcc`` on
+most systems, ``clang`` on macOS) to provide certain functionality, notably to
+support C++. To provide a different GCC compiler for the Intel tools, or more
+generally set persistent flags for all invocations of the Intel compilers, locate
+the ``compilers.yaml`` entry that defines your Intel compiler, and, using a
+text editor, change one or both of the following:
+
+1. At the ``modules:`` tag, add a ``gcc`` module to the list.
+2. At the ``flags:`` tag, add ``cflags:``, ``cxxflags:``, and ``fflags:`` key-value entries.
+
+Consult the examples under
+:ref:`Compiler configuration <compiler-config>`
+and
+:ref:`Vendor-Specific Compiler Configuration <vendor-specific-compiler-configuration>`
+in the Spack documentation.
+When done, validate your compiler definition by running
+``spack compiler info intel@compilerversion`` (replacing ``compilerversion`` by
+the version that you defined).
+
+Be aware that both the GCC integration and persistent compiler flags can also be
+affected by an advanced third method:
+
+3. A modulefile that provides the Intel compilers for you
+ could, for the benefit of users outside of Spack, implicitly
+ integrate a specific ``gcc`` version via compiler flag environment variables
+ or (hopefully not) via a sneaky extra ``PATH`` addition.
+
+Next, visit section `Selecting Intel compilers`_ to learn how to tell
+Spack to use the newly configured compilers.
+
+""""""""""""""""""""""""""""""""""
+Integrating external libraries
+""""""""""""""""""""""""""""""""""
+
+Configure external library-type packages (as opposed to compilers)
+in the files ``$SPACK_ROOT/etc/spack/packages.yaml`` or
+``~/.spack/packages.yaml``, following the Spack documentation under
+:ref:`External Packages <sec-external-packages>`.
+
+Similar to ``compilers.yaml``, the ``packages.yaml`` files define a package
+external to Spack in terms of a Spack spec and resolve each such spec via
+either the ``paths`` or ``modules`` tokens to a specific pre-installed package
+version on the system. Since Intel tools generally need environment variables
+to interoperate, which cannot be conveyed in a mere ``paths`` specification,
+the ``modules`` token will be more sensible to use. It resolves the Spack-side
+spec to a modulefile generated and managed outside of Spack's purview,
+which Spack will load internally and transiently when the corresponding spec is
+called upon to compile client packages.
+
+Unlike for compilers, where ``spack compiler find`` generates an entry
+in an existing or new ``compilers.yaml`` file, Spack does not offer a command
+to generate an entirely new ``packages.yaml`` entry. You must create
+new entries yourself in a text editor, though the command ``spack config
+[--scope=...] edit packages`` can help with selecting the proper file.
+See section
+:ref:`Configuration Scopes <configuration-scopes>`
+for an explanation about the different files
+and section
+:ref:`Build customization <build-settings>`
+for specifics and examples for ``packages.yaml`` files.
+
+.. If your system administrator did not provide modules for pre-installed Intel
+ tools, you would do well to ask for them, because installing multiple copies
+ of the Intel tools, as is wont to happen once Spack is in the picture, is
+ bound to stretch disk space and patience thin. If you *are* the system
+ administrator and are still new to modules, then perhaps it's best to follow
+ the `next section <Installing Intel tools within Spack_>`_ and install the tools
+ solely within Spack.
+
+The following example integrates packages embodied by hypothetical
+external modulefiles ``intel-mkl/18/...`` into
+Spack as packages ``intel-mkl@...``:
+
+.. code-block:: console
+
+ $ spack config edit packages
+
+Make sure the file begins with:
+
+.. code-block:: yaml
+
+ packages:
+
+Adapt the following example. Be sure to maintain the indentation:
+
+.. code-block:: yaml
+
+ # other content ...
+
+ intel-mkl:
+ modules:
+ intel-mkl@2018.2.199 arch=linux-centos6-x86_64: intel-mkl/18/18.0.2
+ intel-mkl@2018.3.222 arch=linux-centos6-x86_64: intel-mkl/18/18.0.3
+
+The version numbers for the ``intel-mkl`` specs defined here correspond to file
+and directory names that Intel uses for its products because they were adopted
+and declared as such within Spack's package repository. You can inspect the
+versions known to your current Spack installation by:
+
+.. code-block:: console
+
+ $ spack info intel-mkl
+
+Using the same version numbers for external packages as for packages known
+internally is useful for clarity, but not strictly necessary. Moreover, with a
+``packages.yaml`` entry, you can go beyond internally known versions.
+
+.. _compiler-neutral-package:
+
+Note that the Spack spec in the example does not contain a compiler
+specification. This is intentional, as the Intel library packages can be used
+unmodified with different compilers.
+
+A slightly more advanced example illustrates how to provide
+:ref:`variants <basic-variants>`
+and how to use the ``buildable: False`` directive to prevent Spack from installing
+other versions or variants of the named package through its normal internal
+mechanism.
+
+.. code-block:: yaml
+
+ packages:
+ intel-parallel-studio:
+ modules:
+ intel-parallel-studio@cluster.2018.2.199 +mkl+mpi+ipp+tbb+daal arch=linux-centos6-x86_64: intel/18/18.0.2
+ intel-parallel-studio@cluster.2018.3.222 +mkl+mpi+ipp+tbb+daal arch=linux-centos6-x86_64: intel/18/18.0.3
+ buildable: False
+
+One additional example illustrates the use of ``paths:`` instead of
+``modules:``, useful when external modulefiles are not available or not
+suitable:
+
+.. code-block:: yaml
+
+ packages:
+ intel-parallel-studio:
+ paths:
+ intel-parallel-studio@cluster.2018.2.199 +mkl+mpi+ipp+tbb+daal: /opt/intel
+ intel-parallel-studio@cluster.2018.3.222 +mkl+mpi+ipp+tbb+daal: /opt/intel
+ buildable: False
+
+Note that for the Intel packages discussed here, the directory values in the
+``paths:`` entries must be the high-level and typically version-less
+"installation directory" that has been used by Intel's product installer.
+Such a directory will typically accumulate various product versions. Amongst
+them, Spack will select the correct version-specific product directory based on
+the ``@version`` spec component that each path is being defined for.
+
+For further background and details, see
+:ref:`External Packages <sec-external-packages>`.
+
+
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Installing Intel tools *within* Spack
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This section discusses `route 2`_ from the introduction.
+
+When a system does not yet have Intel tools installed, or the installed
+versions are undesirable, Spack can install these tools for you like any regular
+Spack package and, with appropriate pre- and post-install configuration, use their
+compilers and/or libraries to install client packages.
+
+.. _intel-install-studio:
+
+""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
+Install steps for packages with compilers and libraries
+""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
+
+The packages ``intel-parallel-studio`` and ``intel`` (which is a subset of the
+former) are many-in-one products that contain both compilers and a set of
+library packages whose scope depends on the edition.
+Because they are general products geared towards shell environments,
+it can be somewhat involved to integrate these packages to their full extent
+into Spack.
+
+Note: To install library-only packages like ``intel-mkl``, ``intel-mpi``, and ``intel-daal``
+follow `the next section <intel-install-libs_>`_ instead.
+
+1. Review the section `Configuring Spack to use Intel licenses`_.
+
+.. _intel-compiler-anticipation:
+
+2. To install a version of ``intel-parallel-studio`` that provides Intel
+ compilers at a version that you have *not yet declared in Spack*,
+ the following preparatory steps are recommended:
+
+ A. Determine the compiler spec that the new ``intel-parallel-studio`` package
+ will provide, as follows: From the package version, combine the last two
+ digits of the version year, a literal "0" (zero), and the version component
+ that immediately follows the year.
+
+ ========================================== ======================
+ Package version Compiler spec provided
+ ------------------------------------------ ----------------------
+ ``intel-parallel-studio@edition.YYyy.u`` ``intel@yy.0.u``
+ ========================================== ======================
+
+ Example: The package ``intel-parallel-studio@cluster.2018.3`` will provide
+ the compiler with spec ``intel@18.0.3``.
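+
+ For illustration, a minimal Python sketch of this mapping (a hypothetical
+ helper, not part of Spack):
+
+ .. code-block:: python
+
+    def compiler_spec_for(package_version):
+        # 'cluster.2018.3' -> 'intel@18.0.3'
+        edition, year, update = package_version.split('.')
+        return 'intel@{0}.0.{1}'.format(year[-2:], update)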
+
+ .. _`config-compiler-anticipated`:
+
+ B. Add a new compiler section with the newly anticipated version at the
+ end of a ``compilers.yaml`` file in a suitable scope. For example, run:
+
+ .. code-block:: console
+
+ $ spack config --scope=user/linux edit compilers
+
+ and append a stub entry:
+
+ .. code-block:: yaml
+
+ - compiler:
+ target: x86_64
+ operating_system: centos6
+ modules: []
+ spec: intel@18.0.3
+ paths:
+ cc: stub
+ cxx: stub
+ f77: stub
+ fc: stub
+
+ Replace ``18.0.3`` with the version that you determined in the preceding
+ step. The contents under ``paths:`` do not matter yet.
+
+ You are right to ask: "Why on earth is that necessary?" [fn8]_.
+ The answer lies in Spack striving for strict compiler consistency.
+ Consider what happens without such a pre-declared compiler stub:
+ Say, you ask Spack to install a particular version
+ ``intel-parallel-studio@edition.V``. Spack will apply an unrelated compiler
+ spec to concretize and install your request, resulting in
+ ``intel-parallel-studio@edition.V %X``. That compiler ``%X`` is not going to
+ be the version that this new package itself provides. Rather, it would
+ typically be ``%gcc@...`` in a default Spack installation or possibly indeed
+ ``%intel@...``, but at a version that precedes ``V``.
+
+ The problem comes to the fore as soon as you try to use any virtual ``mkl``
+ or ``mpi`` packages that you would expect to now be provided by
+ ``intel-parallel-studio@edition.V``. Spack will indeed see those virtual
+ packages, but only as being tied to the compiler that the package
+ ``intel-parallel-studio@edition.V`` was concretized with *at installation*.
+ If you were to install a client package with the new compilers now available
+ to you, you would naturally run ``spack install foo +mkl %intel@V``, yet
+ Spack will either complain about ``mkl%intel@V`` being missing (because it
+ only knows about ``mkl%X``) or it will go and attempt to install *another
+ instance* of ``intel-parallel-studio@edition.V %intel@V`` so as to match the
+ compiler spec ``%intel@V`` that you gave for your client package ``foo``.
+ This will be unexpected and will quickly get annoying because each
+ reinstallation takes up time and extra disk space.
+
+ To escape this trap, put the compiler stub declaration shown here in place,
+ then use that pre-declared compiler spec to install the actual package, as
+ shown next. This approach works because during installation only the
+ package's own self-sufficient installer will be used, not any compiler.
+
+ .. _`verify-compiler-anticipated`:
+
+3. Verify that the compiler version provided by the new ``studio`` version
+ would be used as expected if you were to compile a client package:
+
+ .. code-block:: console
+
+ $ spack spec zlib %intel
+
+ If the version does not match, explicitly state the anticipated compiler version, e.g.:
+
+ .. code-block:: console
+
+ $ spack spec zlib %intel@18.0.3
+
+ If there are problems, review and correct the compiler's ``compilers.yaml``
+ entry, be it still in stub form or already complete (as it would be for a
+ re-installation).
+
+4. Install the new ``studio`` package using Spack's regular ``install``
+ command.
+ It may be wise to provide the anticipated compiler (`see above
+ <verify-compiler-anticipated_>`_) as an explicit concretization
+ element:
+
+ .. code-block:: console
+
+ $ spack install intel-parallel-studio@cluster.2018.3 %intel@18.0.3
+
+5. Follow the same steps as under `Integrating external compilers`_ to tell
+ Spack the minutiae for actually using those compilers with client packages.
+ If you placed a stub entry in a ``compilers.yaml`` file, now is the time to
+ edit it and fill in the particulars.
+
+ * Under ``paths:``, give the full paths to the actual compiler binaries (``icc``,
+ ``ifort``, etc.) located within the Spack installation tree, in all their
+ unsightly length [fn9]_.
+
+ To determine the full path to the C compiler, adapt and run:
+
+ .. code-block:: console
+
+ $ find `spack location -i intel-parallel-studio@cluster.2018.3` \
+ -name icc -type f -ls
+
+ If you get hits for both ``intel64`` and ``ia32``, you almost certainly will
+ want to use the ``intel64`` variant. The ``icpc`` and ``ifort`` compilers
+ will be located in the same directory as ``icc``.
+
+ * Use the ``modules:`` and/or ``cflags:`` tokens to specify a suitable accompanying
+ ``gcc`` version to help pacify picky client packages that ask for C++
+ standards more recent than supported by your system-provided ``gcc`` and its
+ ``libstdc++.so``.
+
+ * To set the Intel compilers for default use in Spack, instead of the usual ``%gcc``,
+ follow section `Selecting Intel compilers`_.
+
+.. tip::
+
+ Compiler packages like ``intel-parallel-studio`` can easily be above 10 GB
+ in size, which can tax the disk space available for temporary files on
+ small, busy, or restricted systems (like virtual machines). The Intel
+ installer will stop and report insufficient space as::
+
+ ==> './install.sh' '--silent' 'silent.cfg'
+ ...
+ Missing critical prerequisite
+ -- Not enough disk space
+
+ As first remedy, clean Spack's existing staging area:
+
+ .. code-block:: console
+
+ $ spack clean --stage
+
+ then retry installing the large package. Spack normally cleans staging
+ directories but certain failures may prevent it from doing so.
+
+ If the error persists, tell Spack to use an alternative location for
+ temporary files:
+
+ 1. Run ``df -h`` to identify an alternative location on your system.
+
+ 2. Tell Spack to use that location for staging. Do **one** of the following:
+
+ * Run Spack with the environment variable ``TMPDIR`` altered for just a
+ single command. For example, to use your ``$HOME`` directory:
+
+ .. code-block:: console
+
+ $ TMPDIR="$HOME/spack-stage" spack install ....
+
+ This example uses Bourne shell syntax. Adapt for other shells as needed.
+
+ * Alternatively, customize
+ Spack's ``build_stage`` :ref:`configuration setting <config-overrides>`.
+
+ .. code-block:: console
+
+ $ spack config edit config
+
+ Append:
+
+ .. code-block:: yaml
+
+ config:
+ build_stage:
+ - /home/$user/spack-stage
+
+ Do not duplicate the ``config:`` line if it already is present.
+ Adapt the location, which here is the same as in the preceding example.
+
+ 3. Retry installing the large package.
+
+
+.. _intel-install-libs:
+
+""""""""""""""""""""""""""""""""""""""""""""""""""""""""
+Install steps for library-only packages
+""""""""""""""""""""""""""""""""""""""""""""""""""""""""
+
+To install library-only packages like ``intel-mkl``, ``intel-mpi``, and ``intel-daal``
+follow the steps given here.
+For packages that contain a compiler, follow `the previous section
+<intel-install-studio_>`_ instead.
+
+1. For pre-2017 product releases, review the section `Configuring Spack to use Intel licenses`_.
+
+2. Inspect the package spec. Specify an explicit compiler if necessary, e.g.:
+
+ .. code-block:: console
+
+ $ spack spec intel-mpi@2018.3.199
+ $ spack spec intel-mpi@2018.3.199 %intel
+
+ Check that the package will use the compiler flavor and version that you expect.
+
+3. Install the package normally within Spack. Use the same spec as in the
+ previous command, i.e., as general or as specific as needed:
+
+ .. code-block:: console
+
+ $ spack install intel-mpi@2018.3.199
+ $ spack install intel-mpi@2018.3.199 %intel@18
+
+4. To prepare the new packages for use with client packages,
+ follow `Selecting libraries to satisfy virtual packages`_.
+
+
+""""""""""""""""
+Debug notes
+""""""""""""""""
+
+* You can trigger a wall of additional diagnostics using Spack options, e.g.:
+
+ .. code-block:: console
+
+ $ spack --debug -v install intel-mpi
+
+ The ``--debug`` option can also be useful while installing client
+ packages `(see below) <Using Intel tools in Spack to install client
+ packages_>`_ to confirm the integration of the Intel tools in Spack, notably
+ MKL and MPI.
+
+* The ``.spack/`` subdirectory of an installed ``IntelPackage`` will contain,
+ besides Spack's usual archival items, a copy of the ``silent.cfg`` file that
+ was passed to the Intel installer:
+
+ .. code-block:: console
+
+ $ grep COMPONENTS ...intel-mpi...<hash>/.spack/silent.cfg
+ COMPONENTS=ALL
+
+* If an installation error occurs, Spack will normally clean up and remove a
+ partially installed target directory. You can direct Spack to keep it using
+ ``--keep-prefix``, e.g.:
+
+ .. code-block:: console
+
+ $ spack install --keep-prefix intel-mpi
+
+ You must, however, *remove such partial installations* prior to subsequent
+ installation attempts. Otherwise, the Intel installer will behave
+ incorrectly.
+
+
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Using Intel tools in Spack to install client packages
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Finally, this section pertains to `route 3`_ from the introduction.
+
+Once Intel tools are integrated into Spack as external packages or installed
+within Spack as internal packages, they can be used as intended for installing
+client packages.
+
+
+.. _`select-intel-compilers`:
+
+""""""""""""""""""""""""""
+Selecting Intel compilers
+""""""""""""""""""""""""""
+
+Select Intel compilers to compile client packages, like any compiler in Spack,
+by one of the following means:
+
+* Request the Intel compilers explicitly in the client spec, e.g.:
+
+ .. code-block:: console
+
+ $ spack install libxc@3.0.0%intel
+
+
+* Alternatively, request Intel compilers implicitly by concretization preferences.
+ Configure the order of compilers in the appropriate ``packages.yaml`` file,
+ under either an ``all:`` or client-package-specific entry, in a
+ ``compiler:`` list. Consult the Spack documentation for
+ :ref:`Configuring Package Preferences <configs-tutorial-package-prefs>`
+ and
+ :ref:`Concretization Preferences <concretization-preferences>`.
+
+Example: ``etc/spack/packages.yaml`` might simply contain:
+
+.. code-block:: yaml
+
+ packages:
+ all:
+ compiler: [ intel, gcc, ]
+
+To be more specific, you can state partial or full compiler version numbers,
+for example:
+
+.. code-block:: yaml
+
+ packages:
+ all:
+ compiler: [ intel@18, intel@17, gcc@4.4.7, gcc@4.9.3, gcc@7.3.0, ]
+
+
+
+""""""""""""""""""""""""""""""""""""""""""""""""
+Selecting libraries to satisfy virtual packages
+""""""""""""""""""""""""""""""""""""""""""""""""
+
+Intel packages, whether integrated into Spack as external packages or
+installed within Spack, can be called upon to satisfy the requirement of a
+client package for a library that is available from different providers.
+The relevant virtual packages for Intel are ``blas``, ``lapack``,
+``scalapack``, and ``mpi``.
+
+In both integration routes, Intel packages can have optional
+:ref:`variants <basic-variants>`
+which alter the list of virtual packages they can satisfy. For Spack-external
+packages, the active variants are a combination of the defaults declared in
+Spack's package repository and the spec under which the package is declared in ``packages.yaml``.
+Needless to say, those should match the components that are actually present in
+the external product installation. Likewise, for Spack-internal packages, the
+active variants are determined, persistently at installation time, from the
+defaults in the repository and the spec selected to be installed.
+
+To have Intel packages satisfy virtual package requests for all or selected
+client packages, edit the ``packages.yaml`` file. Customize, either in the
+``all:`` or a more specific entry, a ``providers:`` dictionary whose keys are
+the virtual packages and whose values are the Spack specs that satisfy the
+virtual package, in order of decreasing preference. To learn more about the
+``providers:`` settings, see the Spack tutorial for
+:ref:`Configuring Package Preferences <configs-tutorial-package-prefs>`
+and the section
+:ref:`Concretization Preferences <concretization-preferences>`.
+
+Example: The following fairly minimal example for ``packages.yaml`` shows how
+to exclusively use the standalone ``intel-mkl`` package for all the linear
+algebra virtual packages in Spack, and ``intel-mpi`` as the preferred MPI
+implementation. Other providers can still be chosen on a per-package basis.
+
+.. code-block:: yaml
+
+ packages:
+ all:
+ providers:
+ mpi: [intel-mpi]
+ blas: [intel-mkl]
+ lapack: [intel-mkl]
+ scalapack: [intel-mkl]
+
+If you have access to the ``intel-parallel-studio@cluster`` edition, you can
+use instead:
+
+.. code-block:: yaml
+
+ all:
+ providers:
+ mpi: [intel-parallel-studio+mpi]
+ # Note: +mpi vs. +mkl
+ blas: [intel-parallel-studio+mkl]
+ lapack: [intel-parallel-studio+mkl]
+ scalapack: [intel-parallel-studio+mkl]
+
+If you installed ``intel-parallel-studio`` within Spack ("`route 2`_"), make
+sure you followed the `special installation step
+<intel-compiler-anticipation_>`_ to ensure that its virtual packages match the
+compilers it provides.
+
+
+""""""""""""""""""""""""""""""""""""""""""""
+Using Intel tools as explicit dependency
+""""""""""""""""""""""""""""""""""""""""""""
+
+With the proper installation as detailed above, no special steps should be
+required when a client package specifically (and thus deliberately) requests an
+Intel package as a dependency, this being one of the target use cases for Spack.
+
+
+"""""""""""""""""""""""""""""""""""""""""""""""
+Tips for configuring client packages to use MKL
+"""""""""""""""""""""""""""""""""""""""""""""""
+
+The Math Kernel Library (MKL) is provided by several Intel packages, currently
+``intel-parallel-studio`` when variant ``+mkl`` is active (it is by default)
+and the standalone ``intel-mkl``. Because of these different provider packages,
+a *virtual* ``mkl`` package is declared in Spack.
+
+* To use MKL-specific APIs in a client package:
+
+ Declare a dependency on ``mkl``, rather than a specific provider like
+ ``intel-mkl``. Declare the dependency either absolutely or conditionally
+ based on variants that your package might have declared:
+
+ .. code-block:: python
+
+ # Examples for absolute and conditional dependencies:
+ depends_on('mkl')
+ depends_on('mkl', when='+mkl')
+ depends_on('mkl', when='fftw=mkl')
+
+ The ``MKLROOT`` environment variable (part of the documented API) will be set
+ during all stages of client package installation, and is available to both
+ the Spack packaging code and the client code.
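+
+ As an illustration only (the ``--with-mkl-dir`` flag below is a hypothetical
+ example, not a documented option of any particular package), client package
+ code could consume ``MKLROOT`` directly:
+
+ .. code-block:: python
+
+    import os
+
+    def configure_args(self):
+        # MKLROOT is exported by the mkl provider during the build.
+        return ['--with-mkl-dir={0}'.format(os.environ['MKLROOT'])]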
+
+* To use MKL as provider for BLAS, LAPACK, or ScaLAPACK:
+
+ The packages that provide ``mkl`` also provide the narrower
+ virtual ``blas``, ``lapack``, and ``scalapack`` packages.
+ See the relevant :ref:`Packaging Guide section <blas_lapack_scalapack>`
+ for an introduction.
+ To portably use these virtual packages, construct preprocessor and linker
+ option strings in your package configuration code using the package functions
+ ``.headers`` and ``.libs`` in conjunction with utility functions from the
+ following classes:
+
+ * :py:class:`llnl.util.filesystem.FileList`,
+ * :py:class:`llnl.util.filesystem.HeaderList`,
+ * :py:class:`llnl.util.filesystem.LibraryList`.
+
+ .. tip::
+ *Do not* use constructs like ``.prefix.include`` or ``.prefix.lib``, with
+ Intel or any other implementation of ``blas``, ``lapack``, and
+ ``scalapack``.
+
+ For example, for an
+ :ref:`AutotoolsPackage <autotoolspackage>`
+ use ``.libs.ld_flags`` to transform the library file list into linker options
+ passed to ``./configure``:
+
+ .. code-block:: python
+
+ def configure_args(self):
+ args = []
+ ...
+ args.append('--with-blas=%s' % self.spec['blas'].libs.ld_flags)
+ args.append('--with-lapack=%s' % self.spec['lapack'].libs.ld_flags)
+ ...
+
+ .. tip::
+ Even though ``.ld_flags`` will return a string of multiple words, *do not*
+ use quotes for options like ``--with-blas=...`` because Spack passes them
+ to ``./configure`` without invoking a shell.
+
+ Likewise, in a
+ :ref:`MakefilePackage <makefilepackage>`
+ or similar package that does not use Autotools, you may need to provide include
+ and link options for use on command lines or in environment variables.
+ For example, to generate an option string of the form ``-I<dir>``, use:
+
+ .. code-block:: python
+
+ self.spec['blas'].headers.include_flags
+
+ and to generate linker options (``-L<dir> -llibname ...``), use the same as above,
+
+ .. code-block:: python
+
+ self.spec['blas'].libs.ld_flags
+
+ See
+ :ref:`MakefilePackage <makefilepackage>`
+ and more generally the
+ :ref:`Packaging Guide <blas_lapack_scalapack>`
+ for background and further examples.
+
+
+^^^^^^^^^^
+Footnotes
+^^^^^^^^^^
+
+.. [fn1] Strictly speaking, versions from ``2017.2`` onward.
+
+.. [fn2] The package ``intel`` intentionally does not have a ``+mpi`` variant since
+ it is meant to be small. The native installer will always add MPI *runtime*
+ components because it follows defaults defined in the download package, even
+ when ``intel-parallel-studio ~mpi`` has been requested.
+
+ For ``intel-parallel-studio +mpi``, the class function
+ :py:func:`.IntelPackage.pset_components`
+ will include ``"intel-mpi intel-imb"`` in a list of component patterns passed
+ to the Intel installer. The installer will extend each pattern word with an
+ implied glob-like ``*`` to resolve it to package names that are
+ *actually present in the product BOM*.
+ As a side effect, this pattern approach accommodates occasional package name
+ changes, e.g., capturing both ``intel-mpirt`` and ``intel-mpi-rt``.
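+
+ A minimal sketch of this glob-like resolution (the BOM names shown here are
+ made up for illustration):
+
+ .. code-block:: python
+
+    import fnmatch
+
+    patterns = ['intel-mpi', 'intel-imb']
+    bom = ['intel-mpi-rt__x86_64', 'intel-mpirt__x86_64', 'intel-imb__x86_64']
+    selected = [name for name in bom
+                if any(fnmatch.fnmatch(name, p + '*') for p in patterns)]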
+
+.. [fn3] How could the external installation have succeeded otherwise?
+
+.. [fn4] According to Intel's documentation, there is supposedly a way to install a
+ product using a network license even `when a FLEXlm server is not running
+ <https://software.intel.com/en-us/articles/licensing-setting-up-the-client-floating-license>`_:
+ Specify the license in the form ``port@serverhost`` in the
+ ``INTEL_LICENSE_FILE`` environment variable. All other means of specifying a
+ network license require that the license server be up.
+
+.. [fn5] Despite the name, ``INTEL_LICENSE_FILE`` can hold several and diverse entries.
+ They can be either directories (presumed to contain ``*.lic`` files), file
+ names, or network locations in the form ``port@host``, with all items
+ separated by ":" (on Linux and macOS).
+
+.. [fn6] Should said editor turn out to be ``vi``, you had better know how to
+ use it.
+
+.. [fn7] Comment lines in FLEXlm files, indicated by ``#`` as the first
+ non-whitespace character on the line, are generally allowed anywhere in the file.
+ There `have been reports <https://github.com/spack/spack/issues/6534>`_,
+ however, that as of 2018, ``SERVER`` and ``USE_SERVER`` lines must precede
+ any comment lines.
+
+..
+ .. [fnX] The name component ``intel`` of the compiler spec is separate from (in
+ a different namespace than) the names of the Spack packages
+ ``intel-parallel-studio`` and ``intel``. Both of the latter provide the former.
+
+.. [fn8] Spack's close coupling of installed packages to compilers, which both
+ necessitates the detour for installing ``intel-parallel-studio`` and
+ largely limits any of its provided virtual packages to a single compiler, heavily
+ favors `installing Intel Parallel Studio outside of Spack
+ <integrate-external-intel_>`_ and declaring it to Spack in ``packages.yaml``
+ with a `compiler-less spec <compiler-neutral-package_>`_.
+
+.. [fn9] With some effort, you can convince Spack to use shorter paths.
+
+ .. warning:: Altering the naming scheme means that Spack will lose track of
+ all packages it has installed for you so far.
+ That said, the time is right for this kind of customization
+ when you are defining a new set of compilers.
+
+ The relevant tunables are:
+
+ 1. Set the ``install_tree`` location in ``config.yaml``
+ (:ref:`see doc <config-yaml>`).
+ 2. Set the hash length in ``install_path_scheme``, also in ``config.yaml``
+ (:ref:`q.v. <config-yaml>`).
+ 3. You will want to set the *same* hash length for
+ :ref:`tcl module files <modules-naming-scheme>`
+ if you have Spack produce them for you, under ``naming_scheme`` in
+ ``modules.yaml``. Other module dialects cannot be altered in this manner.
diff --git a/lib/spack/docs/getting_started.rst b/lib/spack/docs/getting_started.rst
index 1ea3c1a0e9..aaaeb9dec7 100644
--- a/lib/spack/docs/getting_started.rst
+++ b/lib/spack/docs/getting_started.rst
@@ -484,6 +484,9 @@ simple package. For example:
$ spack install zlib%gcc@5.3.0
+
+.. _vendor-specific-compiler-configuration:
+
--------------------------------------
Vendor-Specific Compiler Configuration
--------------------------------------
diff --git a/lib/spack/docs/module_file_support.rst b/lib/spack/docs/module_file_support.rst
index 41e1245d9a..e699f4244a 100644
--- a/lib/spack/docs/module_file_support.rst
+++ b/lib/spack/docs/module_file_support.rst
@@ -479,6 +479,9 @@ you will prevent the generation of module files for any package that
is compiled with ``gcc@4.4.7``, with the only exception of any ``gcc``
or any ``llvm`` installation.
+
+.. _modules-naming-scheme:
+
"""""""""""""""""""""""""""
Customize the naming scheme
"""""""""""""""""""""""""""
diff --git a/lib/spack/docs/packaging_guide.rst b/lib/spack/docs/packaging_guide.rst
index 12abcfd7ea..683bb01503 100644
--- a/lib/spack/docs/packaging_guide.rst
+++ b/lib/spack/docs/packaging_guide.rst
@@ -2759,6 +2759,8 @@ is handy when a package supports additional variants like
variant('openmp', default=True, description="Enable OpenMP support.")
+.. _blas_lapack_scalapack:
+
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Blas, Lapack and ScaLapack libraries
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/lib/spack/docs/tutorial_configuration.rst b/lib/spack/docs/tutorial_configuration.rst
index f185b8b722..2e97eab590 100644
--- a/lib/spack/docs/tutorial_configuration.rst
+++ b/lib/spack/docs/tutorial_configuration.rst
@@ -325,6 +325,8 @@ license server, you can set this in ``compilers.yaml`` as follows:
...
+.. _configs-tutorial-package-prefs:
+
-------------------------------
Configuring Package Preferences
-------------------------------
diff --git a/lib/spack/spack/build_systems/README-intel.rst b/lib/spack/spack/build_systems/README-intel.rst
new file mode 100644
index 0000000000..6efbd09dd4
--- /dev/null
+++ b/lib/spack/spack/build_systems/README-intel.rst
@@ -0,0 +1,660 @@
+====================================
+Development Notes on Intel Packages
+====================================
+
+These are notes for concepts and development of
+lib/spack/spack/build_systems/intel.py .
+
+For documentation on how to *use* ``IntelPackage``, see
+lib/spack/docs/build_systems/intelpackage.rst .
+
+-------------------------------------------------------------------------------
+Installation and path handling as implemented in ./intel.py
+-------------------------------------------------------------------------------
+
+
+***************************************************************************
+Prefix differences between Spack-external and Spack-internal installations
+***************************************************************************
+
+
+Problem summary
+~~~~~~~~~~~~~~~~
+
+For Intel packages that were installed external to Spack, ``self.prefix`` will
+be a *component-specific* path (e.g. to an MKL-specific dir hierarchy), whereas
+for a package installed by Spack itself, ``self.prefix`` will be a
+*vendor-level* path that holds one or more components (or parts thereof), and
+must be further qualified down to a particular desired component.
+
+It is possible that a similar conceptual difference is inherent to other
+package families that use a common vendor-style installer.
+
+
+Description
+~~~~~~~~~~~~
+
+Spack makes packages available through two routes; let's call them A and B:
+
+A. Packages pre-installed external to Spack and configured *for* Spack
+B. Packages built and installed *by* Spack.
+
+For a user who is interested in building end-user applications, it should not
+matter through which route any of the dependent packages has been installed.
+Most packages natively support a ``prefix`` concept which unifies the two
+routes just fine.
+
+Intel packages, however, are more complicated because they consist of a number
+of components that are released as a suite of varying extent, like "Intel
+Parallel Studio *Foo* Edition", or subsetted into products like "MKL" or "MPI",
+each of which also contains libraries from other components like the compiler
+runtime and multithreading libraries. For this reason, an Intel package is
+"anchored" during installation at a directory level higher than just the
+user-facing directory that has the conventional hierarchy of ``bin``, ``lib``,
+and others relevant for the end-product.
+
+As a result, internal to Spack, there is a conceptual difference in what
+``self.prefix`` represents for the two routes.
+
+For route A, consider MKL installed outside of Spack. It will likely be one
+product component among other products, at one particular release among others
+that are installed in sibling or cousin directories on the local system.
+Therefore, the path given to Spack in ``packages.yaml`` should be a
+*product-specific and fully version-specific* directory. E.g., for an
+``intel-mkl`` package, ``self.prefix`` should look like::
+
+ /opt/intel/compilers_and_libraries_2018.1.163/linux/mkl
+
+In this route, the interaction point with the user is encapsulated in an
+environment variable which will be (in pseudo-code)::
+
+ MKLROOT := {self.prefix}
+
+For route B, a Spack-based installation of MKL will be placed in the directory
+given to the ``./install.sh`` script of Intel's package distribution. This
+directory is taken to be the *vendor*-specific anchor directory, playing the
+same role as the default ``/opt/intel``. In this case, ``self.prefix`` will
+be::
+
+ $SPACK_ROOT/opt/spack/linux-centos6-x86_64/gcc-4.9.3/intel-mkl-2018.1.163-<HASH>
+
+However, now the environment variable will have to be constructed as *several
+directory levels down*::
+
+ MKLROOT := {self.prefix}/compilers_and_libraries_2018.1.163/linux/mkl
+
+A recent post on the Spack mailing list illustrates the confusion when route A
+was taken while route B was the only one that was coded in Spack:
+https://groups.google.com/d/msg/spack/x28qlmqPAys/Ewx6220uAgAJ
+
+
+Solution
+~~~~~~~~~
+
+Introduce a series of functions which will return the appropriate
+directories, regardless of whether the Intel package has been installed
+external or internal to Spack:
+
+========================== ==================================================
+Function Example return values
+-------------------------- --------------------------------------------------
+normalize_suite_dir() Spack-external installation:
+ /opt/intel/compilers_and_libraries_2018.1.163
+ Spack-internal installation:
+ $SPACK_ROOT/...<HASH>/compilers_and_libraries_2018.1.163
+-------------------------- --------------------------------------------------
+normalize_path('mkl') <suite_dir>/linux/mkl
+component_bin_dir() <suite_dir>/linux/mkl/bin
+component_lib_dir() <suite_dir>/linux/mkl/lib/intel64
+-------------------------- --------------------------------------------------
+normalize_path('mpi') <suite_dir>/linux/mpi
+component_bin_dir('mpi') <suite_dir>/linux/mpi/intel64/bin
+component_lib_dir('mpi') <suite_dir>/linux/mpi/intel64/lib
+========================== ==================================================
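+
+A minimal sketch of the underlying idea (not the actual ``intel.py``
+implementation; the directory pattern and layout assumptions are simplified):
+
+.. code-block:: python
+
+   import os
+   import re
+
+   SUITE_DIR_PATTERN = r'compilers_and_libraries_\d{4}\.\d+\.\d+'
+
+   def normalize_suite_dir(prefix):
+       """Return the version-specific suite dir for either install route."""
+       # Route A: the external prefix already lies at or below the suite dir.
+       m = re.search(SUITE_DIR_PATTERN, prefix)
+       if m:
+           return prefix[:m.end()]
+       # Route B: the suite dir sits directly below the Spack-assigned prefix.
+       for entry in sorted(os.listdir(prefix)):
+           if re.match(SUITE_DIR_PATTERN + '$', entry):
+               return os.path.join(prefix, entry)
+       return prefix
+
+   def component_lib_dir(prefix, component='mkl'):
+       # Sketch only; the real code also handles per-component differences
+       # such as the extra ``intel64`` level for MPI.
+       return os.path.join(normalize_suite_dir(prefix), 'linux', component,
+                           'lib', 'intel64')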
+
+
+*********************************
+Analysis of directory layouts
+*********************************
+
+Let's look at some sample directory layouts, using ``ls -lF``,
+but focusing on names and symlinks only.
+
+Spack-born installation of ``intel-mkl@2018.1.163``
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+::
+
+ $ ls -l <prefix>
+
+ bin/
+ - compilervars.*sh (symlinked) ONLY
+
+ compilers_and_libraries -> compilers_and_libraries_2018
+ - generically-named entry point, stable across versions (one hopes)
+
+ compilers_and_libraries_2018/
+ - vaguely-versioned dirname, holding a stub hierarchy --ignorable
+
+ $ ls -l compilers_and_libraries_2018/linux/
+ bin - actual compilervars.*sh (reg. files) ONLY
+ documentation -> ../../documentation_2018/
+ lib -> ../../compilers_and_libraries_2018.1.163/linux/compiler/lib/
+ mkl -> ../../compilers_and_libraries_2018.1.163/linux/mkl/
+ pkg_bin -> ../../compilers_and_libraries_2018.1.163/linux/bin/
+ samples -> ../../samples_2018/
+ tbb -> ../../compilers_and_libraries_2018.1.163/linux/tbb/
+
+ compilers_and_libraries_2018.1.163/
+ - Main "product" + a minimal set of libs from related products
+
+ $ ls -l compilers_and_libraries_2018.1.163/linux/
+ bin/ - compilervars.*sh, link_install*sh ONLY
+ mkl/ - Main Product ==> to be assigned to MKLROOT
+ compiler/ - lib/intel64_lin/libiomp5* ONLY
+ tbb/ - tbb/lib/intel64_lin/gcc4.[147]/libtbb*.so* ONLY
+
+ parallel_studio_xe_2018 -> parallel_studio_xe_2018.1.038/
+ parallel_studio_xe_2018.1.038/
+ - Alternate product packaging - ignorable
+
+ $ ls -l parallel_studio_xe_2018.1.038/
+ bin/ - actual psxevars.*sh (reg. files)
+ compilers_and_libraries_2018 -> <full_path>/comp...aries_2018.1.163
+ documentation_2018 -> <full_path_prefix>/documentation_2018
+ samples_2018 -> <full_path_prefix>/samples_2018
+ ...
+
+ documentation_2018/
+ samples_2018/
+ lib -> compilers_and_libraries/linux/lib/
+ mkl -> compilers_and_libraries/linux/mkl/
+ tbb -> compilers_and_libraries/linux/tbb/
+ - auxiliaries and convenience links
+
+Spack-external installation of Intel-MPI 2018
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+For MPI, the layout is slightly different from MKL's. The prefix will have to
+include an architecture directory (typically ``intel64``), which then contains
+bin/, lib/, ..., all without further architecture branching. The environment
+variable ``I_MPI_ROOT`` from the API documentation, however, must be the
+package's top directory, not including the architecture.
+
+FIXME: For MANPATH, need the parent dir.
+
+::
+
+ $ ls -lF /opt/intel/compilers_and_libraries_2018.1.163/linux/mpi/
+ bin64 -> intel64/bin/
+ etc64 -> intel64/etc/
+ include64 -> intel64/include/
+ lib64 -> intel64/lib/
+
+ benchmarks/
+ binding/
+ intel64/
+ man/
+ test/
+
+The package contains an MPI-2019 preview. Curiously, its release notes contain
+the tag: "File structure clean-up." I could not find further documentation on
+this, however, so it is unclear what, if any, changes will make it to release.
+
+https://software.intel.com/en-us/articles/restoring-legacy-path-structure-on-intel-mpi-library-2019
+
+::
+
+ $ ls -lF /opt/intel/compilers_and_libraries_2018.1.163/linux/mpi_2019/
+ binding/
+ doc/
+ imb/
+ intel64/
+ man/
+ test/
+
+Spack-external installation of Intel Parallel Studio 2018
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This is the main product bundle that I actually downloaded and installed on my
+system. Its nominal installation directory holds mostly symlinks
+to components installed in sibling dirs::
+
+ $ ls -lF /opt/intel/parallel_studio_xe_2018.1.038/
+ advisor_2018 -> /opt/intel/advisor_2018/
+ clck_2018 -> /opt/intel/clck/2018.1/
+ compilers_and_libraries_2018 -> /opt/intel/comp....aries_2018.1.163/
+ documentation_2018 -> /opt/intel/documentation_2018/
+ ide_support_2018 -> /opt/intel/ide_support_2018/
+ inspector_2018 -> /opt/intel/inspector_2018/
+ itac_2018 -> /opt/intel/itac/2018.1.017/
+ man -> /opt/intel/man/
+ samples_2018 -> /opt/intel/samples_2018/
+ vtune_amplifier_2018 -> /opt/intel/vtune_amplifier_2018/
+
+ psxevars.csh -> ./bin/psxevars.csh*
+ psxevars.sh -> ./bin/psxevars.sh*
+ bin/ - *vars.*sh scripts + sshconnectivity.exp ONLY
+
+ licensing/
+ uninstall*
+
+The only relevant regular files are ``*vars.*sh``, but those also just churn
+through the subordinate vars files of the components.
+
+Installation model
+~~~~~~~~~~~~~~~~~~~~
+
+Intel packages come with an ``install.sh`` script that is normally run
+interactively (in either text or GUI mode) but can be run unattended with a
+``--silent <file>`` option, which is of course what Spack uses.
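+
+Run by hand from an unpacked staging directory, an unattended installation
+boils down to something like the following sketch (directory and file names
+are illustrative)::
+
+    $ cd l_mkl_2018.1.163
+    $ ./install.sh --silent ./silent.cfg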
+
+Format of configuration file
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The configuration file is conventionally called ``silent.cfg`` and has a simple
+``token=value`` syntax. Before using the configuration file, the installer
+calls ``<staging_dir>/pset/check.awk`` to validate it. Example paths to the
+validator are::
+
+    .../l_mkl_2018.1.163/pset/check.awk
+    .../parallel_studio_xe_2018_update1_cluster_edition/pset/check.awk
+
+The tokens that are accepted in the configuration file vary between packages.
+Tokens not supported for a given package **will cause the installer to stop
+and fail.** This is particularly relevant for license-related tokens, which are
+accepted only for packages that actually require a license.
+
+Reference: Intel's documentation at
+https://software.intel.com/en-us/articles/configuration-file-format
+
+See also: https://software.intel.com/en-us/articles/silent-installation-guide-for-intel-parallel-studio-xe-composer-edition-for-os-x
+
+The following is from ``.../parallel_studio_xe_2018_update1_cluster_edition/pset/check.awk``:
+
+* Tokens valid for all packages encountered::
+
+ ACCEPT_EULA {accept, decline}
+ CONTINUE_WITH_OPTIONAL_ERROR {yes, no}
+ PSET_INSTALL_DIR {/opt/intel, , filepat}
+ CONTINUE_WITH_INSTALLDIR_OVERWRITE {yes, no}
+ COMPONENTS {ALL, DEFAULTS, , anythingpat}
+ PSET_MODE {install, repair, uninstall}
+ NONRPM_DB_DIR {, filepat}
+
+ SIGNING_ENABLED {yes, no}
+ ARCH_SELECTED {IA32, INTEL64, ALL}
+
+* Mentioned but unexplained in ``check.awk``::
+
+ NO_VALIDATE (?!)
+
+* Only for licensed packages::
+
+      ACTIVATION_SERIAL_NUMBER {, snpat}
+      ACTIVATION_LICENSE_FILE {, lspat, filepat}
+      ACTIVATION_TYPE {exist_lic, license_server,
+                       license_file, trial_lic,
+                       serial_number}
+      PHONEHOME_SEND_USAGE_DATA {yes, no}
+
+* Only for Amplifier (obviously)::
+
+ AMPLIFIER_SAMPLING_DRIVER_INSTALL_TYPE {build, kit}
+ AMPLIFIER_DRIVER_ACCESS_GROUP {, anythingpat, vtune}
+ AMPLIFIER_DRIVER_PERMISSIONS {, anythingpat, 666}
+ AMPLIFIER_LOAD_DRIVER {yes, no}
+ AMPLIFIER_C_COMPILER {, filepat, auto, none}
+ AMPLIFIER_KERNEL_SRC_DIR {, filepat, auto, none}
+ AMPLIFIER_MAKE_COMMAND {, filepat, auto, none}
+ AMPLIFIER_INSTALL_BOOT_SCRIPT {yes, no}
+ AMPLIFIER_DRIVER_PER_USER_MODE {yes, no}
+
+* Only for MKL and Studio::
+
+ CLUSTER_INSTALL_REMOTE {yes, no}
+ CLUSTER_INSTALL_TEMP {, filepat}
+ CLUSTER_INSTALL_MACHINES_FILE {, filepat}
+
+* "backward compatibility" (?)::
+
+ INSTALL_MODE {RPM, NONRPM}
+ download_only {yes}
+ download_dir {, filepat}
+
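+Pulling the universally valid tokens together, a minimal ``silent.cfg`` for an
+unlicensed library product might look like the following sketch (remember that
+tokens and values are validated per package, so adjust as needed)::
+
+      ACCEPT_EULA=accept
+      CONTINUE_WITH_OPTIONAL_ERROR=yes
+      PSET_INSTALL_DIR=/opt/intel
+      CONTINUE_WITH_INSTALLDIR_OVERWRITE=yes
+      PSET_MODE=install
+      ARCH_SELECTED=INTEL64
+      COMPONENTS=ALL
+
+Licensed products additionally need the activation tokens discussed next.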
+
+Details for licensing tokens
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Quoted from
+https://software.intel.com/en-us/articles/configuration-file-format,
+for reference:
+
+[ed. note: As of 2018-05, the page incorrectly references ``ACTIVATION``, which
+was used only until about 2012; this is corrected to ``ACTIVATION_TYPE`` here.]
+
+ ...
+
+ ``ACTIVATION_TYPE=exist_lic``
+ This directive tells the install program to look for an existing
+ license during the install process. This is the preferred method for
+ silent installs. Take the time to register your serial number and get
+ a license file (see below). Having a license file on the system
+ simplifies the process. In addition, as an administrator it is good
+ practice to know WHERE your licenses are saved on your system.
+ License files are plain text files with a .lic extension. By default
+ these are saved in /opt/intel/licenses which is searched by default.
+ If you save your license elsewhere, perhaps under an NFS folder, set
+ environment variable **INTEL_LICENSE_FILE** to the full path to your
+ license file prior to starting the installation or use the
+ configuration file directive ``ACTIVATION_LICENSE_FILE`` to specify the
+ full pathname to the license file.
+
+ Options for ``ACTIVATION_TYPE`` are ``{ exist_lic, license_file, server_lic,
+ serial_number, trial_lic }``
+
+ ``exist_lic``
+ directs the installer to search for a valid license on the server.
+ Searches will utilize the environment variable **INTEL_LICENSE_FILE**,
+ search the default license directory /opt/intel/licenses, or use the
+ ``ACTIVATION_LICENSE_FILE`` directive to find a valid license file.
+
+ ``license_file``
+ is similar to exist_lic but directs the installer to use
+ ``ACTIVATION_LICENSE_FILE`` to find the license file.
+
+ ``server_lic``
+        is similar to exist_lic but directs the installer that
+ this is a client installation and a floating license server will be
+        contacted to activate the product. This option will contact your
+ floating license server on your network to retrieve the license
+ information. BEFORE using this option make sure your client is
+ correctly set up for your network including all networking, routing,
+        name service, and firewall configuration. Ensure that your client has
+ direct access to your floating license server and that firewalls are
+ set up to allow TCP/IP access for the 2 license server ports.
+ server_lic will use **INTEL_LICENSE_FILE** containing a port@host format
+ OR a client license file. The formats for these are described here
+ https://software.intel.com/en-us/articles/licensing-setting-up-the-client-floating-license
+
+ ``serial_number``
+ directs the installer to use directive ``ACTIVATION_SERIAL_NUMBER`` for
+ activation. This method will require the installer to contact an
+ external Intel activation server over the Internet to confirm your
+        serial number. Due to user and company firewalls, this method is more
+        complex and error-prone than the other available activation methods. We
+ highly recommend using a license file or license server for activation
+ instead.
+
+ ``trial_lic``
+ is used only if you do not have an existing license and intend to
+ temporarily evaluate the compiler. This method creates a temporary
+ trial license in Trusted Storage on your system.
+
+ ...
+
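+For a floating license, the customary way to satisfy ``exist_lic`` is to point
+**INTEL_LICENSE_FILE** at the license server in the ``port@host`` form before
+running the installer, e.g. (port and hostname are placeholders)::
+
+    $ export INTEL_LICENSE_FILE=28518@license-server.example.com
+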
+*******************
+vars files
+*******************
+
+Intel's product packages contain a number of shell initialization files; let's
+call them *vars files*.
+
+There are three kinds:
+
+#. Component-specific vars files, such as ``mklvars`` or ``tbbvars``.
+#. Toplevel vars files such as ``psxevars``. They will scan for all
+ component-specific vars files associated with the product, and source them
+ if found.
+#. Symbolic links to either of them. Links may appear under a different name
+ for backward compatibility.
+
+At present, the IntelPackage class is only concerned with the toplevel vars
+files, generally found in the product's toplevel bin/ directory.
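+
+For ad-hoc use outside of Spack, such a toplevel vars file is sourced with the
+target architecture as its argument, e.g. (path shown for illustration only)::
+
+    $ source /opt/intel/parallel_studio_xe_2018.1.038/bin/psxevars.sh intel64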
+
+For reference, here is an overview of the names and locations of the vars files
+in the 2018 product releases, as seen for a Spack-native installation. NB: The
+list may be incomplete, as some components may have been omitted during
+installation.
+
+Names of vars files seen::
+
+ $ cd opt/spack/linux-centos6-x86_64
+ $ find intel* -name \*vars.sh -printf '%f\n' | sort -u | nl
+ 1 advixe-vars.sh
+ 2 amplxe-vars.sh
+ 3 apsvars.sh
+ 4 compilervars.sh
+ 5 daalvars.sh
+ 6 debuggervars.sh
+ 7 iccvars.sh
+ 8 ifortvars.sh
+ 9 inspxe-vars.sh
+ 10 ippvars.sh
+ 11 mklvars.sh
+ 12 mpivars.sh
+ 13 pstlvars.sh
+ 14 psxevars.sh
+ 15 sep_vars.sh
+ 16 tbbvars.sh
+
+Names and locations of vars files, sorted by Spack package name::
+
+ $ cd opt/spack/linux-centos6-x86_64
+ $ find intel* -name \*vars.sh -printf '%y\t%-15f\t%h\n' \
+ | cut -d/ -f1,4- \
+ | sed '/iccvars\|ifortvars/d; s,/,\t\t,; s,\.sh,,; s, */\(intel[/-]\),\1,' \
+ | sort -k3,3 -k2,2 \
+ | nl \
+ | awk '{printf "%6i %-2s %-16s %-24s %s\n", $1, $2, $3, $4, $5}'
+
+ --------------------------------------------------------------------------------------------------------
+ item no.
+ file or link
+ name of vars file
+ Spack package name
+ dir relative to Spack install dir
+ --------------------------------------------------------------------------------------------------------
+
+ 1 f mpivars intel compilers_and_libraries_2018.1.163/linux/mpi/intel64/bin
+ 2 f mpivars intel compilers_and_libraries_2018.1.163/linux/mpirt/bin/ia32_lin
+ 3 f tbbvars intel compilers_and_libraries_2018.1.163/linux/tbb/bin
+ 4 f pstlvars intel compilers_and_libraries_2018.1.163/linux/pstl/bin
+ 5 f compilervars intel compilers_and_libraries_2018.1.163/linux/bin
+ 6 f compilervars intel compilers_and_libraries_2018/linux/bin
+ 7 l compilervars intel bin
+ 8 f daalvars intel-daal compilers_and_libraries_2018.2.199/linux/daal/bin
+ 9 f psxevars intel-daal parallel_studio_xe_2018.2.046/bin
+ 10 l psxevars intel-daal parallel_studio_xe_2018.2.046
+ 11 f compilervars intel-daal compilers_and_libraries_2018.2.199/linux/bin
+ 12 f compilervars intel-daal compilers_and_libraries_2018/linux/bin
+ 13 l compilervars intel-daal bin
+ 14 f ippvars intel-ipp compilers_and_libraries_2018.2.199/linux/ipp/bin
+ 15 f psxevars intel-ipp parallel_studio_xe_2018.2.046/bin
+ 16 l psxevars intel-ipp parallel_studio_xe_2018.2.046
+ 17 f compilervars intel-ipp compilers_and_libraries_2018.2.199/linux/bin
+ 18 f compilervars intel-ipp compilers_and_libraries_2018/linux/bin
+ 19 l compilervars intel-ipp bin
+ 20 f mklvars intel-mkl compilers_and_libraries_2018.2.199/linux/mkl/bin
+ 21 f psxevars intel-mkl parallel_studio_xe_2018.2.046/bin
+ 22 l psxevars intel-mkl parallel_studio_xe_2018.2.046
+ 23 f compilervars intel-mkl compilers_and_libraries_2018.2.199/linux/bin
+ 24 f compilervars intel-mkl compilers_and_libraries_2018/linux/bin
+ 25 l compilervars intel-mkl bin
+ 26 f mpivars intel-mpi compilers_and_libraries_2018.2.199/linux/mpi_2019/intel64/bin
+ 27 f mpivars intel-mpi compilers_and_libraries_2018.2.199/linux/mpi/intel64/bin
+ 28 f psxevars intel-mpi parallel_studio_xe_2018.2.046/bin
+ 29 l psxevars intel-mpi parallel_studio_xe_2018.2.046
+ 30 f compilervars intel-mpi compilers_and_libraries_2018.2.199/linux/bin
+ 31 f compilervars intel-mpi compilers_and_libraries_2018/linux/bin
+ 32 l compilervars intel-mpi bin
+ 33 f apsvars intel-parallel-studio vtune_amplifier_2018.1.0.535340
+ 34 l apsvars intel-parallel-studio performance_snapshots_2018.1.0.535340
+ 35 f ippvars intel-parallel-studio compilers_and_libraries_2018.1.163/linux/ipp/bin
+ 36 f ippvars intel-parallel-studio composer_xe_2015.6.233/ipp/bin
+ 37 f mklvars intel-parallel-studio compilers_and_libraries_2018.1.163/linux/mkl/bin
+ 38 f mklvars intel-parallel-studio composer_xe_2015.6.233/mkl/bin
+ 39 f mpivars intel-parallel-studio compilers_and_libraries_2018.1.163/linux/mpi/intel64/bin
+ 40 f mpivars intel-parallel-studio compilers_and_libraries_2018.1.163/linux/mpirt/bin/ia32_lin
+ 41 f tbbvars intel-parallel-studio compilers_and_libraries_2018.1.163/linux/tbb/bin
+ 42 f tbbvars intel-parallel-studio composer_xe_2015.6.233/tbb/bin
+ 43 f daalvars intel-parallel-studio compilers_and_libraries_2018.1.163/linux/daal/bin
+ 44 f pstlvars intel-parallel-studio compilers_and_libraries_2018.1.163/linux/pstl/bin
+ 45 f psxevars intel-parallel-studio parallel_studio_xe_2018.1.038/bin
+ 46 l psxevars intel-parallel-studio parallel_studio_xe_2018.1.038
+ 47 f sep_vars intel-parallel-studio vtune_amplifier_2018.1.0.535340
+ 48 f sep_vars intel-parallel-studio vtune_amplifier_2018.1.0.535340/target/android_v4.1_x86_64
+ 49 f advixe-vars intel-parallel-studio advisor_2018.1.1.535164
+ 50 f amplxe-vars intel-parallel-studio vtune_amplifier_2018.1.0.535340
+ 51 f inspxe-vars intel-parallel-studio inspector_2018.1.1.535159
+ 52 f compilervars intel-parallel-studio compilers_and_libraries_2018.1.163/linux/bin
+ 53 f compilervars intel-parallel-studio compilers_and_libraries_2018/linux/bin
+ 54 l compilervars intel-parallel-studio bin
+ 55 f debuggervars intel-parallel-studio debugger_2018/bin
+
+
+********************
+MPI linkage
+********************
+
+
+Library selection
+~~~~~~~~~~~~~~~~~~~~~
+
+In the Spack code so far, the library selections for MPI are:
+
+::
+
+ libnames = ['libmpifort', 'libmpi']
+ if 'cxx' in self.spec.last_query.extra_parameters:
+ libnames = ['libmpicxx'] + libnames
+ return find_libraries(libnames,
+ root=self.component_lib_dir('mpi'),
+ shared=True, recursive=False)
+
+The problem is that there are multiple library versions under ``component_lib_dir``::
+
+ $ cd $I_MPI_ROOT
+ $ find . -name libmpi.so | sort
+ ./intel64/lib/debug/libmpi.so
+ ./intel64/lib/debug_mt/libmpi.so
+ ./intel64/lib/libmpi.so
+ ./intel64/lib/release/libmpi.so
+ ./intel64/lib/release_mt/libmpi.so
+
+"mt" refers to multi-threading, not in the explicit sense but in the sense of being thread-safe::
+
+ $ mpiifort -help | grep mt
+ -mt_mpi link the thread safe version of the Intel(R) MPI Library
+
+Well, why should we not inspect what the canonical script does? The wrapper
+has its own hardcoded "prefix=..." and can thus tell us what it will do, from a
+*wiped environment* no less!::
+
+ $ env - intel64/bin/mpiicc -show hello.c | ld-unwrap-args
+ icc 'hello.c' \
+ -I/opt/intel/compilers_and_libraries_2018.1.163/linux/mpi/intel64/include \
+ -L/opt/intel/compilers_and_libraries_2018.1.163/linux/mpi/intel64/lib/release_mt \
+ -L/opt/intel/compilers_and_libraries_2018.1.163/linux/mpi/intel64/lib \
+ -Xlinker --enable-new-dtags \
+ -Xlinker -rpath=/opt/intel/compilers_and_libraries_2018.1.163/linux/mpi/intel64/lib/release_mt \
+ -Xlinker -rpath=/opt/intel/compilers_and_libraries_2018.1.163/linux/mpi/intel64/lib \
+ -Xlinker -rpath=/opt/intel/mpi-rt/2017.0.0/intel64/lib/release_mt \
+ -Xlinker -rpath=/opt/intel/mpi-rt/2017.0.0/intel64/lib \
+ -lmpifort \
+ -lmpi \
+ -lmpigi \
+ -ldl \
+ -lrt \
+ -lpthread
+
+
+MPI Wrapper options
+~~~~~~~~~~~~~~~~~~~~~
+
+For reference, here's the wrapper's builtin help output::
+
+ $ mpiifort -help
+ Simple script to compile and/or link MPI programs.
+ Usage: mpiifort [options] <files>
+ ----------------------------------------------------------------------------
+ The following options are supported:
+ -fc=<name> | -f90=<name>
+ specify a FORTRAN compiler name: i.e. -fc=ifort
+ -echo print the scripts during their execution
+ -show show command lines without real calling
+ -config=<name> specify a configuration file: i.e. -config=ifort for mpif90-ifort.conf file
+ -v print version info of mpiifort and its native compiler
+ -profile=<name> specify a profile configuration file (an MPI profiling
+ library): i.e. -profile=myprofile for the myprofile.cfg file.
+ As a special case, lib<name>.so or lib<name>.a may be used
+ if the library is found
+ -check_mpi link against the Intel(R) Trace Collector (-profile=vtmc).
+ -static_mpi link the Intel(R) MPI Library statically
+ -mt_mpi link the thread safe version of the Intel(R) MPI Library
+ -ilp64 link the ILP64 support of the Intel(R) MPI Library
+ -no_ilp64 disable ILP64 support explicitly
+ -fast the same as -static_mpi + pass -fast option to a compiler.
+ -t or -trace
+ link against the Intel(R) Trace Collector
+ -trace-imbalance
+ link against the Intel(R) Trace Collector imbalance library
+ (-profile=vtim)
+ -dynamic_log link against the Intel(R) Trace Collector dynamically
+ -static use static linkage method
+ -nostrip turn off the debug information stripping during static linking
+ -O enable optimization
+ -link_mpi=<name>
+ link against the specified version of the Intel(R) MPI Library
+ All other options will be passed to the compiler without changing.
+ ----------------------------------------------------------------------------
+ The following environment variables are used:
+ I_MPI_ROOT the Intel(R) MPI Library installation directory path
+ I_MPI_F90 or MPICH_F90
+ the path/name of the underlying compiler to be used
+ I_MPI_FC_PROFILE or I_MPI_F90_PROFILE or MPIF90_PROFILE
+ the name of profile file (without extension)
+ I_MPI_COMPILER_CONFIG_DIR
+ the folder which contains configuration files *.conf
+ I_MPI_TRACE_PROFILE
+ specify a default profile for the -trace option
+ I_MPI_CHECK_PROFILE
+ specify a default profile for the -check_mpi option
+ I_MPI_CHECK_COMPILER
+ enable compiler setup checks
+ I_MPI_LINK specify the version of the Intel(R) MPI Library
+ I_MPI_DEBUG_INFO_STRIP
+ turn on/off the debug information stripping during static linking
+ I_MPI_FCFLAGS
+ special flags needed for compilation
+ I_MPI_LDFLAGS
+ special flags needed for linking
+ ----------------------------------------------------------------------------
+
+
+Side Note: MPI version divergence in 2015 release
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The package ``intel-parallel-studio@cluster.2015.6`` contains both a full MPI
+development version in ``$prefix/impi`` and an MPI Runtime under the
+``composer_xe*`` suite directory. Curiously, these have *different versions*,
+with release dates nearly 1 year apart::
+
+ $ $SPACK_ROOT/...uaxaw7/impi/5.0.3.049/intel64/bin/mpiexec --version
+ Intel(R) MPI Library for Linux* OS, Version 5.0 Update 3 Build 20150804 (build id: 12452)
+ Copyright (C) 2003-2015, Intel Corporation. All rights reserved.
+
+ $ $SPACK_ROOT/...uaxaw7/composer_xe_2015.6.233/mpirt/bin/intel64/mpiexec --version
+ Intel(R) MPI Library for Linux* OS, Version 5.0 Update 1 Build 20140709
+ Copyright (C) 2003-2014, Intel Corporation. All rights reserved.
+
+I'm not sure what to make of it.
+
+
+**************
+macOS support
+**************
+
+- On macOS, the Spack methods here only include support for integrating an
+  externally installed MKL.
+
+- URLs in child packages will be Linux-specific; macOS download packages
+ are located in differently numbered dirs and are named m_*.dmg.
diff --git a/lib/spack/spack/build_systems/intel.py b/lib/spack/spack/build_systems/intel.py
index 3a67da7e09..0c5707a8ef 100644
--- a/lib/spack/spack/build_systems/intel.py
+++ b/lib/spack/spack/build_systems/intel.py
@@ -24,22 +24,79 @@
##############################################################################
import os
+import sys
+import glob
+import tempfile
+import re
+import inspect
import xml.etree.ElementTree as ET
+import llnl.util.tty as tty
-from llnl.util.filesystem import install
-from spack.package import PackageBase, run_after
+from llnl.util.filesystem import \
+ install, ancestor, filter_file, \
+ HeaderList, find_headers, \
+ LibraryList, find_libraries, find_system_libraries
+
+from spack.version import Version, ver
+from spack.package import PackageBase, run_after, InstallError
from spack.util.executable import Executable
+from spack.util.prefix import Prefix
+from spack.build_environment import dso_suffix
+from spack.environment import EnvironmentModifications
+
+
+# A couple of utility functions that might be useful in general. If so, they
+# should really be defined elsewhere, unless deemed heretical.
+# (Or na"ive on my part).
+
+def debug_print(msg, *args):
+ '''Prints a message (usu. a variable) and the callers' names for a couple
+ of stack frames.
+ '''
+ # https://docs.python.org/2/library/inspect.html#the-interpreter-stack
+ stack = inspect.stack()
+ _func_name = 3
+ tty.debug("%s.%s:\t%s" % (stack[2][_func_name], stack[1][_func_name], msg),
+ *args)
+
+
+def raise_lib_error(*args):
+ '''Bails out with an error message. Shows args after the first as one per
+ line, tab-indented, useful for long paths to line up and stand out.
+ '''
+ raise InstallError("\n\t".join(str(i) for i in args))
+
+def _expand_fields(s):
+ '''[Experimental] Expand arch-related fields in a string, typically a
+ filename.
-def _valid_components():
- """A generator that yields valid components."""
+ Supported fields and their typical expansions are::
- tree = ET.parse('pset/mediaconfig.xml')
- root = tree.getroot()
+ {platform} linux, mac
+ {arch} intel64 (including on Mac)
+ {libarch} intel64, empty on Mac
+ {bits} 64
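+
+    For example (an illustrative sketch, assuming a Linux platform)::
+
+        _expand_fields('mpi/{libarch}/bin/mpivars.sh')
+        # -> 'mpi/intel64/bin/mpivars.sh'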
- components = root.findall('.//Abbr')
- for component in components:
- yield component.text
+ '''
+ # Python-native string formatting requires arg list counts to match the
+ # replacement field count; optional fields are far easier with regexes.
+
+ _bits = '64'
+ _arch = 'intel64' # TBD: ia32
+
+ if 'linux' in sys.platform: # NB: linux2 vs. linux
+ s = re.sub('{platform}', 'linux', s)
+ s = re.sub('{libarch}', _arch, s)
+ elif 'darwin' in sys.platform:
+ s = re.sub('{platform}', 'mac', s)
+ s = re.sub('{libarch}', '', s) # no arch dirs are used (as of 2018)
+ # elif 'win' in sys.platform: # TBD
+ # s = re.sub('{platform}', 'windows', s)
+
+ s = re.sub('{arch}', _arch, s)
+ s = re.sub('{bits}', _bits, s)
+ return s
class IntelPackage(PackageBase):
@@ -51,7 +108,7 @@ class IntelPackage(PackageBase):
2. :py:meth:`~.IntelPackage.install`
They both have sensible defaults and for many packages the
- only thing necessary will be to override ``setup_environment``
+ only thing necessary will be to override setup_environment
to set the appropriate environment variables.
"""
#: Phases of an Intel package
@@ -61,15 +118,31 @@ class IntelPackage(PackageBase):
#: system base class
build_system_class = 'IntelPackage'
- #: By default, we assume that all Intel software requires a license.
- #: This can be overridden for packages that do not require a license.
- license_required = True
+ #: A dict that maps Spack version specs to release years, needed to infer
+ #: the installation directory layout for pre-2016 versions in the family of
+ #: Intel packages.
+ #
+ # Like any property, it can be overridden in client packages, should older
+ # versions ever be added there. The initial dict here contains the
+ # packages defined in Spack as of 2018-04. Keys could conceivably overlap
+ # but preferably should not - only the first key in hash traversal order
+ # that satisfies self.spec will be used.
+ version_years = {
+ # intel-daal is versioned 2016 and later, no divining is needed
+ 'intel-ipp@9.0:9.99': 2016,
+ 'intel-mkl@11.3.0:11.3.999': 2016,
+ 'intel-mpi@5.1:5.99': 2016,
+ }
- #: Comment symbol used in the ``license.lic`` file
- license_comment = '#'
+ @property
+ def license_required(self):
+ # The Intel libraries are provided without requiring a license as of
+ # version 2017.2. Trying to specify one anyway will fail. See:
+ # https://software.intel.com/en-us/articles/free-ipsxe-tools-and-libraries
+ return self._has_compilers or self.version < ver('2017.2')
- #: Location where Intel searches for a license file
- license_files = ['Licenses/license.lic']
+ #: Comment symbol used in the license.lic file
+ license_comment = '#'
#: Environment variables that Intel searches for a license file
license_vars = ['INTEL_LICENSE_FILE']
@@ -77,116 +150,1115 @@ class IntelPackage(PackageBase):
#: URL providing information on how to acquire a license key
license_url = 'https://software.intel.com/en-us/articles/intel-license-manager-faq'
- #: Components of the package to install.
- #: By default, install 'ALL' components.
- components = ['ALL']
+ #: Location where Intel searches for a license file
+ @property
+ def license_files(self):
+ dirs = ['Licenses']
+
+ if self._has_compilers:
+ dirs.append(self.component_bin_dir('compiler'))
+ for variant, component_suite_dir in {
+ '+advisor': 'advisor',
+ '+inspector': 'inspector',
+ '+itac': 'itac',
+ '+vtune': 'vtune_amplifier',
+ }.items():
+ if variant in self.spec:
+ dirs.append(self.normalize_path(
+ 'licenses', component_suite_dir, relative=True))
+
+ files = [os.path.join(d, 'license.lic') for d in dirs]
+ return files
+
+ #: Components to install (list of name patterns from pset/mediaconfig.xml)
+ # NB: Renamed from plain components() for coding and maintainability.
@property
- def _filtered_components(self):
- """Returns a list or set of valid components that match
- the requested components from ``components``."""
+ def pset_components(self):
+ # Do not detail single-purpose client packages.
+ if not self._has_compilers:
+ return ['ALL']
- # Don't filter 'ALL'
- if self.components == ['ALL']:
- return self.components
+ # tty.warn('DEBUG: installing ALL components')
+ # return ['ALL']
+
+ # Always include compilers and closely related components.
+ # Pre-2016 compiler components have different names - throw in all.
+ # Later releases have overlapping minor parts that differ by "edition".
+ # NB: The spack package 'intel' is a subset of
+ # 'intel-parallel-studio@composer' without the lib variants.
+ c = ' intel-icc intel-ifort' \
+ ' intel-ccomp intel-fcomp intel-comp-' \
+ ' intel-compilerproc intel-compilerprof intel-compilerpro-' \
+ ' intel-psxe intel-openmp'
+
+ additions_for = {
+ 'cluster': ' intel-icsxe',
+ 'professional': ' intel-ips-',
+ 'composer': ' intel-compxe',
+ }
+ if self._edition in additions_for:
+ c += additions_for[self._edition]
+
+ for variant, components_to_add in {
+ '+daal': ' intel-daal', # Data Analytics Acceleration Lib
+            '+gdb': ' intel-gdb',       # GNU GDB, as shipped by Intel
+            '+ipp': ' intel-ipp intel-crypto-ipp',  # Integr. Perf. Primitives
+ '+mkl': ' intel-mkl', # Math Kernel Library
+ '+mpi': ' intel-mpi intel-imb', # MPI runtime, SDK, benchm.
+ '+tbb': ' intel-tbb', # Threading Building Blocks
+ '+advisor': ' intel-advisor',
+ '+clck': ' intel_clck', # Cluster Checker
+ '+inspector': ' intel-inspector',
+ '+itac': ' intel-itac intel-ta intel-tc'
+ ' intel-trace-analyzer intel-trace-collector',
+ # Trace Analyzer and Collector
+ '+vtune': ' intel-vtune-amplifier', # VTune
+ }.items():
+ if variant in self.spec:
+ c += components_to_add
+
+ debug_print(c)
+ return c.split()
+
+ # ---------------------------------------------------------------------
+ # Utilities
+ # ---------------------------------------------------------------------
+ @property
+ def _filtered_components(self):
+ '''Expands the list of desired component patterns to the exact names
+ present in the given download.
+ '''
+ c = self.pset_components
+ if 'ALL' in c or 'DEFAULTS' in c: # No filter needed
+ return c
# mediaconfig.xml is known to contain duplicate components.
# If more than one copy of the same component is used, you
# will get an error message about invalid components.
- # Use a set to store components to prevent duplicates.
- matches = set()
+ # Use sets to prevent duplicates and for efficient traversal.
+ requested = set(c)
+ confirmed = set()
+
+ # NB: To get a reasonable overview in pretty much the documented way:
+ #
+ # grep -E '<Product|<Abbr|<Name>..[a-z]' pset/mediaconfig.xml
+ #
+ # https://software.intel.com/en-us/articles/configuration-file-format
+ #
+ xmltree = ET.parse('pset/mediaconfig.xml')
+ for entry in xmltree.getroot().findall('.//Abbr'): # XPath expression
+ name_present = entry.text
+ for name_requested in requested:
+ if name_present.startswith(name_requested):
+ confirmed.add(name_present)
+
+ return list(confirmed)
+
+ @property
+ def intel64_int_suffix(self):
+ '''Provide the suffix for Intel library names to match a client
+ application's desired int size, conveyed by the active spec variant.
+ The possible suffixes and their meanings are:
+
+        ``ilp64`` all of int, long, and pointer are 64 bit;
+        `` lp64`` only long and pointer are 64 bit; int will be 32-bit.
+ '''
+ if '+ilp64' in self.spec:
+ return 'ilp64'
+ else:
+ return 'lp64'
+
+ @property
+ def _has_compilers(self):
+ return self.name in ['intel', 'intel-parallel-studio']
+
+ @property
+ def _edition(self):
+ if self.name == 'intel-parallel-studio':
+ return self.version[0] # clearer than .up_to(1), I think.
+ elif self.name == 'intel':
+ return 'composer'
+ else:
+ return ''
+
+ @property
+ def version_yearlike(self):
+ '''Return the version in a unified style, suitable for Version class
+ conditionals.
+ '''
+ # Input data for this routine: self.version
+ # Returns: YYYY.Nupdate[.Buildseq]
+ #
+ # Specifics by package:
+ #
+ # Package Format of self.version
+ # ------------------------------------------------------------
+ # 'intel-parallel-studio' <edition>.YYYY.Nupdate
+ # 'intel' YY.0.Nupdate (some assigned ad-hoc)
+ # Recent lib packages YYYY.Nupdate.Buildseq
+ # Early lib packages Major.Minor.Patch.Buildseq
+ # ------------------------------------------------------------
+ #
+ # Package Output
+ # ------------------------------------------------------------
+ # 'intel-parallel-studio' YYYY.Nupdate
+ # 'intel' YYYY.Nupdate
+ # Recent lib packages YYYY.Nupdate.Buildseq
+ # Known early lib packages YYYY.Minor.Patch.Buildseq (*)
+ # Unknown early lib packages (2000 + Major).Minor.Patch.Buildseq
+ # ----------------------------------------------------------------
+ #
+ # (*) YYYY is taken from @property "version_years" (a dict of specs)
+ #
+ try:
+ if self.name == 'intel':
+ # Has a "Minor" version element, but it is always set as 0. To
+ # be useful for comparisons, drop it and get YYYY.Nupdate.
+ v_tail = self.version[2:] # coerced just fine via __getitem__
+ else:
+ v_tail = self.version[1:]
+ except IndexError:
+ # Hmm - this happens on "spack install intel-mkl@11".
+ # I thought concretization picks an actual version??
+ return self.version # give up
+
+ if self.name == 'intel-parallel-studio':
+ return v_tail
+
+ v_year = self.version[0]
+ if v_year < 2000:
+ # Shoehorn Major into release year until we know better.
+ v_year += 2000
+ for spec, year in self.version_years.items():
+ if self.spec.satisfies(spec):
+ v_year = year
+ break
+
+ return ver('%s.%s' % (v_year, v_tail))
+
+ # ---------------------------------------------------------------------
+ # Directory handling common to all Intel components
+ # ---------------------------------------------------------------------
+ # For reference: classes using IntelPackage, as of Spack-0.11:
+ #
+ # intel/ intel-ipp/ intel-mpi/
+ # intel-daal/ intel-mkl/ intel-parallel-studio/
+ #
+ # Not using class IntelPackage:
+ # intel-gpu-tools/ intel-mkl-dnn/ intel-tbb/
+ #
+ def normalize_suite_dir(self, suite_dir_name, version_globs=['*.*.*']):
+ '''Returns the version-specific and absolute path to the directory of
+ an Intel product or a suite of product components.
+
+ Parameters:
+
+ suite_dir_name (str):
+ Name of the product directory, without numeric version.
+
+ - Examples::
+
+ composer_xe, parallel_studio_xe, compilers_and_libraries
+
+ The following will work as well, even though they are not
+ directly targets for Spack installation::
+
+ advisor_xe, inspector_xe, vtune_amplifier_xe,
+ performance_snapshots (new name for vtune as of 2018)
+
+ These are single-component products without subordinate
+ components and are normally made available to users by a
+ toplevel psxevars.sh or equivalent file to source (and thus by
+ the modulefiles that Spack produces).
+
+ version_globs (list of str): Suffix glob patterns (most specific
+ first) expected to qualify suite_dir_name to its fully
+ version-specific install directory (as opposed to a
+ compatibility directory or symlink).
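+
+        Example (an illustrative sketch for a Spack-internal install; the
+        version-specific directory name will vary)::
+
+            self.normalize_suite_dir('compilers_and_libraries')
+            # -> '<prefix>/compilers_and_libraries_2018.1.163'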
+ '''
+ # See ./README-intel.rst for background and analysis of dir layouts.
+
+ d = self.prefix
+
+ # Distinguish between product installations that were done external to
+ # Spack (integrated via packages.yaml) and Spack-internal ones. The
+ # resulting prefixes may differ in directory depth and specificity.
+ unversioned_dirname = ''
+ if suite_dir_name and suite_dir_name in d:
+ # If e.g. MKL was installed outside of Spack, it is likely just one
+ # product or product component among possibly many other Intel
+ # products and their releases that were installed in sibling or
+ # cousin directories. In such cases, the prefix given to Spack
+ # will inevitably be a highly product-specific and preferably fully
+ # version-specific directory. This is what we want and need, and
+ # nothing more specific than that, i.e., if needed, convert, e.g.:
+ # .../compilers_and_libraries*/* -> .../compilers_and_libraries*
+ d = re.sub('(%s%s.*?)%s.*' %
+ (os.sep, re.escape(suite_dir_name), os.sep), r'\1', d)
+
+ # The Intel installer scripts try hard to place compatibility links
+ # named like this in the install dir to convey upgrade benefits to
+ # traditional client apps. But such a generic name can be trouble
+ # when given to Spack: the link target is bound to change outside
+ # of Spack's purview and when it does, the outcome of subsequent
+ # builds of dependent packages may be affected. (Though Intel has
+ # been remarkably good at backward compatibility.)
+ # I'm not sure if Spack's package hashing includes link targets.
+ if d.endswith(suite_dir_name):
+ # NB: This could get tiresome without a seen++ test.
+ # tty.warn('Intel product found in a version-neutral directory'
+ # ' - future builds may not be reproducible.')
+ #
+ # Simply doing realpath() would not be enough, because:
+ # compilers_and_libraries -> compilers_and_libraries_2018
+ # which is mostly a staging directory for symlinks (see next).
+ unversioned_dirname = d
+ else:
+ # By contrast, a Spack-internal MKL installation will inherit its
+ # prefix from install.sh of Intel's package distribution, where it
+ # means the high-level installation directory that is specific to
+ # the *vendor* (think of the default "/opt/intel"). We must now
+ # step down into the *product* directory to get the usual
+ # hierarchy. But let's not do that in haste ...
+ #
+ # For a Spack-born install, the fully-qualified release directory
+ # desired above may seem less important since product upgrades
+ # won't land in the same parent. However, only the fully qualified
+ # directory contains the regular files for the compiler commands:
+ #
+ # $ ls -lF <HASH>/compilers_and_libraries*/linux/bin/intel64/icc
+ #
+ # <HASH>/compilers_and_libraries_2018.1.163/linux/bin/intel64/icc*
+ # A regular file in the actual release directory. Bingo!
+ #
+ # <HASH>/compilers_and_libraries_2018/linux/bin/intel64/icc -> ...
+ # A symlink - no good. Note that "compilers_and_libraries_2018/"
+ # is itself a directory (not symlink) but it merely holds a
+ # compatibility dir hierarchy with lots of symlinks into the
+ # release dir.
+ #
+ # <HASH>/compilers_and_libraries/linux/bin/intel64/icc -> ...
+ # Ditto.
+ #
+            # Now, the Spack packages for MKL and MPI use version
+ # triplets, but the one for intel-parallel-studio does not.
+ # So, we can't have it quite as easy as:
+ # d = Prefix(d.append('compilers_and_libraries_' + self.version))
+ # Alright, let's see what we can find instead:
+ unversioned_dirname = os.path.join(d, suite_dir_name)
+
+ if unversioned_dirname:
+ for g in version_globs:
+ try_glob = unversioned_dirname + g
+ debug_print('trying %s' % try_glob)
+
+ matching_dirs = sorted(glob.glob(try_glob))
+ # NB: Python glob() returns results in arbitrary order - ugh!
+ # NB2: sorted() is a shortcut that is NOT number-aware.
+
+ if matching_dirs:
+ debug_print('found %d:' % len(matching_dirs),
+ matching_dirs)
+ # Take the highest and thus presumably newest match, which
+ # better be the sole one anyway.
+ d = matching_dirs[-1]
+ break
+
+ if not matching_dirs:
+ # No match -- this *will* happen during pre-build call to
+ # setup_environment() when the destination dir is still empty.
+ # Return a sensible value anyway.
+ d = unversioned_dirname
+
+ debug_print(d)
+ return Prefix(d)
+
+ def normalize_path(self, component_path, component_suite_dir=None,
+ relative=False):
+ '''Returns the absolute or relative path to a component or file under a
+ component suite directory.
+
+ Intel's product names, scope, and directory layout changed over the
+ years. This function provides a unified interface to their directory
+ names.
+
+ Parameters:
+
+ component_path (str): a component name like 'mkl', or 'mpi', or a
+ deeper relative path.
+
+            component_suite_dir (str): *Unversioned* name of the expected
+ parent directory of component_path. When absent or `None`, an
+ appropriate default will be used. A present but empty string
+ `""` requests that `component_path` refer to `self.prefix`
+ directly.
+
+ Typical values: `compilers_and_libraries`, `composer_xe`,
+ `parallel_studio_xe`.
+
+ Also supported: `advisor`, `inspector`, `vtune`. The actual
+ directory name for these suites varies by release year. The
+ name will be corrected as needed for use in the return value.
+
+ relative (bool): When True, return path relative to self.prefix,
+ otherwise, return an absolute path (the default).
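+
+        Example (illustrative; the version-specific directory will vary by
+        installation)::
+
+            self.normalize_path('mkl', 'compilers_and_libraries')
+            # -> '<prefix>/compilers_and_libraries_2018.1.163/linux/mkl'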
+ '''
+ # Design note: Choosing the default for `component_suite_dir` was a bit
+ # tricky since there better be a sensible means to specify direct
+ # parentage under self.prefix (even though you normally shouldn't need
+ # a function for that). I chose "" to allow that case be represented,
+ # and 'None' or the absence of the kwarg to represent the most relevant
+ # case for the time of writing.
+ #
+ # In the 2015 releases (the earliest in Spack as of 2018), there were
+ # nominally two separate products that provided the compilers:
+ # "Composer" as lower tier, and "Parallel Studio" as upper tier. In
+ # Spack, we justifiably retcon both as "intel-parallel-studio@composer"
+ # and "...@cluster", respectively. Both of these use the older
+ # "composer_xe" dir layout, as do their virtual package personas.
+ #
+ # All other "intel-foo" packages in Spack as of 2018-04 use the
+ # "compilers_and_libraries" layout, including the 2016 releases that
+ # are not natively versioned by year.
+
+ cs = component_suite_dir
+ if cs is None and component_path.startswith('ism'):
+ cs = 'parallel_studio_xe'
+
+ v = self.version_yearlike
+
+ # Glob variants to complete component_suite_dir.
+ # Helper var for older MPI versions - those are reparented, with each
+ # version in their own version-named dir.
+ standalone_glob = '[1-9]*.*.*'
+
+ # Most other components; try most specific glob first.
+ # flake8 is far too opinionated about lists - ugh.
+ normalize_kwargs = {
+ 'version_globs': [
+ '_%s' % self.version,
+ '_%s.*' % v.up_to(2), # should be: YYYY.Nupdate
+ '_*.*.*', # last resort
+ ]
+ }
+ for rename_rule in [
+ # cs given as arg, in years, dir actually used, [version_globs]
+ [None, ':2015', 'composer_xe'],
+ [None, '2016:', 'compilers_and_libraries'],
+ ['advisor', ':2016', 'advisor_xe'],
+ ['inspector', ':2016', 'inspector_xe'],
+ ['vtune_amplifier', ':2017', 'vtune_amplifier_xe'],
+ ['vtune', ':2017', 'vtune_amplifier_xe'], # alt.
+ ['itac', ':', 'itac', [os.sep + standalone_glob]],
+ ]:
+ if cs == rename_rule[0] and v.satisfies(ver(rename_rule[1])):
+ cs = rename_rule[2]
+ if len(rename_rule) > 3:
+ normalize_kwargs = {'version_globs': rename_rule[3]}
+ break
+
+ d = self.normalize_suite_dir(cs, **normalize_kwargs)
+
+ # Help find components not located directly under d.
+ # NB: ancestor() not well suited if version_globs may contain os.sep .
+ parent_dir = re.sub(os.sep + re.escape(cs) + '.*', '', d)
+
+ reparent_as = {}
+ if cs == 'compilers_and_libraries': # must qualify further
+ d = os.path.join(d, _expand_fields('{platform}'))
+ elif cs == 'composer_xe':
+ reparent_as = {'mpi': 'impi'}
+ # ignore 'imb' (MPI Benchmarks)
+
+ for nominal_p, actual_p in reparent_as.items():
+ if component_path.startswith(nominal_p):
+ dirs = glob.glob(
+ os.path.join(parent_dir, actual_p, standalone_glob))
+ debug_print('reparent dirs: %s' % dirs)
+ # Brazenly assume last match is the most recent version;
+ # convert back to relative of parent_dir, and re-assemble.
+ rel_dir = dirs[-1].split(parent_dir + os.sep, 1)[-1]
+ component_path = component_path.replace(nominal_p, rel_dir, 1)
+ d = parent_dir
+
+ d = os.path.join(d, component_path)
+
+ if relative:
+ d = os.path.relpath(os.path.realpath(d), parent_dir)
+
+ debug_print(d)
+ return d
+
+ def component_bin_dir(self, component, **kwargs):
+ d = self.normalize_path(component, **kwargs)
+
+ if component == 'compiler': # bin dir is always under PARENT
+ d = os.path.join(ancestor(d), 'bin', _expand_fields('{libarch}'))
+ d = d.rstrip(os.sep) # cosmetics, when {libarch} is empty
+ # NB: Works fine even with relative=True, e.g.:
+ # composer_xe/compiler -> composer_xe/bin/intel64
+ elif component == 'mpi':
+ d = os.path.join(d, _expand_fields('{libarch}'), 'bin')
+ else:
+ d = os.path.join(d, 'bin')
+ debug_print(d)
+ return d
+
+ def component_lib_dir(self, component, **kwargs):
+ '''Provide directory suitable for find_libraries() and
+ SPACK_COMPILER_EXTRA_RPATHS.
+ '''
+ d = self.normalize_path(component, **kwargs)
+
+ if component == 'mpi':
+ d = os.path.join(d, _expand_fields('{libarch}'), 'lib')
+ else:
+ d = os.path.join(d, 'lib', _expand_fields('{libarch}'))
+ d = d.rstrip(os.sep) # cosmetics, when {libarch} is empty
+
+ if component == 'tbb': # must qualify further for abi
+ d = os.path.join(d, self._tbb_abi)
+
+ debug_print(d)
+ return d
+
+ def component_include_dir(self, component, **kwargs):
+ d = self.normalize_path(component, **kwargs)
+
+ if component == 'mpi':
+ d = os.path.join(d, _expand_fields('{libarch}'), 'include')
+ else:
+ d = os.path.join(d, 'include')
+
+ debug_print(d)
+ return d
+
+ @property
+ def file_to_source(self):
+ '''Full path of file to source for initializing an Intel package.
+ A client package could override as follows:
+ ` @property`
+ ` def file_to_source(self):`
+ ` return self.normalize_path("apsvars.sh", "vtune_amplifier")`
+ '''
+ vars_file_info_for = {
+ # key (usu. spack package name) -> [rel_path, component_suite_dir]
+ # Extension note: handle additions by Spack name or ad-hoc keys.
+ '@early_compiler': ['bin/compilervars', None],
+ 'intel-parallel-studio': ['bin/psxevars', 'parallel_studio_xe'],
+ 'intel': ['bin/compilervars', None],
+ 'intel-daal': ['daal/bin/daalvars', None],
+ 'intel-ipp': ['ipp/bin/ippvars', None],
+ 'intel-mkl': ['mkl/bin/mklvars', None],
+ 'intel-mpi': ['mpi/{libarch}/bin/mpivars', None],
+ }
+ key = self.name
+ if self.version_yearlike.satisfies(ver(':2015')):
+ # Same file as 'intel' but 'None' for component_suite_dir will
+ # resolve differently. Listed as a separate entry to serve as
+ # example and to avoid pitfalls upon possible refactoring.
+ key = '@early_compiler'
- for valid in _valid_components():
- for requested in self.components:
- if valid.startswith(requested):
- matches.add(valid)
+ f, component_suite_dir = vars_file_info_for[key]
+ f = _expand_fields(f) + '.sh'
+ # TODO?? win32 would have to handle os.sep, '.bat' (unless POSIX??)
- return matches
+ f = self.normalize_path(f, component_suite_dir)
+ return f
+ # ---------------------------------------------------------------------
+ # Threading, including (WIP) support for virtual 'tbb'
+ # ---------------------------------------------------------------------
+ @property
+ def openmp_libs(self):
+ '''Supply LibraryList for linking OpenMP'''
+
+ if '%intel' in self.spec:
+ # NB: Hunting down explicit library files may be the Spack way of
+ # doing things, but be aware that "{icc|ifort} --help openmp"
+ # steers us towards options instead: -qopenmp-link={dynamic,static}
+
+ omp_libnames = ['libiomp5']
+ omp_libs = find_libraries(
+ omp_libnames,
+ root=self.component_lib_dir('compiler'),
+ shared=('+shared' in self.spec))
+ # Note about search root here: For MKL, the directory
+ # "$MKLROOT/../compiler" will be present even for an MKL-only
+ # product installation (as opposed to one being ghosted via
+            # packages.yaml), specifically to provide the 'iomp5' libs.
+
+        elif '%gcc' in self.spec:
+            # NB: Only the GNU OpenMP runtime is handled here; name it so the
+            # error check below stays meaningful for this branch as well.
+            omp_libnames = ['libgomp']
+            gcc = Executable(self.compiler.cc)
+            omp_lib_path = gcc(
+                '--print-file-name', 'libgomp.%s' % dso_suffix, output=str)
+            omp_libs = LibraryList(omp_lib_path)
+        else:
+            raise_lib_error(
+                'Cannot determine OpenMP libraries for the compiler in use.')
+
+        if len(omp_libs) < 1:
+            raise_lib_error('Cannot locate OpenMP libraries:', omp_libnames)
+
+ debug_print(omp_libs)
+ return omp_libs
+
+ @property
+ def tbb_libs(self):
+ '''Supply LibraryList for linking TBB'''
+
+ # TODO: When is 'libtbbmalloc' needed?
+ tbb_lib = find_libraries(
+ ['libtbb'], root=self.component_lib_dir('tbb'))
+ # NB: Like icc with -qopenmp, so does icpc steer us towards using an
+ # option: "icpc -tbb"
+
+ # TODO: clang(?)
+ gcc = Executable('gcc') # must be gcc, not self.compiler.cc
+ cxx_lib_path = gcc(
+ '--print-file-name', 'libstdc++.%s' % dso_suffix, output=str)
+
+ libs = tbb_lib + LibraryList(cxx_lib_path)
+ debug_print(libs)
+ return libs
+
+ @property
+ def _tbb_abi(self):
+ '''Select the ABI needed for linking TBB'''
+ # Match the available gcc, as it's done in tbbvars.sh.
+ gcc = Executable('gcc')
+ matches = re.search(r'(gcc|LLVM).* ([0-9]+\.[0-9]+\.[0-9]+).*',
+ gcc('--version', output=str), re.I | re.M)
+ abi = ''
+ if sys.platform == 'darwin':
+ pass
+ elif matches:
+ # TODO: Confirm that this covers clang (needed on Linux only)
+ gcc_version = Version(matches.groups()[1])
+ if gcc_version >= ver('4.7'):
+ abi = 'gcc4.7'
+ elif gcc_version >= ver('4.4'):
+ abi = 'gcc4.4'
+ else:
+ abi = 'gcc4.1' # unlikely, one hopes.
+
+ # Alrighty then ...
+ debug_print(abi)
+ return abi
+
+ # ---------------------------------------------------------------------
+ # Support for virtual 'blas/lapack/scalapack'
+ # ---------------------------------------------------------------------
+ @property
+ def blas_libs(self):
+ # Main magic here.
+ # For reference, see The Intel Math Kernel Library Link Line Advisor:
+ # https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor/
+
+ mkl_integer = 'libmkl_intel_' + self.intel64_int_suffix
+
+ if self.spec.satisfies('threads=openmp'):
+ if '%intel' in self.spec:
+ mkl_threading = 'libmkl_intel_thread'
+ elif '%gcc' in self.spec:
+ mkl_threading = 'libmkl_gnu_thread'
+            threading_engine_libs = self.openmp_libs
+ elif self.spec.satisfies('threads=tbb'):
+ mkl_threading = 'libmkl_tbb_thread'
+            threading_engine_libs = self.tbb_libs
+ elif self.spec.satisfies('threads=none'):
+ mkl_threading = 'libmkl_sequential'
+ threading_engine_libs = LibraryList([])
+ else:
+ raise_lib_error('Cannot determine MKL threading libraries.')
+
+ mkl_libnames = [mkl_integer, mkl_threading, 'libmkl_core']
+ mkl_libs = find_libraries(
+ mkl_libnames,
+ root=self.component_lib_dir('mkl'),
+ shared=('+shared' in self.spec))
+ debug_print(mkl_libs)
+
+ if len(mkl_libs) < 3:
+ raise_lib_error('Cannot locate core MKL libraries:', mkl_libnames)
+
+ # The Intel MKL link line advisor recommends these system libraries
+ system_libs = find_system_libraries(
+ 'libpthread libm libdl'.split(),
+ shared=('+shared' in self.spec))
+ debug_print(system_libs)
+
+ return mkl_libs + threading_engine_libs + system_libs
+
+ @property
+ def lapack_libs(self):
+ return self.blas_libs
+
+ @property
+ def scalapack_libs(self):
+ # Intel MKL does not directly depend on MPI but the BLACS library
+ # which underlies ScaLapack does. It comes in several personalities;
+ # we must supply a personality matching the MPI implementation that
+ # is active for the root package that asked for ScaLapack.
+ spec_root = self.spec.root
+ if sys.platform == 'darwin' and '^mpich' in spec_root:
+ # The only supported choice for MKL 2018 on Mac.
+ blacs_lib = 'libmkl_blacs_mpich'
+ elif '^openmpi' in spec_root:
+ blacs_lib = 'libmkl_blacs_openmpi'
+ elif '^mpich@1' in spec_root:
+ # Was supported only up to 2015.
+ blacs_lib = 'libmkl_blacs'
+ elif ('^mpich@2:' in spec_root or
+ '^mvapich2' in spec_root or
+ '^intel-mpi' in spec_root):
+ blacs_lib = 'libmkl_blacs_intelmpi'
+ elif '^mpt' in spec_root:
+ blacs_lib = 'libmkl_blacs_sgimpt'
+ else:
+ raise_lib_error('Cannot find a BLACS library for the given MPI.')
+
+ int_suff = '_' + self.intel64_int_suffix
+ scalapack_libnames = [
+ 'libmkl_scalapack' + int_suff,
+ blacs_lib + int_suff,
+ ]
+ sca_libs = find_libraries(
+ scalapack_libnames,
+ root=self.component_lib_dir('mkl'),
+ shared=('+shared' in self.spec))
+ debug_print(sca_libs)
+
+ if len(sca_libs) < 2:
+ raise_lib_error(
+ 'Cannot locate ScaLapack/BLACS libraries:', scalapack_libnames)
+ # NB: ScaLapack is installed as "cluster" components within MKL or
+ # MKL-encompassing products. But those were *optional* for the ca.
+ # 2015/2016 product releases, which was easy to overlook, and I have
+ # been bitten by that. Thus, complain early because it'd be a sore
+ # disappointment to have missing ScaLapack libs show up as a link error
+ # near the end phase of a client package's build phase.
+
+ return sca_libs
+
+ # ---------------------------------------------------------------------
+ # Support for virtual 'mpi'
+ # ---------------------------------------------------------------------
+ @property
+ def mpi_compiler_wrappers(self):
+ '''Return paths to compiler wrappers as a dict of env-like names
+ '''
+ # Intel comes with 2 different flavors of MPI wrappers:
+ #
+ # * mpiicc, mpiicpc, and mpiifort are hardcoded to wrap around
+ # the Intel compilers.
+ # * mpicc, mpicxx, mpif90, and mpif77 allow you to set which
+ # compilers to wrap using I_MPI_CC and friends. By default,
+ # wraps around the GCC compilers.
+ #
+ # In theory, these should be equivalent as long as I_MPI_CC
+ # and friends are set to point to the Intel compilers, but in
+ # practice, mpicc fails to compile some applications while
+ # mpiicc works.
+ bindir = self.component_bin_dir('mpi')
+ if self.compiler.name == 'intel':
+ wrapper_vars = {
+ # eschew Prefix objects -- emphasize the command strings.
+ 'MPICC': os.path.join(bindir, 'mpiicc'),
+ 'MPICXX': os.path.join(bindir, 'mpiicpc'),
+ 'MPIF77': os.path.join(bindir, 'mpiifort'),
+ 'MPIF90': os.path.join(bindir, 'mpiifort'),
+ 'MPIFC': os.path.join(bindir, 'mpiifort'),
+ }
+ else:
+ wrapper_vars = {
+ 'MPICC': os.path.join(bindir, 'mpicc'),
+ 'MPICXX': os.path.join(bindir, 'mpicxx'),
+ 'MPIF77': os.path.join(bindir, 'mpif77'),
+ 'MPIF90': os.path.join(bindir, 'mpif90'),
+ 'MPIFC': os.path.join(bindir, 'mpif90'),
+ }
+ # debug_print("wrapper_vars =", wrapper_vars)
+ return wrapper_vars
+
+ def mpi_setup_dependent_environment(
+ self, spack_env, run_env, dependent_spec, compilers_of_client={}):
+ '''Unified back-end for setup_dependent_environment() of Intel packages
+ that provide 'mpi'.
+
+ Parameters:
+
+ spack_env, run_env, dependent_spec: same as in
+ setup_dependent_environment().
+
+ compilers_of_client (dict): Conveys spack_cc, spack_cxx, etc.,
+ from the scope of dependent packages; constructed in caller.
+ '''
+ # See also: setup_dependent_package()
+ wrapper_vars = {
+ 'I_MPI_CC': compilers_of_client['CC'],
+ 'I_MPI_CXX': compilers_of_client['CXX'],
+ 'I_MPI_F77': compilers_of_client['F77'],
+ 'I_MPI_F90': compilers_of_client['F90'],
+ 'I_MPI_FC': compilers_of_client['FC'],
+ # NB: Normally set by the modulefile, but that is not active here:
+ 'I_MPI_ROOT': self.normalize_path('mpi'),
+ }
+
+ # CAUTION - SIMILAR code in:
+ # var/spack/repos/builtin/packages/mpich/package.py
+ # var/spack/repos/builtin/packages/openmpi/package.py
+ # var/spack/repos/builtin/packages/mvapich2/package.py
+ #
+ # On Cray, the regular compiler wrappers *are* the MPI wrappers.
+ if 'platform=cray' in self.spec:
+ # TODO: Confirm
+ wrapper_vars.update({
+ 'MPICC': compilers_of_client['CC'],
+ 'MPICXX': compilers_of_client['CXX'],
+ 'MPIF77': compilers_of_client['F77'],
+ 'MPIF90': compilers_of_client['F90'],
+ })
+ else:
+ compiler_wrapper_commands = self.mpi_compiler_wrappers
+ wrapper_vars.update({
+ 'MPICC': compiler_wrapper_commands['MPICC'],
+ 'MPICXX': compiler_wrapper_commands['MPICXX'],
+ 'MPIF77': compiler_wrapper_commands['MPIF77'],
+ 'MPIF90': compiler_wrapper_commands['MPIF90'],
+ })
+
+ for key, value in wrapper_vars.items():
+ spack_env.set(key, value)
+
+ debug_print("adding to spack_env:", wrapper_vars)
+
+ # ---------------------------------------------------------------------
+ # General support for child packages
+ # ---------------------------------------------------------------------
+ @property
+ def headers(self):
+ result = HeaderList([])
+ if '+mpi' in self.spec or self.provides('mpi'):
+ result += find_headers(
+ ['mpi'],
+ root=self.component_include_dir('mpi'),
+ recursive=False)
+ if '+mkl' in self.spec or self.provides('mkl'):
+ result += find_headers(
+ ['mkl_cblas', 'mkl_lapacke'],
+ root=self.component_include_dir('mkl'),
+ recursive=False)
+ debug_print(result)
+ return result
+
+ @property
+ def libs(self):
+ result = LibraryList([])
+ if '+mpi' in self.spec or self.provides('mpi'):
+ # If prefix is too general, recursive searches may get files from
+ # supported but inappropriate sub-architectures like 'mic'.
+ libnames = ['libmpifort', 'libmpi']
+ if 'cxx' in self.spec.last_query.extra_parameters:
+ libnames = ['libmpicxx'] + libnames
+ result += find_libraries(
+ libnames,
+ root=self.component_lib_dir('mpi'),
+ shared=True, recursive=True)
+
+ # NB: MKL uses domain-specifics: blas_libs/lapack_libs/scalapack_libs
+
+ debug_print(result)
+ return result
+
+ def setup_environment(self, spack_env, run_env):
+ """Adds environment variables to the generated module file.
+
+ These environment variables come from running:
+
+ .. code-block:: console
+
+ $ source parallel_studio_xe_2017/bin/psxevars.sh intel64
+ [and likewise for MKL, MPI, and other components]
+ """
+ # https://spack.readthedocs.io/en/latest/spack.html#spack.package.PackageBase.setup_environment
+ #
+ # spack_env -> Applied when dependent is built within Spack.
+ # Not used here.
+ # run_env -> Applied to the modulefile of dependent.
+ #
+ # NOTE: Spack runs setup_environment twice, once pre-build to set up
+ # the build environment, and once post-installation to determine
+ # the environment variables needed at run-time to add to the module
+ # file. The script we need to source is only present post-installation,
+ # so check for its existence before sourcing.
+ # TODO: At some point we should split setup_environment into
+ # setup_build_environment and setup_run_environment to get around
+ # this problem.
+ f = self.file_to_source
+ if not f or not os.path.isfile(f):
+ return
+
+ tty.debug("sourcing " + f)
+
+ # All Intel packages expect at least the architecture as argument.
+ # Some accept more args, but those are not (yet?) handled here.
+ args = (_expand_fields('{arch}'),)
+
+ # On Mac, the platform is *also required*, at least as of 2018.
+ # I am not sure about earlier versions.
+ # if sys.platform == 'darwin':
+ # args = ()
+
+ run_env.extend(EnvironmentModifications.from_sourcing_file(f, *args))
+
+ def setup_dependent_environment(self, spack_env, run_env, dependent_spec):
+ # https://spack.readthedocs.io/en/latest/spack.html#spack.package.PackageBase.setup_dependent_environment
+ #
+ # spack_env -> Applied when dependent is built within Spack.
+ # run_env -> Applied to the modulefile of dependent.
+ # Not used here.
+ #
+ # NB: This function is overwritten by 'mpi' provider packages:
+ #
+ # var/spack/repos/builtin/packages/intel-mpi/package.py
+ # var/spack/repos/builtin/packages/intel-parallel-studio/package.py
+ #
+ # They call _setup_dependent_env_callback() as well, but with the
+ # dictionary kwarg compilers_of_client{} present and populated.
+
+ # Handle everything in a callback version.
+ self._setup_dependent_env_callback(spack_env, run_env, dependent_spec)
+
+ def _setup_dependent_env_callback(
+ self, spack_env, run_env, dependent_spec, compilers_of_client={}):
+ # Expected to be called from a client's setup_dependent_environment(),
+ # with args extended to convey the client's compilers as needed.
+
+ if '+mkl' in self.spec or self.provides('mkl'):
+ # Spack's env philosophy demands that we replicate some of the
+ # settings normally handled by file_to_source ...
+ #
+ # TODO: Why is setup_environment() [which uses file_to_source()]
+ # not called as a matter of course upon entering the current
+ # function? (guarding against multiple calls notwithstanding)
+ #
+ # Use a local dict to facilitate debug_print():
+ env_mods = {
+ 'MKLROOT': self.normalize_path('mkl'),
+ 'SPACK_COMPILER_EXTRA_RPATHS': self.component_lib_dir('mkl'),
+ }
+
+ spack_env.set('MKLROOT', env_mods['MKLROOT'])
+ spack_env.append_path('SPACK_COMPILER_EXTRA_RPATHS',
+ env_mods['SPACK_COMPILER_EXTRA_RPATHS'])
+
+ debug_print("adding/modifying spack_env:", env_mods)
+
+ if '+mpi' in self.spec or self.provides('mpi'):
+ if compilers_of_client:
+ self.mpi_setup_dependent_environment(
+ spack_env, run_env, dependent_spec, compilers_of_client)
+ # We could forego this nonce function and inline its code here,
+ # but (a) it sisters mpi_compiler_wrappers() [needed twice]
+ # which performs dizzyingly similar but necessarily different
+ # actions, and (b) function code leaves a bit more breathing
+ # room within the suffocating corset of flake8 line length.
+ else:
+ raise InstallError('compilers_of_client arg required for MPI')
+
+ def setup_dependent_package(self, module, dep_spec):
+ # https://spack.readthedocs.io/en/latest/spack.html#spack.package.PackageBase.setup_dependent_package
+ # Reminder: "module" refers to Python module.
+ # Called before the install() method of dependents.
+
+ if '+mpi' in self.spec or self.provides('mpi'):
+ compiler_wrapper_commands = self.mpi_compiler_wrappers
+ self.spec.mpicc = compiler_wrapper_commands['MPICC']
+ self.spec.mpicxx = compiler_wrapper_commands['MPICXX']
+ self.spec.mpif77 = compiler_wrapper_commands['MPIF77']
+ self.spec.mpifc = compiler_wrapper_commands['MPIFC']
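+            # Dependent packages can then use, e.g., spec['mpi'].mpicc, which
+            # typically points at a wrapper script such as (illustrative)
+            #   <prefix>/.../mpi/intel64/bin/mpiicc   for Intel compilers,
+            # or a GNU-style wrapper like mpicc otherwise; the selection is
+            # made in mpi_compiler_wrappers().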
+ debug_print(("spec '%s' received .mpi* properties:" % self.spec),
+ compiler_wrapper_commands)
+
+ # ---------------------------------------------------------------------
+ # Specifics for installation phase
+ # ---------------------------------------------------------------------
@property
def global_license_file(self):
- """Returns the path where a global license file should be stored.
+ """Returns the path where a Spack-global license file should be stored.
All Intel software shares the same license, so we store it in a
common 'intel' directory."""
- return os.path.join(self.global_license_dir, 'intel',
- os.path.basename(self.license_files[0]))
+ return os.path.join(self.global_license_dir, 'intel', 'license.lic')
+
+ @property
+ def _determine_license_type(self):
+ '''Provide appropriate license tokens for the installer (silent.cfg).
+ '''
+ # See:
+ # ./README-intel.rst, section "Details for licensing tokens".
+ # ./build_systems/README-intel.rst, section "Licenses"
+ #
+ # Ideally, we just tell the installer to look around on the system.
+        # Thankfully, we need neither care about nor emulate where it looks:
+ license_type = {'ACTIVATION_TYPE': 'exist_lic', }
+
+        # However, if (and only if) the Spack-internal Intel license file has
+        # been populated beyond its templated explanatory comments, offer it
+        # to the installer instead:
+ f = self.global_license_file
+ if os.path.isfile(f):
+            # The file will have been created by ../hooks/licensing.py once
+            # self.license_required and self.license_files are populated, so
+            # the "if" above is usually true by the time this function runs.
+ with open(f) as fh:
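+                # i.e., use the file only if some line's first non-blank
+                # character is something other than the comment character.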
+ if re.search(r'^[ \t]*[^' + self.license_comment + '\n]',
+ fh.read(), re.MULTILINE):
+ license_type = {
+ 'ACTIVATION_TYPE': 'license_file',
+ 'ACTIVATION_LICENSE_FILE': f,
+ }
+
+ debug_print(license_type)
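+        # Once merged into silent.cfg, the returned tokens might read, e.g.
+        # (path illustrative):
+        #   ACTIVATION_TYPE=license_file
+        #   ACTIVATION_LICENSE_FILE=.../licenses/intel/license.lic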
+ return license_type
def configure(self, spec, prefix):
- """Writes the ``silent.cfg`` file used to configure the installation.
+ '''Generates the silent.cfg file to pass to installer.sh.
See https://software.intel.com/en-us/articles/configuration-file-format
- """
- # Patterns used to check silent configuration file
- #
- # anythingpat - any string
- # filepat - the file location pattern (/path/to/license.lic)
- # lspat - the license server address pattern (0123@hostname)
- # snpat - the serial number pattern (ABCD-01234567)
- config = {
- # Accept EULA, valid values are: {accept, decline}
- 'ACCEPT_EULA': 'accept',
+ '''
- # Optional error behavior, valid values are: {yes, no}
- 'CONTINUE_WITH_OPTIONAL_ERROR': 'yes',
-
- # Install location, valid values are: {/opt/intel, filepat}
- 'PSET_INSTALL_DIR': prefix,
+ # Both tokens AND values of the configuration file are validated during
+ # the run of the underlying binary installer. Any unknown token or
+ # unacceptable value will cause that installer to fail. Notably, this
+ # applies to trying to specify a license for a product that does not
+ # require one.
+ #
+ # Fortunately, the validator is a script from a solid code base that is
+ # only lightly adapted to the token vocabulary of each product and
+ # release. Let's get that script so we can preempt its objections.
+ #
+        # Rather than running the script on a trial file and parsing its
+        # output, simply scan it for the tokens it supports and build our
+        # configuration accordingly. We can do this because the tokens are
+        # quite long and specific.
- # Continue with overwrite of existing installation directory,
- # valid values are: {yes, no}
- 'CONTINUE_WITH_INSTALLDIR_OVERWRITE': 'yes',
+ validator_code = open('pset/check.awk', 'r').read()
+ # Let's go a little further and distill the tokens (plus some noise).
+ tokenlike_words = set(re.findall(r'[A-Z_]{4,}', validator_code))
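+        # Illustrative only: tokenlike_words might then contain entries such
+        # as {'ACCEPT_EULA', 'PSET_INSTALL_DIR', 'ACTIVATION_TYPE', ...},
+        # plus some incidental all-caps words, which are harmless since the
+        # set is used only for membership tests below.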
- # List of components to install,
- # valid values are: {ALL, DEFAULTS, anythingpat}
- 'COMPONENTS': ';'.join(self._filtered_components),
+ # NB: .cfg files generated with the "--duplicate filename" option have
+ # the COMPONENTS string begin with a separator - do not worry about it.
+ components_joined = ';'.join(self._filtered_components)
+ nonrpm_db_dir = os.path.join(prefix, 'nonrpm-db')
- # Installation mode, valid values are: {install, repair, uninstall}
- 'PSET_MODE': 'install',
+ config_draft = {
+ # Basics first - these should be accepted in all products.
+ 'ACCEPT_EULA': 'accept',
+ 'PSET_MODE': 'install',
+ 'CONTINUE_WITH_OPTIONAL_ERROR': 'yes',
+ 'CONTINUE_WITH_INSTALLDIR_OVERWRITE': 'yes',
+ 'SIGNING_ENABLED': 'no',
- # Directory for non-RPM database, valid values are: {filepat}
- 'NONRPM_DB_DIR': prefix,
+ # Highly variable package specifics:
+ 'PSET_INSTALL_DIR': prefix,
+ 'NONRPM_DB_DIR': nonrpm_db_dir,
+ 'COMPONENTS': components_joined,
- # Perform validation of digital signatures of RPM files,
- # valid values are: {yes, no}
- 'SIGNING_ENABLED': 'no',
+ # Conditional tokens; the first is supported post-2015 only.
+ # Ignore ia32; most recent products don't even provide it.
+ 'ARCH_SELECTED': 'INTEL64', # was: 'ALL'
- # Select target architecture of your applications,
- # valid values are: {IA32, INTEL64, ALL}
- 'ARCH_SELECTED': 'ALL',
+ # 'ism' component -- see uninstall_ism(); also varies by release.
+ 'PHONEHOME_SEND_USAGE_DATA': 'no',
+            # As of 2018.2, that token was renamed to the business-speak one
+            # below. Decline it as well, both on general principles and for
+            # technical reasons such as overhead and compute nodes without
+            # outbound network routing.
+ 'INTEL_SW_IMPROVEMENT_PROGRAM_CONSENT': 'no',
}
+ # Deal with licensing only if truly needed.
+ # NB: Token was 'ACTIVATION' pre ~2013, so basically irrelevant here.
+ if 'ACTIVATION_TYPE' in tokenlike_words:
+ config_draft.update(self._determine_license_type)
- # Not all Intel software requires a license. Trying to specify
- # one anyway will cause the installation to fail.
- if self.license_required:
- config.update({
- # License file or license server,
- # valid values are: {lspat, filepat}
- 'ACTIVATION_LICENSE_FILE': self.global_license_file,
-
- # Activation type, valid values are: {exist_lic,
- # license_server, license_file, trial_lic, serial_number}
- 'ACTIVATION_TYPE': 'license_file',
-
- # Intel(R) Software Improvement Program opt-in,
- # valid values are: {yes, no}
- 'PHONEHOME_SEND_USAGE_DATA': 'no',
- })
-
- with open('silent.cfg', 'w') as cfg:
- for key in config:
- cfg.write('{0}={1}\n'.format(key, config[key]))
+ # Write sorted *by token* so the file looks less like a hash dump.
+        with open('silent.cfg', 'w') as f:
+            for token, value in sorted(config_draft.items()):
+                if token in tokenlike_words:
+                    f.write('%s=%s\n' % (token, value))
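+        # For reference, a resulting silent.cfg might look roughly like this
+        # (values illustrative only):
+        #   ACCEPT_EULA=accept
+        #   ACTIVATION_TYPE=exist_lic
+        #   ARCH_SELECTED=INTEL64
+        #   COMPONENTS=;intel-mkl-core__x86_64;...
+        #   NONRPM_DB_DIR=<prefix>/nonrpm-db
+        #   PSET_INSTALL_DIR=<prefix>
+        #   PSET_MODE=install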
def install(self, spec, prefix):
- """Runs the ``install.sh`` installation script."""
+        '''Runs Intel's install.sh installation script. Afterwards, saves the
+        installer config and logs to <prefix>/.spack
+ '''
+ # prepare
+ tmpdir = tempfile.mkdtemp(prefix='spack-intel-')
install_script = Executable('./install.sh')
+ install_script.add_default_env('TMPDIR', tmpdir)
+
+ # perform
install_script('--silent', 'silent.cfg')
+ # preserve config and logs
+ dst = os.path.join(self.prefix, '.spack')
+ install('silent.cfg', dst)
+ for f in glob.glob('%s/intel*log' % tmpdir):
+ install(f, dst)
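+        # The preserved artifacts then end up as, e.g. (names illustrative):
+        #   <prefix>/.spack/silent.cfg
+        #   <prefix>/.spack/intel.<...>.log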
+
+ @run_after('install')
+ def configure_rpath(self):
+ if '+rpath' not in self.spec:
+ return
+
+ # https://software.intel.com/en-us/cpp-compiler-18.0-developer-guide-and-reference-using-configuration-files
+ compilers_bin_dir = self.component_bin_dir('compiler')
+ compilers_lib_dir = self.component_lib_dir('compiler')
+
+ for compiler_name in 'icc icpc ifort'.split():
+ f = os.path.join(compilers_bin_dir, compiler_name)
+ if not os.path.isfile(f):
+ raise InstallError(
+ 'Cannot find compiler command to configure rpath:\n\t' + f)
+
+ compiler_cfg = os.path.abspath(f + '.cfg')
+ with open(compiler_cfg, 'w') as fh:
+ fh.write('-Xlinker -rpath={0}\n'.format(compilers_lib_dir))
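+        # The compiler drivers automatically read a same-named <driver>.cfg
+        # file next to the executable, so the flag is added to every compile
+        # and link line, e.g. (path illustrative):
+        #   -Xlinker -rpath=<prefix>/.../linux/compiler/lib/intel64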
+
@run_after('install')
- def save_silent_cfg(self):
- """Copies the silent.cfg configuration file to ``<prefix>/.spack``."""
- install('silent.cfg', os.path.join(self.prefix, '.spack'))
+ def filter_compiler_wrappers(self):
+ if (('+mpi' in self.spec or self.provides('mpi')) and
+ '~newdtags' in self.spec):
+ bin_dir = self.component_bin_dir('mpi')
+ for f in 'mpif77 mpif90 mpigcc mpigxx mpiicc mpiicpc ' \
+ 'mpiifort'.split():
+ f = os.path.join(bin_dir, f)
+ filter_file('-Xlinker --enable-new-dtags', ' ', f, string=True)
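+            # Rationale: without the hard-coded '--enable-new-dtags', the
+            # linker emits RPATH rather than RUNPATH entries, matching the
+            # intent of the '~newdtags' variant.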
+
+ @run_after('install')
+ def uninstall_ism(self):
+ # The "Intel(R) Software Improvement Program" [ahem] gets installed,
+ # apparently regardless of PHONEHOME_SEND_USAGE_DATA.
+ #
+ # https://software.intel.com/en-us/articles/software-improvement-program
+ # https://software.intel.com/en-us/forums/intel-c-compiler/topic/506959
+ # Hubert H. (Intel) Mon, 03/10/2014 - 03:02 wrote:
+ # "... you can also uninstall the Intel(R) Software Manager
+ # completely: <installdir>/intel/ism/uninstall.sh"
+
+ f = os.path.join(self.normalize_path('ism'), 'uninstall.sh')
+ if os.path.isfile(f):
+        tty.warn('Uninstalling "Intel Software Improvement Program" '
+                 'component')
+ uninstall = Executable(f)
+ uninstall('--silent')
+
+ # TODO? also try
+ # ~/intel/ism/uninstall --silent
+
+ debug_print(os.getcwd())
+ return
# Check that self.prefix is there after installation
run_after('install')(PackageBase.sanity_check_prefix)