author     Elizabeth Fischer <rpf2116@columbia.edu>  2016-10-05 16:00:27 -0400
committer  Todd Gamblin <tgamblin@llnl.gov>  2016-10-05 13:00:27 -0700
commit     015e29efe105ddd039e8b395e12cf78a3787ebb3 (patch)
tree       ef4f44f11d8547dc7c3905d01f1428d2a375f2b3 /lib
parent     abc9412f23dd3b131fc1126b8fd03ca49f6fd56d (diff)
Documentation Improvements for SC16 (#1676)
* Transferred pending changes from efischer/develop
* 1. Rewrite of "Getting Started": everything you need to set up Spack, even on old/ornery systems. This is not a reference manual section; items covered here are covered more systematically elsewhere in the manual. Some sections were moved here from elsewhere.
  2. Beginning to write three methods of application developer support. Two methods were moved from elsewhere.
* Edits...
* Moved sections in preparation for additional text to be added from old efischer/docs branch.
* Moved 2 more sections.
* Avoid accid
* Applied proofreading edits from @adamjstewart
* Fixed non-standard section characters.
* Moved section on profiling to the developer's guide.
* Still working on Spack workflows...
* Finished draft of packaging_guide.rst
* Renamed sample projects.
* Updates to docstrings
* Added documentation to resolve #638 (content taken from #846)
* Added section on resolving inconsistent run dependencies. Addresses #645
* Showed how to build Python extensions only compatible with certain versions of Python.
* Added examples of getting the right behavior from depends_on(). See #1035
* Added section on Intel compilers and their GCC masquerading feature. Addresses #638, #1687.
* Fixed formatting
* Added fixes to filesystem views. Added a caveats section to ``spack setup``.
* Updated section on Intel compiler configuration because compiler flags currently do not work (see #1687)
* Defined trusted downloads, and updated text based on them. (See #1696)
* Added workflow to deal with buggy upstream software. See #1683
* Added proper separation between Spack Docs vs. Reference Manual
* Renamed spack_workflows to workflows. Resolves a conflict with the .gitignore file.
* Removed repeated section.
* Created new "Vendor Specific Compiler Configuration" section and organized existing Intel section into it. Added new PGI and NAG sections; but they need to be expanded / rewritten based on the existing text plus research through Spack issues on GitHub.
* Fixed text on `spack load --dependencies` to conform to reality. See #1662
* Added patching as option for upstream bugfixes.
* Added section on using licensed compilers.
* Added section on non-downloadable tarballs.
* Wrote sections on NAG and PGI. Arranged compilers in alphabetical order.
* Fix indent.
* Fixed typos.
* Clarified dependency types.
* Applied edits from Adam J. Stewart. Spellchecked workflows and getting_started.
* Removed spurious header
* Fixed Sphinx errors
* Fixed erroneous symbol in docstring.
* Fix many typos and formatting problems.
* Spacing changes
* Added section on fixing Git problems. See #1779
* Fixed signature of install() method.
* Addressed system packages in greater detail. See #1794 #1795
* Fixed typos
* Fixed quotes
* Duplicate section on Spack profiling removed from configuration.rst. It had earlier been moved to developer_guide.rst, where it fits better.
* Minor edits
  - Tweak supported platform language.
  - Various small changes to the new getting started guide.
* Fixed bug with quotes.
Diffstat (limited to 'lib')
-rw-r--r--  lib/spack/docs/basic_usage.rst      701
-rw-r--r--  lib/spack/docs/case_studies.rst     181
-rw-r--r--  lib/spack/docs/configuration.rst     83
-rw-r--r--  lib/spack/docs/developer_guide.rst   24
-rw-r--r--  lib/spack/docs/getting_started.rst 1080
-rw-r--r--  lib/spack/docs/index.rst             25
-rw-r--r--  lib/spack/docs/packaging_guide.rst  308
-rw-r--r--  lib/spack/docs/workflows.rst       1208
-rw-r--r--  lib/spack/spack/directives.py         5
-rw-r--r--  lib/spack/spack/package.py            9
10 files changed, 2757 insertions, 867 deletions
diff --git a/lib/spack/docs/basic_usage.rst b/lib/spack/docs/basic_usage.rst
index fd701789ec..a5377415c7 100644
--- a/lib/spack/docs/basic_usage.rst
+++ b/lib/spack/docs/basic_usage.rst
@@ -230,6 +230,54 @@ but you risk breaking other installed packages. In general, it is safer to
remove dependent packages *before* removing their dependencies or use the
``--dependents`` option.
+
+.. _nondownloadable:
+
+^^^^^^^^^^^^^^^^^^^^^^^^^
+Non-Downloadable Tarballs
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The tarballs for some packages cannot be automatically downloaded by
+Spack. This could be for a number of reasons:
+
+#. The author requires users to manually accept a license agreement
+ before downloading (``jdk`` and ``galahad``).
+
+#. The software is proprietary and cannot be downloaded on the open
+ Internet.
+
+To install these packages, one must create a mirror and manually add
+the tarballs in question to it (see :ref:`mirrors`):
+
+#. Create a directory for the mirror. You can create this directory
+   anywhere you like; it does not have to be inside ``~/.spack``:
+
+ .. code-block:: console
+
+ $ mkdir ~/.spack/manual_mirror
+
+#. Register the mirror with Spack by creating ``~/.spack/mirrors.yaml``:
+
+ .. code-block:: yaml
+
+ mirrors:
+ manual: file:///home/me/.spack/manual_mirror
+
+#. Put your tarballs in it. Tarballs should be named
+ ``<package>/<package>-<version>.tar.gz``. For example:
+
+ .. code-block:: console
+
+ $ ls -l manual_mirror/galahad
+
+ -rw-------. 1 me me 11657206 Jun 21 19:25 galahad-2.60003.tar.gz
+
+#. Install as usual:
+
+ .. code-block:: console
+
+ $ spack install galahad
+
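Taken together, the steps above can be sketched as a single shell session. This is an illustrative sketch only: it uses a throwaway directory instead of ``~/.spack``, and the ``galahad`` tarball is created empty just to show the required layout.

```shell
# Sketch of the manual-mirror workflow, using a throwaway directory.
# All paths here are illustrative; substitute your own locations.
SPACK_CFG="$(mktemp -d)"
MIRROR_DIR="$SPACK_CFG/manual_mirror"

# 1. Create a directory for the mirror.
mkdir -p "$MIRROR_DIR/galahad"

# 2. Register the mirror (normally written to ~/.spack/mirrors.yaml).
cat > "$SPACK_CFG/mirrors.yaml" <<EOF
mirrors:
  manual: file://$MIRROR_DIR
EOF

# 3. Place the tarball, following the <package>/<package>-<version>.tar.gz
#    naming scheme (an empty placeholder here).
touch "$MIRROR_DIR/galahad/galahad-2.60003.tar.gz"

ls "$MIRROR_DIR/galahad"
# 4. At this point, 'spack install galahad' would fetch from the mirror.
```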
-------------------------
Seeing installed packages
-------------------------
@@ -382,175 +430,6 @@ with the 'debug' compile-time option enabled.
The full spec syntax is discussed in detail in :ref:`sec-specs`.
-.. _compiler-config:
-
-----------------------
-Compiler configuration
-----------------------
-
-Spack has the ability to build packages with multiple compilers and
-compiler versions. Spack searches for compilers on your machine
-automatically the first time it is run. It does this by inspecting
-your ``PATH``.
-
-.. _spack-compilers:
-
-^^^^^^^^^^^^^^^^^^^
-``spack compilers``
-^^^^^^^^^^^^^^^^^^^
-
-You can see which compilers spack has found by running ``spack
-compilers`` or ``spack compiler list``:
-
-.. code-block:: console
-
- $ spack compilers
- ==> Available compilers
- -- gcc ---------------------------------------------------------
- gcc@4.9.0 gcc@4.8.0 gcc@4.7.0 gcc@4.6.2 gcc@4.4.7
- gcc@4.8.2 gcc@4.7.1 gcc@4.6.3 gcc@4.6.1 gcc@4.1.2
- -- intel -------------------------------------------------------
- intel@15.0.0 intel@14.0.0 intel@13.0.0 intel@12.1.0 intel@10.0
- intel@14.0.3 intel@13.1.1 intel@12.1.5 intel@12.0.4 intel@9.1
- intel@14.0.2 intel@13.1.0 intel@12.1.3 intel@11.1
- intel@14.0.1 intel@13.0.1 intel@12.1.2 intel@10.1
- -- clang -------------------------------------------------------
- clang@3.4 clang@3.3 clang@3.2 clang@3.1
- -- pgi ---------------------------------------------------------
- pgi@14.3-0 pgi@13.2-0 pgi@12.1-0 pgi@10.9-0 pgi@8.0-1
- pgi@13.10-0 pgi@13.1-1 pgi@11.10-0 pgi@10.2-0 pgi@7.1-3
- pgi@13.6-0 pgi@12.8-0 pgi@11.1-0 pgi@9.0-4 pgi@7.0-6
-
-Any of these compilers can be used to build Spack packages. More on
-how this is done is in :ref:`sec-specs`.
-
-.. _spack-compiler-add:
-
-^^^^^^^^^^^^^^^^^^^^^^
-``spack compiler add``
-^^^^^^^^^^^^^^^^^^^^^^
-
-An alias for ``spack compiler find``.
-
-.. _spack-compiler-find:
-
-^^^^^^^^^^^^^^^^^^^^^^^
-``spack compiler find``
-^^^^^^^^^^^^^^^^^^^^^^^
-
-If you do not see a compiler in this list, but you want to use it with
-Spack, you can simply run ``spack compiler find`` with the path to
-where the compiler is installed. For example:
-
-.. code-block:: console
-
- $ spack compiler find /usr/local/tools/ic-13.0.079
- ==> Added 1 new compiler to /Users/gamblin2/.spack/compilers.yaml
- intel@13.0.079
-
-Or you can run ``spack compiler find`` with no arguments to force
-auto-detection. This is useful if you do not know where compilers are
-installed, but you know that new compilers have been added to your
-``PATH``. For example, using dotkit, you might do this:
-
-.. code-block:: console
-
- $ module load gcc-4.9.0
- $ spack compiler find
- ==> Added 1 new compiler to /Users/gamblin2/.spack/compilers.yaml
- gcc@4.9.0
-
-This loads the environment module for gcc-4.9.0 to add it to
-``PATH``, and then it adds the compiler to Spack.
-
-.. _spack-compiler-info:
-
-^^^^^^^^^^^^^^^^^^^^^^^
-``spack compiler info``
-^^^^^^^^^^^^^^^^^^^^^^^
-
-If you want to see specifics on a particular compiler, you can run
-``spack compiler info`` on it:
-
-.. code-block:: console
-
- $ spack compiler info intel@15
- intel@15.0.0:
- cc = /usr/local/bin/icc-15.0.090
- cxx = /usr/local/bin/icpc-15.0.090
- f77 = /usr/local/bin/ifort-15.0.090
- fc = /usr/local/bin/ifort-15.0.090
- modules = []
- operating system = centos6
-
-This shows which C, C++, and Fortran compilers were detected by Spack.
-Notice also that we didn't have to be too specific about the
-version. We just said ``intel@15``, and information about the only
-matching Intel compiler was displayed.
-
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Manual compiler configuration
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-If auto-detection fails, you can manually configure a compiler by
-editing your ``~/.spack/compilers.yaml`` file. You can do this by running
-``spack config edit compilers``, which will open the file in your ``$EDITOR``.
-
-Each compiler configuration in the file looks like this:
-
-.. code-block:: yaml
-
- compilers:
- - compiler:
- modules = []
- operating_system: centos6
- paths:
- cc: /usr/local/bin/icc-15.0.024-beta
- cxx: /usr/local/bin/icpc-15.0.024-beta
- f77: /usr/local/bin/ifort-15.0.024-beta
- fc: /usr/local/bin/ifort-15.0.024-beta
- spec: intel@15.0.0:
-
-For compilers, like ``clang``, that do not support Fortran, put
-``None`` for ``f77`` and ``fc``:
-
-.. code-block:: yaml
-
- paths:
- cc: /usr/bin/clang
- cxx: /usr/bin/clang++
- f77: None
- fc: None
- spec: clang@3.3svn:
-
-Once you save the file, the configured compilers will show up in the
-list displayed by ``spack compilers``.
-
-You can also add compiler flags to manually configured compilers. The
-valid flags are ``cflags``, ``cxxflags``, ``fflags``, ``cppflags``,
-``ldflags``, and ``ldlibs``. For example:
-
-.. code-block:: yaml
-
- compilers:
- - compiler:
- modules = []
- operating_system: OS
- paths:
- cc: /usr/local/bin/icc-15.0.024-beta
- cxx: /usr/local/bin/icpc-15.0.024-beta
- f77: /usr/local/bin/ifort-15.0.024-beta
- fc: /usr/local/bin/ifort-15.0.024-beta
- parameters:
- cppflags: -O3 -fPIC
- spec: intel@15.0.0:
-
-These flags will be treated by spack as if they were enterred from
-the command line each time this compiler is used. The compiler wrappers
-then inject those flags into the compiler command. Compiler flags
-enterred from the command line will be discussed in more detail in the
-following section.
-
.. _sec-specs:
--------------------
@@ -945,51 +824,17 @@ versions are now filtered out.
Integration with module systems
-------------------------------
-.. note::
-
- Environment module support is currently experimental and should not
- be considered a stable feature of Spack. In particular, the
- interface and/or generated module names may change in future
- versions.
-
-Spack provides some integration with
-`Environment Modules <http://modules.sourceforge.net/>`__
-and `Dotkit <https://computing.llnl.gov/?set=jobs&page=dotkit>`_ to make
-it easier to use the packages it installed.
-
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Installing Environment Modules
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Spack provides some integration with `Environment Modules
+<http://modules.sourceforge.net/>`_ to make it easier to use the
+packages it installs. If your system does not already have
+Environment Modules, see :ref:`InstallEnvironmentModules`.
-In order to use Spack's generated environment modules, you must have
-installed the *Environment Modules* package. On many Linux
-distributions, this can be installed from the vendor's repository:
-
-.. code-block:: sh
-
- $ yum install environment-modules # (Fedora/RHEL/CentOS)
- $ apt-get install environment-modules # (Ubuntu/Debian)
-
-If your Linux distribution does not have
-Environment Modules, you can get it with Spack:
-
-.. code-block:: console
-
- $ spack install environment-modules
-
-In this case to activate it automatically you need to add the following two
-lines to your ``.bashrc`` profile (or similar):
-
-.. code-block:: sh
-
- MODULES_HOME=`spack location -i environment-modules`
- source ${MODULES_HOME}/Modules/init/bash
-
-If you use a Unix shell other than ``bash``, modify the commands above
-accordingly and source the appropriate file in
-``${MODULES_HOME}/Modules/init/``.
+.. note::
-.. TODO : Add a similar section on how to install dotkit ?
+ Spack also supports `Dotkit
+ <https://computing.llnl.gov/?set=jobs&page=dotkit>`_, which is used
+ by some systems. If your system does not already have a module
+ system installed, you should use Environment Modules or LMod.
^^^^^^^^^^^^^^^^^^^^^^^^
Spack and module systems
@@ -1196,9 +1041,36 @@ of module files:
"""Set up the compile and runtime environments for a package."""
pass
-"""""""""""""""""
+.. code-block:: python
+
+ def setup_dependent_environment(self, spack_env, run_env, dependent_spec):
+ """Set up the environment of packages that depend on this one"""
+ pass
+
+As briefly stated in the docstrings, the first method lets you customize
+the module file content for the package you are currently writing; the
+second allows for modifications to the module files of packages that
+depend on it. In both cases, one needs to fill ``run_env`` with the
+desired list of environment modifications.
+
+""""""""""""""""""""""""""""""""""""""""""""""""
+Example: ``builtin/packages/python/package.py``
+""""""""""""""""""""""""""""""""""""""""""""""""
+
+The ``python`` package that comes with the ``builtin`` Spack repository
+overrides ``setup_dependent_environment`` in the following way:
+
+.. code-block:: python
+
+ def setup_dependent_environment(self, spack_env, run_env, extension_spec):
+ if extension_spec.package.extends(self.spec):
+ run_env.prepend_path('PYTHONPATH', os.path.join(extension_spec.prefix, self.site_packages_dir))
+
+to insert the appropriate ``PYTHONPATH`` modifications in the module
+files of python packages.
+
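For illustration, the mechanics of filling ``run_env`` can be sketched outside of Spack with a stand-in for the environment-modifications object. ``EnvModifications`` and ``MyPackage`` below are hypothetical stand-ins, not real Spack classes; they only mimic the recording behavior described above.

```python
class EnvModifications:
    """Stand-in for Spack's environment-modifications object: it simply
    records each requested change as a tuple."""
    def __init__(self):
        self.mods = []

    def prepend_path(self, var, path):
        self.mods.append(('prepend', var, path))

    def set(self, var, value):
        self.mods.append(('set', var, value))


class MyPackage:
    """Hypothetical package showing how run_env might be filled."""
    prefix = '/opt/spack/mypackage'  # illustrative install prefix

    def setup_environment(self, spack_env, run_env):
        # Everything added to run_env would end up in the generated
        # module file for this package.
        run_env.prepend_path('PATH', self.prefix + '/bin')
        run_env.set('MYPACKAGE_ROOT', self.prefix)


run_env = EnvModifications()
MyPackage().setup_environment(None, run_env)
print(run_env.mods)
# → [('prepend', 'PATH', '/opt/spack/mypackage/bin'),
#    ('set', 'MYPACKAGE_ROOT', '/opt/spack/mypackage')]
```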
+^^^^^^^^^^^^^^^^^
Recursive Modules
-"""""""""""""""""
+^^^^^^^^^^^^^^^^^
In some cases, it is desirable to load not just a module, but also all
the modules it depends on. This is not required for most modules
@@ -1207,18 +1079,30 @@ packages use RPATH to find their dependencies: this can be true in
particular for Python extensions, which are currently *not* built with
RPATH.
-Modules may be loaded recursively with the ``load`` command's
-``--dependencies`` or ``-r`` argument:
+Scripts that load modules recursively may be generated with:
.. code-block:: console
- $ spack load --dependencies <spec> ...
+ $ spack module loads --dependencies <spec>
+
+An equivalent alternative is:
+
+.. code-block:: console
+
+ $ source <( spack module loads --dependencies <spec> )
+
+.. warning::
+
+ The ``spack load`` command does not currently accept the
+ ``--dependencies`` flag. Use ``spack module loads`` instead, for
+ now.
-More than one spec may be placed on the command line here.
+.. See #1662
-"""""""""""""""""""""""""""""""""
+
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Module Commands for Shell Scripts
-"""""""""""""""""""""""""""""""""
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Although Spack is flexible, the ``module`` command is much faster.
This could become an issue when emitting a series of ``spack load``
@@ -1228,75 +1112,64 @@ cut-and-pasted into a shell script. For example:
.. code-block:: console
- $ spack module find tcl --dependencies --shell py-numpy git
- # bzip2@1.0.6%gcc@4.9.3=linux-x86_64
- module load bzip2-1.0.6-gcc-4.9.3-ktnrhkrmbbtlvnagfatrarzjojmkvzsx
- # ncurses@6.0%gcc@4.9.3=linux-x86_64
- module load ncurses-6.0-gcc-4.9.3-kaazyneh3bjkfnalunchyqtygoe2mncv
- # zlib@1.2.8%gcc@4.9.3=linux-x86_64
- module load zlib-1.2.8-gcc-4.9.3-v3ufwaahjnviyvgjcelo36nywx2ufj7z
- # sqlite@3.8.5%gcc@4.9.3=linux-x86_64
- module load sqlite-3.8.5-gcc-4.9.3-a3eediswgd5f3rmto7g3szoew5nhehbr
- # readline@6.3%gcc@4.9.3=linux-x86_64
- module load readline-6.3-gcc-4.9.3-se6r3lsycrwxyhreg4lqirp6xixxejh3
- # python@3.5.1%gcc@4.9.3=linux-x86_64
- module load python-3.5.1-gcc-4.9.3-5q5rsrtjld4u6jiicuvtnx52m7tfhegi
- # py-setuptools@20.5%gcc@4.9.3=linux-x86_64
- module load py-setuptools-20.5-gcc-4.9.3-4qr2suj6p6glepnedmwhl4f62x64wxw2
- # py-nose@1.3.7%gcc@4.9.3=linux-x86_64
- module load py-nose-1.3.7-gcc-4.9.3-pwhtjw2dvdvfzjwuuztkzr7b4l6zepli
- # openblas@0.2.17%gcc@4.9.3+shared=linux-x86_64
- module load openblas-0.2.17-gcc-4.9.3-pw6rmlom7apfsnjtzfttyayzc7nx5e7y
- # py-numpy@1.11.0%gcc@4.9.3+blas+lapack=linux-x86_64
- module load py-numpy-1.11.0-gcc-4.9.3-mulodttw5pcyjufva4htsktwty4qd52r
- # curl@7.47.1%gcc@4.9.3=linux-x86_64
- module load curl-7.47.1-gcc-4.9.3-ohz3fwsepm3b462p5lnaquv7op7naqbi
- # autoconf@2.69%gcc@4.9.3=linux-x86_64
- module load autoconf-2.69-gcc-4.9.3-bkibjqhgqm5e3o423ogfv2y3o6h2uoq4
- # cmake@3.5.0%gcc@4.9.3~doc+ncurses+openssl~qt=linux-x86_64
- module load cmake-3.5.0-gcc-4.9.3-x7xnsklmgwla3ubfgzppamtbqk5rwn7t
- # expat@2.1.0%gcc@4.9.3=linux-x86_64
- module load expat-2.1.0-gcc-4.9.3-6pkz2ucnk2e62imwakejjvbv6egncppd
- # git@2.8.0-rc2%gcc@4.9.3+curl+expat=linux-x86_64
- module load git-2.8.0-rc2-gcc-4.9.3-3bib4hqtnv5xjjoq5ugt3inblt4xrgkd
+ $ spack module loads --dependencies py-numpy git
+ # bzip2@1.0.6%gcc@4.9.3=linux-x86_64
+ module load bzip2-1.0.6-gcc-4.9.3-ktnrhkrmbbtlvnagfatrarzjojmkvzsx
+ # ncurses@6.0%gcc@4.9.3=linux-x86_64
+ module load ncurses-6.0-gcc-4.9.3-kaazyneh3bjkfnalunchyqtygoe2mncv
+ # zlib@1.2.8%gcc@4.9.3=linux-x86_64
+ module load zlib-1.2.8-gcc-4.9.3-v3ufwaahjnviyvgjcelo36nywx2ufj7z
+ # sqlite@3.8.5%gcc@4.9.3=linux-x86_64
+ module load sqlite-3.8.5-gcc-4.9.3-a3eediswgd5f3rmto7g3szoew5nhehbr
+ # readline@6.3%gcc@4.9.3=linux-x86_64
+ module load readline-6.3-gcc-4.9.3-se6r3lsycrwxyhreg4lqirp6xixxejh3
+ # python@3.5.1%gcc@4.9.3=linux-x86_64
+ module load python-3.5.1-gcc-4.9.3-5q5rsrtjld4u6jiicuvtnx52m7tfhegi
+ # py-setuptools@20.5%gcc@4.9.3=linux-x86_64
+ module load py-setuptools-20.5-gcc-4.9.3-4qr2suj6p6glepnedmwhl4f62x64wxw2
+ # py-nose@1.3.7%gcc@4.9.3=linux-x86_64
+ module load py-nose-1.3.7-gcc-4.9.3-pwhtjw2dvdvfzjwuuztkzr7b4l6zepli
+ # openblas@0.2.17%gcc@4.9.3+shared=linux-x86_64
+ module load openblas-0.2.17-gcc-4.9.3-pw6rmlom7apfsnjtzfttyayzc7nx5e7y
+ # py-numpy@1.11.0%gcc@4.9.3+blas+lapack=linux-x86_64
+ module load py-numpy-1.11.0-gcc-4.9.3-mulodttw5pcyjufva4htsktwty4qd52r
+ # curl@7.47.1%gcc@4.9.3=linux-x86_64
+ module load curl-7.47.1-gcc-4.9.3-ohz3fwsepm3b462p5lnaquv7op7naqbi
+ # autoconf@2.69%gcc@4.9.3=linux-x86_64
+ module load autoconf-2.69-gcc-4.9.3-bkibjqhgqm5e3o423ogfv2y3o6h2uoq4
+ # cmake@3.5.0%gcc@4.9.3~doc+ncurses+openssl~qt=linux-x86_64
+ module load cmake-3.5.0-gcc-4.9.3-x7xnsklmgwla3ubfgzppamtbqk5rwn7t
+ # expat@2.1.0%gcc@4.9.3=linux-x86_64
+ module load expat-2.1.0-gcc-4.9.3-6pkz2ucnk2e62imwakejjvbv6egncppd
+ # git@2.8.0-rc2%gcc@4.9.3+curl+expat=linux-x86_64
+ module load git-2.8.0-rc2-gcc-4.9.3-3bib4hqtnv5xjjoq5ugt3inblt4xrgkd
The script may be further edited by removing unnecessary modules.
-This script may be directly executed in bash via:
-.. code-block:: sh
- source < (spack module find tcl --dependencies --shell py-numpy git)
+^^^^^^^^^^^^^^^
+Module Prefixes
+^^^^^^^^^^^^^^^
-^^^^^^^^^^^^^^^^^^^^^^^^^
-Regenerating Module files
-^^^^^^^^^^^^^^^^^^^^^^^^^
+On some systems, modules are automatically prefixed with a certain
+string; ``spack module loads`` needs to know about that prefix when it
+issues ``module load`` commands. Add the ``--prefix`` option to your
+``spack module loads`` commands if this is necessary.
-.. code-block:: python
+For example, consider the following on one system:
- def setup_dependent_environment(self, spack_env, run_env, dependent_spec):
- """Set up the environment of packages that depend on this one"""
- pass
-
-As briefly stated in the comments, the first method lets you customize the
-module file content for the package you are currently writing, the second
-allows for modifications to your dependees module file. In both cases one
-needs to fill ``run_env`` with the desired list of environment modifications.
-
-""""""""""""""""""""""""""""""""""""""""""""""""
-Example : ``builtin/packages/python/package.py``
-""""""""""""""""""""""""""""""""""""""""""""""""
-
-The ``python`` package that comes with the ``builtin`` Spack repository
-overrides ``setup_dependent_environment`` in the following way:
+.. code-block:: console
-.. code-block:: python
+ $ module avail
+ linux-SuSE11-x86_64/antlr-2.7.7-gcc-5.3.0-bdpl46y
- def setup_dependent_environment(self, spack_env, run_env, extension_spec):
- if extension_spec.package.extends(self.spec):
- run_env.prepend_path('PYTHONPATH', os.path.join(extension_spec.prefix, self.site_packages_dir))
+ $ spack module loads antlr # WRONG!
+ # antlr@2.7.7%gcc@5.3.0~csharp+cxx~java~python arch=linux-SuSE11-x86_64
+ module load antlr-2.7.7-gcc-5.3.0-bdpl46y
-to insert the appropriate ``PYTHONPATH`` modifications in the module
-files of python packages.
+ $ spack module loads --prefix linux-SuSE11-x86_64/ antlr
+ # antlr@2.7.7%gcc@5.3.0~csharp+cxx~java~python arch=linux-SuSE11-x86_64
+ module load linux-SuSE11-x86_64/antlr-2.7.7-gcc-5.3.0-bdpl46y
^^^^^^^^^^^^^^^^^^^
Configuration files
@@ -1461,23 +1334,14 @@ load two or more versions of the same software at the same time.
The ``conflict`` option is ``tcl`` specific
^^^^^^^^^^^^^^^^^^^^^^^^^
-Regenerating module files
+Regenerating Module files
^^^^^^^^^^^^^^^^^^^^^^^^^
-Sometimes you may need to regenerate the modules files. For example,
-if newer, fancier module support is added to Spack at some later date,
-you may want to regenerate all the modules to take advantage of these
-new features.
-
-.. _spack-module:
-
-""""""""""""""""""""""""
-``spack module refresh``
-""""""""""""""""""""""""
-
-Running ``spack module refresh`` will remove the
-``share/spack/modules`` and ``share/spack/dotkit`` directories, then
-regenerate all module and dotkit files from scratch:
+Module and dotkit files are generated when packages are installed, and
+are placed in the directory ``share/spack/modules`` under the Spack
+root. The command ``spack module refresh`` will regenerate them all
+without re-building the packages; for example, if module format or options
+have changed:
.. code-block:: console
@@ -1485,117 +1349,6 @@ regenerate all module and dotkit files from scratch:
==> Regenerating tcl module files.
==> Regenerating dotkit module files.
-----------------
-Filesystem Views
-----------------
-
-.. Maybe this is not the right location for this documentation.
-
-The Spack installation area allows for many package installation trees
-to coexist and gives the user choices as to what versions and variants
-of packages to use. To use them, the user must rely on a way to
-aggregate a subset of those packages. The section on Environment
-Modules gives one good way to do that which relies on setting various
-environment variables. An alternative way to aggregate is through
-**filesystem views**.
-
-A filesystem view is a single directory tree which is the union of the
-directory hierarchies of the individual package installation trees
-that have been included. The files of the view's installed packages
-are brought into the view by symbolic or hard links back to their
-location in the original Spack installation area. As the view is
-formed, any clashes due to a file having the exact same path in its
-package installation tree are handled in a first-come-first-served
-basis and a warning is printed. Packages and their dependencies can
-be both added and removed. During removal, empty directories will be
-purged. These operations can be limited to pertain to just the
-packages listed by the user or to exclude specific dependencies and
-they allow for software installed outside of Spack to coexist inside
-the filesystem view tree.
-
-By its nature, a filesystem view represents a particular choice of one
-set of packages among all the versions and variants that are available
-in the Spack installation area. It is thus equivalent to the
-directory hiearchy that might exist under ``/usr/local``. While this
-limits a view to including only one version/variant of any package, it
-provides the benefits of having a simpler and traditional layout which
-may be used without any particular knowledge that its packages were
-built by Spack.
-
-Views can be used for a variety of purposes including:
-
-* A central installation in a traditional layout, eg ``/usr/local`` maintained over time by the sysadmin.
-* A self-contained installation area which may for the basis of a top-level atomic versioning scheme, eg ``/opt/pro`` vs ``/opt/dev``.
-* Providing an atomic and monolithic binary distribution, eg for delivery as a single tarball.
-* Producing ephemeral testing or developing environments.
-
-^^^^^^^^^^^^^^^^^^^^^^
-Using Filesystem Views
-^^^^^^^^^^^^^^^^^^^^^^
-
-A filesystem view is created and packages are linked in by the ``spack
-view`` command's ``symlink`` and ``hardlink`` sub-commands. The
-``spack view remove`` command can be used to unlink some or all of the
-filesystem view.
-
-The following example creates a filesystem view based
-on an installed ``cmake`` package and then removes from the view the
-files in the ``cmake`` package while retaining its dependencies.
-
-.. code-block:: console
-
- $ spack view --verbose symlink myview cmake@3.5.2
- ==> Linking package: "ncurses"
- ==> Linking package: "zlib"
- ==> Linking package: "openssl"
- ==> Linking package: "cmake"
-
- $ ls myview/
- bin doc etc include lib share
-
- $ ls myview/bin/
- captoinfo clear cpack ctest infotocap openssl tabs toe tset
- ccmake cmake c_rehash infocmp ncurses6-config reset tic tput
-
- $ spack view --verbose --dependencies false rm myview cmake@3.5.2
- ==> Removing package: "cmake"
-
- $ ls myview/bin/
- captoinfo c_rehash infotocap openssl tabs toe tset
- clear infocmp ncurses6-config reset tic tput
-
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Limitations of Filesystem Views
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-This section describes some limitations that should be considered in
-using filesystems views.
-
-Filesystem views are merely organizational. The binary executable
-programs, shared libraries and other build products found in a view
-are mere links into the "real" Spack installation area. If a view is
-built with symbolic links it requires the Spack-installed package to
-be kept in place. Building a view with hardlinks removes this
-requirement but any internal paths (eg, rpath or ``#!`` interpreter
-specifications) will still require the Spack-installed package files
-to be in place.
-
-.. FIXME: reference the relocation work of Hegner and Gartung.
-
-As described above, when a view is built only a single instance of a
-file may exist in the unified filesystem tree. If more than one
-package provides a file at the same path (relative to its own root)
-then it is the first package added to the view that "wins". A warning
-is printed and it is up to the user to determine if the conflict
-matters.
-
-It is up to the user to assure a consistent view is produced. In
-particular if the user excludes packages, limits the following of
-dependencies or removes packages the view may become inconsistent. In
-particular, if two packages require the same sub-tree of dependencies,
-removing one package (recursively) will remove its dependencies and
-leave the other package broken.
-
.. _extensions:
---------------------------
@@ -1864,144 +1617,6 @@ This issue typically manifests with the error below:
A nicer error message is TBD in future versions of Spack.
-.. _cray-support:
-
--------------
-Spack on Cray
--------------
-
-Spack differs slightly when used on a Cray system. The architecture spec
-can differentiate between the front-end and back-end processor and operating system.
-For example, on Edison at NERSC, the back-end target processor
-is "Ivy Bridge", so you can specify to use the back-end this way:
-
-.. code-block:: console
-
- $ spack install zlib target=ivybridge
-
-You can also use the operating system to build against the back-end:
-
-.. code-block:: console
-
- $ spack install zlib os=CNL10
-
-Notice that the name includes both the operating system name and the major
-version number concatenated together.
-
-Alternatively, if you want to build something for the front-end,
-you can specify the front-end target processor. The processor for a login node
-on Edison is "Sandy bridge" so we specify on the command line like so:
-
-.. code-block:: console
-
- $ spack install zlib target=sandybridge
-
-And the front-end operating system is:
-
-.. code-block:: console
-
- $ spack install zlib os=SuSE11
-
-^^^^^^^^^^^^^^^^^^^^^^^
-Cray compiler detection
-^^^^^^^^^^^^^^^^^^^^^^^
-
-Spack can detect compilers using two methods. For the front-end, we treat
-everything the same. The difference lies in back-end compiler detection.
-Back-end compiler detection is made via the Tcl module avail command.
-Once it detects the compiler it writes the appropriate PrgEnv and compiler
-module name to compilers.yaml and sets the paths to each compiler with Cray\'s
-compiler wrapper names (i.e. cc, CC, ftn). During build time, Spack will load
-the correct PrgEnv and compiler module and will call appropriate wrapper.
-
-The compilers.yaml config file will also differ. There is a
-modules section that is filled with the compiler's Programming Environment
-and module name. On other systems, this field is empty []:
-
-.. code-block:: yaml
-
- - compiler:
- modules:
- - PrgEnv-intel
- - intel/15.0.109
-
-As mentioned earlier, the compiler paths will look different on a Cray system.
-Since most compilers are invoked using cc, CC and ftn, the paths for each
-compiler are replaced with their respective Cray compiler wrapper names:
-
-.. code-block:: yaml
-
- paths:
- cc: cc
- cxx: CC
- f77: ftn
- fc: ftn
-
-As opposed to an explicit path to the compiler executable. This allows Spack
-to call the Cray compiler wrappers during build time.
-
-For more on compiler configuration, check out :ref:`compiler-config`.
-
-Spack sets the default Cray link type to dynamic, to better match other
-other platforms. Individual packages can enable static linking (which is the
-default outside of Spack on cray systems) using the ``-static`` flag.
-
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Setting defaults and using Cray modules
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-If you want to use default compilers for each PrgEnv and also be able
-to load cray external modules, you will need to set up a ``packages.yaml``.
-
-Here's an example of an external configuration for cray modules:
-
-.. code-block:: yaml
-
- packages:
- mpi:
- modules:
- mpich@7.3.1%gcc@5.2.0 arch=cray_xc-haswell-CNL10: cray-mpich
- mpich@7.3.1%intel@16.0.0.109 arch=cray_xc-haswell-CNL10: cray-mpich
-
-This tells Spack that for whatever package that depends on mpi, load the
-cray-mpich module into the environment. You can then be able to use whatever
-environment variables, libraries, etc, that are brought into the environment
-via module load.
-
-You can set the default compiler that Spack uses for each compiler type.
-If you want to use the Cray defaults, then set them under ``all:`` in
-``packages.yaml``. In the compiler field, list the compiler specs in your
-order of preference. Whenever you build with that compiler type, Spack
-will concretize to that version.
-
-Here is an example of a full ``packages.yaml`` used at NERSC:
-
-.. code-block:: yaml
-
- packages:
- mpi:
- modules:
- mpich@7.3.1%gcc@5.2.0 arch=cray_xc-CNL10-ivybridge: cray-mpich
- mpich@7.3.1%intel@16.0.0.109 arch=cray_xc-SuSE11-ivybridge: cray-mpich
- buildable: False
- netcdf:
- modules:
- netcdf@4.3.3.1%gcc@5.2.0 arch=cray_xc-CNL10-ivybridge: cray-netcdf
- netcdf@4.3.3.1%intel@16.0.0.109 arch=cray_xc-CNL10-ivybridge: cray-netcdf
- buildable: False
- hdf5:
- modules:
- hdf5@1.8.14%gcc@5.2.0 arch=cray_xc-CNL10-ivybridge: cray-hdf5
- hdf5@1.8.14%intel@16.0.0.109 arch=cray_xc-CNL10-ivybridge: cray-hdf5
- buildable: False
- all:
- compiler: [gcc@5.2.0, intel@16.0.0.109]
-
-Here we tell Spack that whenever we want to build with gcc, it should use
-version 5.2.0, and whenever we want to build with Intel compilers, it should
-use version 16.0.0.109. We add a spec for each compiler type for each Cray
-module. This ensures that for each compiler on our system we can use the
-corresponding external module.
-
-For more on external packages check out the section :ref:`sec-external_packages`.
------------
Getting Help
diff --git a/lib/spack/docs/case_studies.rst b/lib/spack/docs/case_studies.rst
deleted file mode 100644
index fcec636c27..0000000000
--- a/lib/spack/docs/case_studies.rst
+++ /dev/null
@@ -1,181 +0,0 @@
-=======================================
-Using Spack for CMake-based Development
-=======================================
-
-These are instructions on how to use Spack to aid in the development
-of a CMake-based project. Spack is used to help find the dependencies
-for the project, configure it at development time, and then package it
-in a way that others can install. Using Spack for CMake-based
-development consists of three parts:
-
-#. Setting up the CMake build in your software
-#. Writing the Spack Package
-#. Using it from Spack.
-
---------------------------
-Setting Up the CMake Build
---------------------------
-
-You should follow standard CMake conventions in setting up your
-software, your CMake build should NOT depend on or require Spack to
-build. See here for an example:
-
-https://github.com/citibeth/icebin
-
-Note that there's one exception here to the rule I mentioned above.
-In ``CMakeLists.txt``, I have the following line:
-
-.. code-block:: none
-
- include_directories($ENV{CMAKE_TRANSITIVE_INCLUDE_PATH})
-
-This is a hook into Spack, and it ensures that all transitive
-dependencies are included in the include path. It's not needed if
-everything is in one tree, but it is (sometimes) needed in the Spack
-world; when running without Spack, it has no effect.
-
-Note that this "feature" is controversial, could break with future
-versions of GNU ld, and probably not the best to use. The best
-practice is that you make sure that anything you #include is listed as
-a dependency in your CMakeLists.txt.
-
-To be more specific: if you #include something from package A and an
-installed HEADER FILE in A #includes something from package B, then
-you should also list B as a dependency in your CMake build. If you
-depend on A but header files exported by A do NOT #include things from
-B, then you do NOT need to list B as a dependency --- even if linking
-to A links in libB.so as well.
-
-I also recommend that you set up your CMake build to use RPATHs
-correctly. Not only is this good practice, but it also ensures
-that your package will build the same with or without ``spack
-install``.
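A minimal RPATH setup along these lines might look like the following in ``CMakeLists.txt`` (standard CMake variables; this is a sketch, not taken from the IceBin build):

```cmake
# Keep the directories used at link time as RPATHs in installed binaries,
# so executables find their Spack-installed dependencies without needing
# LD_LIBRARY_PATH to be set.
set(CMAKE_INSTALL_RPATH_USE_LINK_PATH TRUE)
set(CMAKE_INSTALL_RPATH "${CMAKE_INSTALL_PREFIX}/lib")
set(CMAKE_BUILD_WITH_INSTALL_RPATH FALSE)
```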
-
--------------------------
-Writing the Spack Package
--------------------------
-
-Now that you have a CMake build, you want to tell Spack how to
-configure it. This is done by writing a Spack package for your
-software. See here for example:
-
-https://github.com/citibeth/spack/blob/efischer/develop/var/spack/repos/builtin/packages/icebin/package.py
-
-You need to subclass ``CMakePackage``, as is done in this example.
-This enables advanced features of Spack for helping you in configuring
-your software (keep reading...). Instead of an ``install()`` method
-used when subclassing ``Package``, you write ``configure_args()``.
-See here for more info on how this works:
-
-https://github.com/LLNL/spack/pull/543/files
-
-NOTE: if your software is not publicly available, you do not need to
-set the URL or version. Or you can set up bogus URLs and
-versions... whatever causes Spack to not crash.
-
--------------------
-Using it from Spack
--------------------
-
-Now that you have a Spack package, you can get Spack to setup your
-CMake project for you. Use the following to setup, configure and
-build your project:
-
-.. code-block:: console
-
- $ cd myproject
- $ spack spconfig myproject@local
- $ mkdir build; cd build
- $ ../spconfig.py ..
- $ make
- $ make install
-
-Everything here should look pretty familiar from a CMake
-perspective, except that ``spack spconfig`` creates the file
-``spconfig.py``, which calls CMake with arguments appropriate for your
-Spack configuration. Think of it as the equivalent of running a bunch
-of ``spack location -i`` commands. You will run ``spconfig.py``
-instead of running CMake directly.
-
-If your project is publicly available (eg on GitHub), then you can
-ALSO use this setup to "just install" a release version without going
-through the manual configuration/build step. Just do:
-
-#. Put tag(s) on the version(s) in your GitHub repo you want to be release versions.
-
-#. Set the ``url`` in your ``package.py`` to download a tarball for
- the appropriate version. (GitHub will give you a tarball for any
- version in the repo, if you tickle it the right way). For example:
-
- https://github.com/citibeth/icebin/tarball/v0.1.0
-
- Set up versions as appropriate in your ``package.py``. (Manually
- download the tarball and run ``md5sum`` to determine the
- appropriate checksum for it).
-
-#. Now you should be able to say ``spack install myproject@version``
- and things "just work."
-
-NOTE... in order to use the features outlined in this post, you
-currently need to use the following branch of Spack:
-
-https://github.com/citibeth/spack/tree/efischer/develop
-
-There is a pull request open on this branch (
-https://github.com/LLNL/spack/pull/543 ) and we are working to get it
-integrated into the main ``develop`` branch.
-
-------------------------
-Activating your Software
-------------------------
-
-Once you've built your software, you will want to load it up. You can
-use ``spack load mypackage@local`` for that in your ``.bashrc``, but
-that is slow. Try stuff like the following instead:
-
-The following command will load the Spack-installed packages needed
-for basic Python use of IceBin:
-
-.. code-block:: console
-
- $ module load `spack module find tcl icebin netcdf cmake@3.5.1`
- $ module load `spack module find --dependencies tcl py-basemap py-giss`
-
-
-You can speed up shell startup by turning these into ``module load`` commands.
-
-#. Cut-n-paste the script ``make_spackenv``:
-
- .. code-block:: sh
-
- #!/bin/sh
- #
- # Generate commands to load the Spack environment
-
- SPACKENV=$HOME/spackenv.sh
-
- spack module find --shell tcl git icebin@local ibmisc netcdf cmake@3.5.1 > $SPACKENV
- spack module find --dependencies --shell tcl py-basemap py-giss >> $SPACKENV
-
-#. Add the following to your ``.bashrc`` file:
-
- .. code-block:: sh
-
- source $HOME/spackenv.sh
- # Preferentially use your checked-out Python source
- export PYTHONPATH=$HOME/icebin/pylib:$PYTHONPATH
-
-#. Run ``sh make_spackenv`` whenever your Spack installation changes (including right now).
-
------------
-Giving Back
------------
-
-If your software is publicly available, you should submit the
-``package.py`` for it as a pull request to the main Spack GitHub
-project. This will ensure that anyone can install your software
-(almost) painlessly with a simple ``spack install`` command. See here
-for how that has turned into detailed instructions that have
-successfully enabled collaborators to install complex software:
-
-https://github.com/citibeth/icebin/blob/develop/README.rst
diff --git a/lib/spack/docs/configuration.rst b/lib/spack/docs/configuration.rst
index 6de823c845..f4d3a65653 100644
--- a/lib/spack/docs/configuration.rst
+++ b/lib/spack/docs/configuration.rst
@@ -132,6 +132,65 @@ The ``buildable`` does not need to be paired with external packages.
It could also be used alone to forbid packages that may be
buggy or otherwise undesirable.
+.. _system-packages:
+
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+False Paths for System Packages
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Sometimes, the externally-installed package one wishes to use with
+Spack comes with the Operating System and is installed in a standard
+place --- ``/usr``, for example. Many other packages are there as
+well. If Spack adds it to build paths, then some packages might
+pick up dependencies from ``/usr`` rather than the intended Spack version.
+
+In order to avoid this problem, it is advisable to specify a fake path
+in ``packages.yaml``, thereby preventing Spack from adding the real
+path to compiler command lines. This will work because compilers
+normally search standard system paths, even if they are not on the
+command line. For example:
+
+.. code-block:: yaml
+
+ packages:
+ # Recommended for security reasons
+ # Do not install OpenSSL as non-root user.
+ openssl:
+ paths:
+ openssl@system: /false/path
+ version: [system]
+ buildable: False
+
+
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+Extracting System Packages
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In some cases, using false paths for system packages will not work.
+Some builds need to run binaries out of their dependencies, not just
+access their libraries: the build needs to know the real location of
+the system package.
+
+In this case, one can create a Spack-like single-package tree by
+creating symlinks to the files related to just that package.
+Depending on the OS, it is possible to obtain a list of the files in a
+single OS-installed package. For example, on RedHat / Fedora:
+
+.. code-block:: console
+
+ $ repoquery --list openssl-devel
+ ...
+ /usr/lib/libcrypto.so
+ /usr/lib/libssl.so
+ /usr/lib/pkgconfig/libcrypto.pc
+ /usr/lib/pkgconfig/libssl.pc
+ /usr/lib/pkgconfig/openssl.pc
+ ...
+
+Spack currently does not provide an automated way to create a symlink
+tree to these files.
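One can sketch such a tree by hand, however. For example (the file list here is hard-coded for illustration; on RedHat/Fedora it would come from the ``repoquery --list`` output above, and the destination path is an assumption):

```shell
# Build a single-package tree under $DEST by symlinking each file the
# OS package owns. Links may dangle if the OS file is absent.
DEST="$HOME/spack-external/openssl"
for f in /usr/lib/libcrypto.so /usr/lib/libssl.so; do
    mkdir -p "$DEST$(dirname "$f")"
    ln -sf "$f" "$DEST$f"
done
ls "$DEST/usr/lib"
```

The resulting ``$DEST`` prefix can then be listed in ``packages.yaml`` like any other external package path.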
+
+
.. _concretization-preferences:
--------------------------
@@ -190,27 +249,3 @@ The syntax for the ``provider`` section differs slightly from other
concretization rules. A provider lists a value that packages may
``depend_on`` (e.g, mpi) and a list of rules for fulfilling that
dependency.
-
----------
-Profiling
----------
-
-Spack has some limited built-in support for profiling, and can report
-statistics using standard Python timing tools. To use this feature,
-supply ``-p`` to Spack on the command line, before any subcommands.
-
-.. _spack-p:
-
-^^^^^^^^^^^^^^^^^^^
-``spack --profile``
-^^^^^^^^^^^^^^^^^^^
-
-``spack --profile`` output looks like this:
-
-.. command-output:: spack --profile graph --deptype=nobuild dyninst
- :ellipsis: 25
-
-The bottom of the output shows the most time-consuming functions,
-slowest on top. The profiling support comes from Python's built-in tool,
-`cProfile
-<https://docs.python.org/2/library/profile.html#module-cProfile>`_.
diff --git a/lib/spack/docs/developer_guide.rst b/lib/spack/docs/developer_guide.rst
index 04ae8fe1a1..27f03df57a 100644
--- a/lib/spack/docs/developer_guide.rst
+++ b/lib/spack/docs/developer_guide.rst
@@ -324,3 +324,27 @@ Developer commands
^^^^^^^^^^^^^^
``spack test``
^^^^^^^^^^^^^^
+
+---------
+Profiling
+---------
+
+Spack has some limited built-in support for profiling, and can report
+statistics using standard Python timing tools. To use this feature,
+supply ``-p`` to Spack on the command line, before any subcommands.
+
+.. _spack-p:
+
+^^^^^^^^^^^^^^^^^^^
+``spack --profile``
+^^^^^^^^^^^^^^^^^^^
+
+``spack --profile`` output looks like this:
+
+.. command-output:: spack --profile graph dyninst
+ :ellipsis: 25
+
+The bottom of the output shows the most time-consuming functions,
+slowest on top. The profiling support comes from Python's built-in tool,
+`cProfile
+<https://docs.python.org/2/library/profile.html#module-cProfile>`_.
diff --git a/lib/spack/docs/getting_started.rst b/lib/spack/docs/getting_started.rst
index 676697a549..df057dbb9d 100644
--- a/lib/spack/docs/getting_started.rst
+++ b/lib/spack/docs/getting_started.rst
@@ -1,22 +1,46 @@
+.. _getting_started:
+
===============
Getting Started
===============
---------
-Download
---------
+-------------
+Prerequisites
+-------------
+
+Spack has the following minimum requirements, which must be installed
+before Spack is run:
+
+1. Python 2.6 or 2.7
+2. A C/C++ compiler
+3. The ``git`` and ``curl`` commands.
+
+These requirements can be easily installed on most modern Linux systems;
+on macOS, Xcode is required. Spack is designed to run on HPC
+platforms like Cray and BlueGene/Q. Not all packages should be expected
+to work on all platforms. A build matrix showing which packages are
+working on which systems is planned but not yet available.
+
+------------
+Installation
+------------
-Getting spack is easy. You can clone it from the `github repository
+Getting Spack is easy. You can clone it from the `github repository
<https://github.com/llnl/spack>`_ using this command:
.. code-block:: console
$ git clone https://github.com/llnl/spack.git
-This will create a directory called ``spack``. We'll assume that the
-full path to this directory is in the ``SPACK_ROOT`` environment
-variable. Add ``$SPACK_ROOT/bin`` to your path and you're ready to
-go:
+This will create a directory called ``spack``.
+
+^^^^^^^^^^^^^^^^^^^^^^^^
+Add Spack to the Shell
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+We'll assume that the full path to your downloaded Spack directory is
+in the ``SPACK_ROOT`` environment variable. Add ``$SPACK_ROOT/bin``
+to your path and you're ready to go:
.. code-block:: console
@@ -38,14 +62,46 @@ For a richer experience, use Spack's `shell support
This automatically adds Spack to your ``PATH``.
-------------
-Installation
-------------
+^^^^^^^^^^^^^^^^^
+Clean Environment
+^^^^^^^^^^^^^^^^^
+
+Many packages' installs can be broken by changing environment
+variables. For example, a package might pick up the wrong build-time
+dependencies (most of them not specified) depending on the setting of
+``PATH``. ``GCC`` seems to be particularly vulnerable to these issues.
+
+Therefore, it is recommended that Spack users run with a *clean
+environment*, especially for ``PATH``. Only software that comes with
+the system, or that you know you wish to use with Spack, should be
+included. This procedure will avoid many strange build errors.
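One simple way to approximate a clean environment (an assumption, not a Spack feature) is to launch a subshell that keeps only a minimal set of variables:

```shell
# Start a shell whose environment contains only HOME and a minimal PATH;
# module settings, conda activations, etc. are all dropped.
env -i HOME="$HOME" PATH=/usr/bin:/bin /bin/sh -c 'echo "$PATH"'
# prints: /usr/bin:/bin
```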
+
+
+^^^^^^^^^^^^^^^^^^
+Check Installation
+^^^^^^^^^^^^^^^^^^
+
+With Spack installed, you should be able to run some basic Spack
+commands. For example:
+
+.. code-block:: console
-You don't need to install Spack; it's ready to run as soon as you
-clone it from git.
+ $ spack spec netcdf
+ ...
+ netcdf@4.4.1%gcc@5.3.0~hdf4+mpi arch=linux-SuSE11-x86_64
+ ^curl@7.50.1%gcc@5.3.0 arch=linux-SuSE11-x86_64
+ ^openssl@system%gcc@5.3.0 arch=linux-SuSE11-x86_64
+ ^zlib@1.2.8%gcc@5.3.0 arch=linux-SuSE11-x86_64
+ ^hdf5@1.10.0-patch1%gcc@5.3.0+cxx~debug+fortran+mpi+shared~szip~threadsafe arch=linux-SuSE11-x86_64
+ ^openmpi@1.10.1%gcc@5.3.0~mxm~pmi~psm~psm2~slurm~sqlite3~thread_multiple~tm+verbs+vt arch=linux-SuSE11-x86_64
+ ^m4@1.4.17%gcc@5.3.0+sigsegv arch=linux-SuSE11-x86_64
+ ^libsigsegv@2.10%gcc@5.3.0 arch=linux-SuSE11-x86_64
-You may want to run it out of a prefix other than the git repository
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+Optional: Alternate Prefix
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+You may want to run Spack out of a prefix other than the git repository
you cloned. The ``spack bootstrap`` command provides this
functionality. To install spack in a new directory, simply type:
@@ -57,3 +113,999 @@ This will install a new spack script in ``/my/favorite/prefix/bin``,
which you can use just like you would the regular spack script. Each
copy of spack installs packages into its own ``$PREFIX/opt``
directory.
+
+
+^^^^^^^^^^
+Next Steps
+^^^^^^^^^^
+
+In theory, Spack doesn't need any additional installation; just
+download and run! But in real life, additional steps are usually
+required before Spack can work in a practical sense. Read on...
+
+
+.. _compiler-config:
+
+----------------------
+Compiler configuration
+----------------------
+
+Spack has the ability to build packages with multiple compilers and
+compiler versions. Spack searches for compilers on your machine
+automatically the first time it is run. It does this by inspecting
+your ``PATH``.
+
+.. _spack-compilers:
+
+^^^^^^^^^^^^^^^^^^^
+``spack compilers``
+^^^^^^^^^^^^^^^^^^^
+
+You can see which compilers spack has found by running ``spack
+compilers`` or ``spack compiler list``:
+
+.. code-block:: console
+
+ $ spack compilers
+ ==> Available compilers
+ -- gcc ---------------------------------------------------------
+ gcc@4.9.0 gcc@4.8.0 gcc@4.7.0 gcc@4.6.2 gcc@4.4.7
+ gcc@4.8.2 gcc@4.7.1 gcc@4.6.3 gcc@4.6.1 gcc@4.1.2
+ -- intel -------------------------------------------------------
+ intel@15.0.0 intel@14.0.0 intel@13.0.0 intel@12.1.0 intel@10.0
+ intel@14.0.3 intel@13.1.1 intel@12.1.5 intel@12.0.4 intel@9.1
+ intel@14.0.2 intel@13.1.0 intel@12.1.3 intel@11.1
+ intel@14.0.1 intel@13.0.1 intel@12.1.2 intel@10.1
+ -- clang -------------------------------------------------------
+ clang@3.4 clang@3.3 clang@3.2 clang@3.1
+ -- pgi ---------------------------------------------------------
+ pgi@14.3-0 pgi@13.2-0 pgi@12.1-0 pgi@10.9-0 pgi@8.0-1
+ pgi@13.10-0 pgi@13.1-1 pgi@11.10-0 pgi@10.2-0 pgi@7.1-3
+ pgi@13.6-0 pgi@12.8-0 pgi@11.1-0 pgi@9.0-4 pgi@7.0-6
+
+Any of these compilers can be used to build Spack packages. More on
+how this is done is in :ref:`sec-specs`.
+
+.. _spack-compiler-add:
+
+^^^^^^^^^^^^^^^^^^^^^^
+``spack compiler add``
+^^^^^^^^^^^^^^^^^^^^^^
+
+An alias for ``spack compiler find``.
+
+.. _spack-compiler-find:
+
+^^^^^^^^^^^^^^^^^^^^^^^
+``spack compiler find``
+^^^^^^^^^^^^^^^^^^^^^^^
+
+If you do not see a compiler in this list, but you want to use it with
+Spack, you can simply run ``spack compiler find`` with the path to
+where the compiler is installed. For example:
+
+.. code-block:: console
+
+ $ spack compiler find /usr/local/tools/ic-13.0.079
+ ==> Added 1 new compiler to /Users/gamblin2/.spack/compilers.yaml
+ intel@13.0.079
+
+Or you can run ``spack compiler find`` with no arguments to force
+auto-detection. This is useful if you do not know where compilers are
+installed, but you know that new compilers have been added to your
+``PATH``. For example, you might load a module, like this:
+
+.. code-block:: console
+
+ $ module load gcc-4.9.0
+ $ spack compiler find
+ ==> Added 1 new compiler to /Users/gamblin2/.spack/compilers.yaml
+ gcc@4.9.0
+
+This loads the environment module for gcc-4.9.0 to add it to
+``PATH``, and then it adds the compiler to Spack.
+
+.. _spack-compiler-info:
+
+^^^^^^^^^^^^^^^^^^^^^^^
+``spack compiler info``
+^^^^^^^^^^^^^^^^^^^^^^^
+
+If you want to see specifics on a particular compiler, you can run
+``spack compiler info`` on it:
+
+.. code-block:: console
+
+ $ spack compiler info intel@15
+ intel@15.0.0:
+ cc = /usr/local/bin/icc-15.0.090
+ cxx = /usr/local/bin/icpc-15.0.090
+ f77 = /usr/local/bin/ifort-15.0.090
+ fc = /usr/local/bin/ifort-15.0.090
+ modules = []
+ operating system = centos6
+
+This shows which C, C++, and Fortran compilers were detected by Spack.
+Notice also that we didn't have to be too specific about the
+version. We just said ``intel@15``, and information about the only
+matching Intel compiler was displayed.
+
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Manual compiler configuration
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If auto-detection fails, you can manually configure a compiler by
+editing your ``~/.spack/compilers.yaml`` file. You can do this by running
+``spack config edit compilers``, which will open the file in your ``$EDITOR``.
+
+Each compiler configuration in the file looks like this:
+
+.. code-block:: yaml
+
+   compilers:
+   - compiler:
+       modules: []
+       operating_system: centos6
+       paths:
+         cc: /usr/local/bin/icc-15.0.024-beta
+         cxx: /usr/local/bin/icpc-15.0.024-beta
+         f77: /usr/local/bin/ifort-15.0.024-beta
+         fc: /usr/local/bin/ifort-15.0.024-beta
+       spec: intel@15.0.0
+
+For compilers, like ``clang``, that do not support Fortran, put
+``None`` for ``f77`` and ``fc``:
+
+.. code-block:: yaml
+
+   paths:
+     cc: /usr/bin/clang
+     cxx: /usr/bin/clang++
+     f77: None
+     fc: None
+   spec: clang@3.3svn
+
+Once you save the file, the configured compilers will show up in the
+list displayed by ``spack compilers``.
+
+You can also add compiler flags to manually configured compilers. The
+valid flags are ``cflags``, ``cxxflags``, ``fflags``, ``cppflags``,
+``ldflags``, and ``ldlibs``. For example:
+
+.. code-block:: yaml
+
+   compilers:
+   - compiler:
+       modules: []
+       operating_system: OS
+       paths:
+         cc: /usr/local/bin/icc-15.0.024-beta
+         cxx: /usr/local/bin/icpc-15.0.024-beta
+         f77: /usr/local/bin/ifort-15.0.024-beta
+         fc: /usr/local/bin/ifort-15.0.024-beta
+       flags:
+         cppflags: -O3 -fPIC
+       spec: intel@15.0.0
+
+These flags will be treated by spack as if they were entered from
+the command line each time this compiler is used. The compiler wrappers
+then inject those flags into the compiler command. Compiler flags
+entered from the command line will be discussed in more detail in the
+following section.
+
+^^^^^^^^^^^^^^^^^^^^^^^
+Build Your Own Compiler
+^^^^^^^^^^^^^^^^^^^^^^^
+
+If you are particular about which compiler/version you use, you might
+wish to have Spack build it for you. For example:
+
+.. code-block:: console
+
+ $ spack install gcc@4.9.3
+
+Once that has finished, you will need to add it to your
+``compilers.yaml`` file. You can then set Spack to use it by default
+by adding the following to your ``packages.yaml`` file:
+
+.. code-block:: yaml
+
+ packages:
+ all:
+ compiler: [gcc@4.9.3]
+
+
+.. note::
+
+ If you are building your own compiler, some users prefer to have a
+ Spack instance just for that. For example, create a new Spack in
+ ``~/spack-tools`` and then run ``~/spack-tools/bin/spack install
+ gcc@4.9.3``. Once the compiler is built, don't build anything
+ more in that Spack instance; instead, create a new "real" Spack
+ instance, configure Spack to use the compiler you've just built,
+ and then build your application software in the new Spack
+ instance. Following this tip makes it easy to delete all your
+ Spack packages *except* the compiler.
+
+
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Compilers Requiring Modules
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Many installed compilers will work regardless of the environment they
+are called with. However, some installed compilers require
+``$LD_LIBRARY_PATH`` or other environment variables to be set in order
+to run; this is typical for Intel and other proprietary compilers.
+
+In such a case, you should tell Spack which module(s) to load in order
+to run the chosen compiler (if the compiler does not come with a
+module file, you might consider making one by hand). Spack will load
+this module into the environment ONLY when the compiler is run, and
+NOT in general for a package's ``install()`` method. See, for
+example, this ``compilers.yaml`` file:
+
+.. code-block:: yaml
+
+ compilers:
+ - compiler:
+ modules: [other/comp/gcc-5.3-sp3]
+ operating_system: SuSE11
+ paths:
+ cc: /usr/local/other/SLES11.3/gcc/5.3.0/bin/gcc
+ cxx: /usr/local/other/SLES11.3/gcc/5.3.0/bin/g++
+ f77: /usr/local/other/SLES11.3/gcc/5.3.0/bin/gfortran
+ fc: /usr/local/other/SLES11.3/gcc/5.3.0/bin/gfortran
+ spec: gcc@5.3.0
+
+Some compilers require special environment settings to be loaded not just
+to run, but also to execute the code they build, breaking packages that
+need to execute code they just compiled. If it's not possible or
+practical to use a better compiler, you'll need to ensure that
+environment settings are preserved for compilers like this (i.e., you'll
+need to load the module or source the compiler's shell script).
+
+By default, Spack tries to ensure that builds are reproducible by
+cleaning the environment before building. If this interferes with your
+compiler settings, you CAN use ``spack install --dirty`` as a workaround.
+Note that this MAY interfere with package builds.
+
+.. _licensed-compilers:
+
+^^^^^^^^^^^^^^^^^^
+Licensed Compilers
+^^^^^^^^^^^^^^^^^^
+
+Some proprietary compilers require a license to use. If you need to
+use a licensed compiler (e.g., PGI), the process is similar to a mix of
+building your own compiler, plus modules:
+
+#. Create a Spack package (if it doesn't exist already) to install
+ your compiler. Follow instructions on installing :ref:`license`.
+
+#. Once the compiler is installed, you should be able to test it by
+ using Spack to load the module it just created, and running simple
+ builds (e.g., ``cc helloWorld.c; ./a.out``).
+
+#. Add the newly-installed compiler to ``compilers.yaml`` as shown
+ above.
+
+^^^^^^^^^^^^^^^^
+Mixed Toolchains
+^^^^^^^^^^^^^^^^
+
+Modern compilers typically come with related compilers for C, C++ and
+Fortran bundled together. When possible, results are best if the same
+compiler is used for all languages.
+
+In some cases, this is not possible. For example, starting with macOS El
+Capitan (10.11), many packages no longer build with GCC, but XCode
+provides no Fortran compilers. The user is therefore forced to use a
+mixed toolchain: XCode-provided Clang for C/C++ and GNU ``gfortran`` for
+Fortran.
+
+In the simplest case, you can just edit ``compilers.yaml``:
+
+ .. code-block:: yaml
+
+ compilers:
+ darwin-x86_64:
+ clang@7.3.0-apple:
+ cc: /usr/bin/clang
+ cxx: /usr/bin/clang++
+ f77: /path/to/bin/gfortran
+ fc: /path/to/bin/gfortran
+
+.. note::
+
+ If you are building packages that are sensitive to the compiler's
+ name, you may also need to slightly modify a few more files so that
+ Spack uses compiler names the build system will recognize.
+
+ Following are instructions on how to hack together
+ ``clang`` and ``gfortran`` on macOS. A similar approach
+ should work for other mixed toolchain needs.
+
+ Better support for mixed compiler toolchains is planned in forthcoming
+ Spack versions.
+
+ #. Create a symlink inside ``clang`` environment:
+
+ .. code-block:: console
+
+ $ cd $SPACK_ROOT/lib/spack/env/clang
+ $ ln -s ../cc gfortran
+
+
+ #. Patch ``clang`` compiler file:
+
+ .. code-block:: diff
+
+ $ diff --git a/lib/spack/spack/compilers/clang.py b/lib/spack/spack/compilers/clang.py
+ index e406d86..cf8fd01 100644
+ --- a/lib/spack/spack/compilers/clang.py
+ +++ b/lib/spack/spack/compilers/clang.py
+ @@ -35,17 +35,17 @@ class Clang(Compiler):
+ cxx_names = ['clang++']
+
+ # Subclasses use possible names of Fortran 77 compiler
+ - f77_names = []
+ + f77_names = ['gfortran']
+
+ # Subclasses use possible names of Fortran 90 compiler
+ - fc_names = []
+ + fc_names = ['gfortran']
+
+ # Named wrapper links within spack.build_env_path
+ link_paths = { 'cc' : 'clang/clang',
+ 'cxx' : 'clang/clang++',
+ # Use default wrappers for fortran, in case provided in compilers.yaml
+ - 'f77' : 'f77',
+ - 'fc' : 'f90' }
+ + 'f77' : 'clang/gfortran',
+ + 'fc' : 'clang/gfortran' }
+
+ @classmethod
+ def default_version(self, comp):
+
+^^^^^^^^^^^^^^^^^^^^^
+Compiler Verification
+^^^^^^^^^^^^^^^^^^^^^
+
+You can verify that your compilers are configured properly by installing a
+simple package. For example:
+
+.. code-block:: console
+
+ $ spack install zlib%gcc@5.3.0
+
+--------------------------------------
+Vendor-Specific Compiler Configuration
+--------------------------------------
+
+With Spack, things usually "just work" with GCC. Not so for other
+compilers. This section provides details on how to get specific
+compilers working.
+
+^^^^^^^^^^^^^^^
+Intel Compilers
+^^^^^^^^^^^^^^^
+
+Intel compilers are unusual because a single Intel compiler version
+can emulate multiple GCC versions. In order to provide this
+functionality, the Intel compiler needs GCC to be installed.
+Therefore, the following steps are necessary to successfully use Intel
+compilers:
+
+#. Install a version of GCC that implements the desired language
+ features (``spack install gcc``).
+
+#. Tell the Intel compiler how to find that desired GCC. This may be
+ done in one of two ways (text taken from the `Intel Reference Guide
+ <https://software.intel.com/en-us/node/522750>`_):
+
+ > By default, the compiler determines which version of ``gcc`` or ``g++``
+ > you have installed from the ``PATH`` environment variable.
+ >
+ > If you want to use a version of ``gcc`` or ``g++`` other than the default
+ > version on your system, you need to use either the ``-gcc-name``
+ > or ``-gxx-name`` compiler option to specify the path to the version of
+ > ``gcc`` or ``g++`` that you want to use.
+
+Intel compilers may therefore be configured in one of two ways with
+Spack: using modules, or using compiler flags.
+
+""""""""""""""""""""""""""
+Configuration with Modules
+""""""""""""""""""""""""""
+
+One can control which GCC is seen by the Intel compiler with modules.
+A module must be loaded both for the Intel Compiler (so it will run)
+and GCC (so the compiler can find the intended GCC). The following
+configuration in ``compilers.yaml`` illustrates this technique:
+
+.. code-block:: yaml
+
+   compilers:
+   - compiler:
+       modules: [gcc-4.9.3, intel-15.0.24]
+       operating_system: centos7
+       paths:
+         cc: /opt/intel-15.0.24/bin/icc-15.0.24-beta
+         cxx: /opt/intel-15.0.24/bin/icpc-15.0.24-beta
+         f77: /opt/intel-15.0.24/bin/ifort-15.0.24-beta
+         fc: /opt/intel-15.0.24/bin/ifort-15.0.24-beta
+       spec: intel@15.0.24.4.9.3
+
+
+.. note::
+
+ The version number on the Intel compiler is a combination of
+ the "native" Intel version number and the GNU compiler it is
+ targeting.
+
+""""""""""""""""""""""""""
+Command Line Configuration
+""""""""""""""""""""""""""
+
+.. warning::
+
+ As of the writing of this manual, added compilers flags are broken;
+ see `GitHub Issue <https://github.com/LLNL/spack/pull/1532>`_.
+
+One can also control which GCC is seen by the Intel compiler by adding
+flags to the ``icc`` command:
+
+#. Identify the location of the compiler you just installed:
+
+ .. code-block:: console
+
+ $ spack location -i gcc
+ /home2/rpfische/spack2/opt/spack/linux-centos7-x86_64/gcc-4.9.3-iy4rw...
+
+#. Set up ``compilers.yaml``, for example:
+
+ .. code-block:: yaml
+
+   compilers:
+   - compiler:
+       modules: [intel-15.0.24]
+       operating_system: centos7
+       paths:
+         cc: /opt/intel-15.0.24/bin/icc-15.0.24-beta
+         cxx: /opt/intel-15.0.24/bin/icpc-15.0.24-beta
+         f77: /opt/intel-15.0.24/bin/ifort-15.0.24-beta
+         fc: /opt/intel-15.0.24/bin/ifort-15.0.24-beta
+       flags:
+         cflags: -gcc-name /home2/rpfische/spack2/opt/spack/linux-centos7-x86_64/gcc-4.9.3-iy4rw.../bin/gcc
+         cxxflags: -gxx-name /home2/rpfische/spack2/opt/spack/linux-centos7-x86_64/gcc-4.9.3-iy4rw.../bin/g++
+         fflags: -gcc-name /home2/rpfische/spack2/opt/spack/linux-centos7-x86_64/gcc-4.9.3-iy4rw.../bin/gcc
+       spec: intel@15.0.24.4.9.3
+
+
+^^^
+PGI
+^^^
+
+PGI comes with two sets of compilers for C++ and Fortran,
+distinguishable by their names. "Old" compilers:
+
+.. code-block:: yaml
+
+ cc: /soft/pgi/15.10/linux86-64/15.10/bin/pgcc
+ cxx: /soft/pgi/15.10/linux86-64/15.10/bin/pgCC
+ f77: /soft/pgi/15.10/linux86-64/15.10/bin/pgf77
+ fc: /soft/pgi/15.10/linux86-64/15.10/bin/pgf90
+
+"New" compilers:
+
+.. code-block:: yaml
+
+ cc: /soft/pgi/15.10/linux86-64/15.10/bin/pgcc
+ cxx: /soft/pgi/15.10/linux86-64/15.10/bin/pgc++
+ f77: /soft/pgi/15.10/linux86-64/15.10/bin/pgfortran
+ fc: /soft/pgi/15.10/linux86-64/15.10/bin/pgfortran
+
+Older installations of PGI contain just the old compilers; newer
+installations contain both the old and the new. The new compilers are
+considered preferable, as some packages
+(e.g., ``hdf4``) will not build with the old ones.
+
+When auto-detecting a PGI compiler, there are cases where Spack will
+find the old compilers when you really want it to find the new ones.
+It is best to check ``compilers.yaml`` for this; if the old
+compilers are being used, change ``pgf77`` and ``pgf90`` to
+``pgfortran``.
+
+Other issues:
+
+* There are reports that some packages will not build with PGI,
+ including ``libpciaccess`` and ``openssl``. A workaround is to
+ build these packages with another compiler and then use them as
+ dependencies for PGI-built packages. For example:
+
+ .. code-block:: console
+
+ $ spack install openmpi%pgi ^libpciaccess%gcc
+
+
+* PGI requires a license to use; see :ref:`licensed-compilers` for more
+ information on installation.
+
+.. note::
+
+ It is believed the problem with ``hdf4`` is that everything is
+ compiled with the ``F77`` compiler, but at some point some Fortran
+ 90 code slipped in there. So compilers that can handle both FORTRAN
+ 77 and Fortran 90 (``gfortran``, ``pgfortran``, etc) are fine. But
+ compilers specific to one or the other (``pgf77``, ``pgf90``) won't
+ work.
+
+
+^^^
+NAG
+^^^
+
+At this point, the NAG compiler is `known not to
+work <https://github.com/LLNL/spack/issues/590>`_.
+
+
+---------------
+System Packages
+---------------
+
+Once compilers are configured, one needs to determine which
+pre-installed system packages, if any, to use in builds. This is
+configured in the file ``~/.spack/packages.yaml``. For example, to use
+an OpenMPI installed in ``/opt/local``, one would use:
+
+.. code-block:: yaml
+
+ packages:
+ openmpi:
+ paths:
+ openmpi@1.10.1: /opt/local
+ buildable: False
+
+In general, Spack is easier to use and more reliable if it builds all of
+its own dependencies. However, there are two packages for which one
+commonly needs to use system versions:
+
+^^^
+MPI
+^^^
+
+On supercomputers, sysadmins have already built MPI versions that take
+into account the specifics of that computer's hardware. Unless you
+know how they were built and can choose the correct Spack variants,
+you are unlikely to get a working MPI from Spack. Instead, use an
+appropriate pre-installed MPI.
+
+If you choose a pre-installed MPI, you should consider using the
+pre-installed compiler used to build that MPI; see above on
+``compilers.yaml``.
+
+^^^^^^^
+OpenSSL
+^^^^^^^
+
+The ``openssl`` package underlies much of the security in a modern
+OS; an attacker can easily "pwn" any computer on which they can modify SSL.
+Therefore, any ``openssl`` used on a system should be created in a
+"trusted environment" --- for example, that of the OS vendor.
+
+OpenSSL is also updated by the OS vendor from time to time, in
+response to security problems discovered in the wider community. It
+is in everyone's best interest to use any newly updated versions as
+soon as they come out. Modern Linux installations have standard
+procedures for security updates without user involvement.
+
+Spack running at user-level is not a trusted environment, nor do Spack
+users generally keep up-to-date on the latest security holes in SSL. For
+these reasons, a Spack-installed OpenSSL should likely not be trusted.
+
+As long as the system-provided SSL works, you can use it instead. One
+can check whether it works by trying to download from an ``https://``
+URL. For example:
+
+.. code-block:: console
+
+ $ curl -O https://github.com/ImageMagick/ImageMagick/archive/7.0.2-7.tar.gz
+
+The recommended way to tell Spack to use the system-supplied OpenSSL is
+to add the following to ``packages.yaml``. Note that the ``@system``
+"version" means "I don't care what version it is, just use what is
+there." This is reasonable for OpenSSL, which has a stable API.
+
+
+.. code-block:: yaml
+
+ packages:
+ openssl:
+ paths:
+ openssl@system: /false/path
+ version: [system]
+ buildable: False
+
+.. note::
+
+ Even though OpenSSL is located in ``/usr``, we have told Spack to
+ look for it in ``/false/path``. This prevents ``/usr`` from being
+ added to compilation paths and RPATHs, where it could cause
+ unrelated system libraries to be used instead of their Spack
+ equivalents.
+
+ The adding of ``/usr`` to ``RPATH`` in this situation is a known issue
+ and will be fixed in a future release.
+
+
+^^^
+Git
+^^^
+
+Some Spack packages use ``git`` to download, which might not work on
+some computers. For example, the following error was
+encountered on a Macintosh during ``spack install julia-master``:
+
+.. code-block:: console
+
+ ==> Trying to clone git repository:
+ https://github.com/JuliaLang/julia.git
+ on branch master
+ Cloning into 'julia'...
+ fatal: unable to access 'https://github.com/JuliaLang/julia.git/':
+ SSL certificate problem: unable to get local issuer certificate
+
+This problem is related to OpenSSL, and in some cases might be solved
+by installing a new version of ``git`` and ``openssl``:
+
+#. Run ``spack install git``
+#. Add the output of ``spack module loads git`` to your ``.bashrc``.
+
+If this doesn't work, it is also possible to disable checking of SSL
+certificates by using either of:
+
+.. code-block:: console
+
+ $ spack -k install
+ $ spack --insecure install
+
+Using ``-k/--insecure`` makes Spack disable SSL checking when fetching
+from websites and from git.
+
+.. warning::
+
+ This workaround should be used ONLY as a last resort! Without SSL
+ certificate verification, Spack and git will download from sites you
+ wouldn't normally trust. The code you download and run may then be
+ compromised! While this is not a major issue for archives that will
+ be checksummed, it is especially problematic when downloading from
+ named Git branches or tags, which rely entirely on trusting a
+ certificate for security (no verification).
+
+-----------------------
+Utilities Configuration
+-----------------------
+
+Although Spack does not need installation *per se*, it does rely on
+other packages to be available on its host system. If those packages
+are out of date or missing, then Spack will not work. Sometimes, an
+appeal to the system's package manager can fix such problems. If not,
+the solution is to have Spack install the required packages, and then
+have Spack use them.
+
+For example, if ``curl`` doesn't work, one could use the following steps
+to provide Spack a working ``curl``:
+
+.. code-block:: console
+
+ $ spack install curl
+ $ spack load curl
+
+or alternately:
+
+.. code-block:: console
+
+ $ spack module loads curl >>~/.bashrc
+
+or if environment modules don't work:
+
+.. code-block:: console
+
+ $ export PATH=`spack location -i curl`/bin:$PATH
+
+
+External commands are used by Spack in two places: within core Spack,
+and in the package recipes. The bootstrapping procedure for these two
+cases is somewhat different, and is treated separately below.
+
+^^^^^^^^^^^^^^^^^^^^
+Core Spack Utilities
+^^^^^^^^^^^^^^^^^^^^
+
+Core Spack uses the following packages, mainly to download and unpack
+source code, and to load generated environment modules: ``curl``,
+``env``, ``git``, ``go``, ``hg``, ``svn``, ``tar``, ``unzip``,
+``patch``, ``environment-modules``.
+
+As long as the user's environment is set up to successfully run these
+programs from outside of Spack, they should work inside of Spack as
+well. They can generally be activated as in the ``curl`` example above;
+or some systems might already have an appropriate hand-built
+environment module that may be loaded. Either way works.
+
+A few notes on specific programs in this list:
+
+""""""""""""""""""""""""""
+cURL, git, Mercurial, etc.
+""""""""""""""""""""""""""
+
+Spack depends on cURL to download tarballs, the format that most
+Spack-installed packages come in. Your system's cURL should always be
+able to download unencrypted ``http://``. However, the cURL on some
+systems has problems with SSL-enabled ``https://`` URLs, due to
+outdated / insecure versions of OpenSSL on those systems. This will
+prevent Spack from installing any software requiring ``https://``
+until a new cURL has been installed, using the technique above.
+
+.. warning::
+
+ Remember that if you install ``curl`` via Spack, it may rely on a
+ user-space OpenSSL that is not upgraded regularly. It may fall out of
+ date faster than your system OpenSSL.
+
+Some packages use source code control systems as their download method:
+``git``, ``hg``, ``svn`` and occasionally ``go``. If you had to install
+a new ``curl``, then chances are the system-supplied version of these
+other programs will also not work, because they also rely on OpenSSL.
+Once ``curl`` has been installed, you can similarly install the others.
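+
+For example (package names may vary by Spack version; ``mercurial``
+is the usual package providing ``hg``):
+
+.. code-block:: console
+
+   $ spack install git
+   $ spack install mercurial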
+
+
+.. _InstallEnvironmentModules:
+
+"""""""""""""""""""
+Environment Modules
+"""""""""""""""""""
+
+In order to use Spack's generated environment modules, you must have
+installed one of *Environment Modules* or *Lmod*. On many Linux
+distributions, this can be installed from the vendor's repository. For
+example: ``yum install environment-modules`` (Fedora/RHEL/CentOS). If
+your Linux distribution does not have Environment Modules, you can get it
+with Spack:
+
+#. Consider using the system Tcl (as long as your system has Tcl version 8.0 or later):
+
+   #) Identify its location using ``which tclsh``
+   #) Identify its version using ``echo 'puts $tcl_version;exit 0' | tclsh``
+   #) Add to ``~/.spack/packages.yaml`` and modify as appropriate:
+
+ .. code-block:: yaml
+
+ packages:
+ tcl:
+ paths:
+ tcl@8.5: /usr
+ version: [8.5]
+ buildable: False
+
+#. Install with:
+
+ .. code-block:: console
+
+ $ spack install environment-modules
+
+#. Activate with the following script (or apply the updates to your
+ ``.bashrc`` file manually):
+
+ .. code-block:: sh
+
+ TMP=`tempfile`
+ echo >$TMP
+ MODULE_HOME=`spack location -i environment-modules`
+ MODULE_VERSION=`ls -1 $MODULE_HOME/Modules | head -1`
+ ${MODULE_HOME}/Modules/${MODULE_VERSION}/bin/add.modules <$TMP
+ cp .bashrc $TMP
+ echo "MODULE_VERSION=${MODULE_VERSION}" > .bashrc
+ cat $TMP >>.bashrc
+
+This adds to your ``.bashrc`` (or similar) files, enabling Environment
+Modules when you log in. Re-load your ``.bashrc`` (or log out and in
+again), and then test that the ``module`` command is found with:
+
+.. code-block:: console
+
+ $ module avail
+
+
+^^^^^^^^^^^^^^^^^
+Package Utilities
+^^^^^^^^^^^^^^^^^
+
+Spack may also encounter bootstrapping problems inside a package's
+``install()`` method. In this case, Spack will normally be running
+inside a *sanitized build environment*. This includes all of the
+package's dependencies, but none of the environment Spack inherited
+from the user: if you load a module or modify ``$PATH`` before
+launching Spack, it will have no effect.
+
+In this case, you will likely need to use the ``--dirty`` flag when
+running ``spack install``, causing Spack to **not** sanitize the build
+environment. You are now responsible for making sure that environment
+does not do strange things to Spack or its installs.
+
+Another way to get Spack to use its own version of something is to add
+that something to a package that needs it. For example:
+
+.. code-block:: python
+
+ depends_on('binutils', type='build')
+
+This is considered best practice for some common build dependencies,
+such as ``autotools`` (if the ``autoreconf`` command is needed) and
+``cmake`` --- ``cmake`` especially, because different packages require
+different versions of CMake.
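+
+For example (the package names and version bound here are
+illustrative), a package whose build runs ``autoreconf`` and needs a
+recent CMake might declare:
+
+.. code-block:: python
+
+   depends_on('autoconf', type='build')
+   depends_on('cmake@3.0:', type='build')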
+
+""""""""
+binutils
+""""""""
+
+.. https://groups.google.com/forum/#!topic/spack/i_7l_kEEveI
+
+Sometimes, strange error messages can happen while building a package.
+For example, ``ld`` might crash. Or one receives a message like:
+
+.. code-block:: console
+
+ ld: final link failed: Nonrepresentable section on output
+
+
+or:
+
+.. code-block:: console
+
+ ld: .../_fftpackmodule.o: unrecognized relocation (0x2a) in section `.text'
+
+These problems are often caused by an outdated ``binutils`` on your
+system. Unlike CMake or Autotools, adding ``depends_on('binutils')`` to
+every package is not considered a best practice because every package
+written in C/C++/Fortran would need it. A potential workaround is to
+load a recent ``binutils`` into your environment and use the ``--dirty``
+flag.
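+
+For example (the package name here is illustrative, and the module
+name will vary by site):
+
+.. code-block:: console
+
+   $ module load binutils/2.26
+   $ spack install --dirty mypackage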
+
+
+.. _cray-support:
+
+-------------
+Spack on Cray
+-------------
+
+Spack differs slightly when used on a Cray system. The architecture spec
+can differentiate between the front-end and back-end processor and operating system.
+For example, on Edison at NERSC, the back-end target processor
+is "Ivy Bridge", so you can specify to use the back-end this way:
+
+.. code-block:: console
+
+ $ spack install zlib target=ivybridge
+
+You can also use the operating system to build against the back-end:
+
+.. code-block:: console
+
+ $ spack install zlib os=CNL10
+
+Notice that the name includes both the operating system name and the major
+version number concatenated together.
+
+Alternatively, if you want to build something for the front-end,
+you can specify the front-end target processor. The processor for a login node
+on Edison is "Sandy Bridge", so we specify it on the command line like so:
+
+.. code-block:: console
+
+ $ spack install zlib target=sandybridge
+
+And the front-end operating system is:
+
+.. code-block:: console
+
+ $ spack install zlib os=SuSE11
+
+^^^^^^^^^^^^^^^^^^^^^^^
+Cray compiler detection
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Spack can detect compilers using two methods. For the front-end, we treat
+everything the same. The difference lies in back-end compiler detection.
+Back-end compilers are detected via the Tcl ``module avail`` command.
+Once Spack detects a compiler, it writes the appropriate PrgEnv and compiler
+module name to ``compilers.yaml`` and sets the path to each compiler to Cray's
+compiler wrapper names (i.e. ``cc``, ``CC``, ``ftn``). At build time, Spack
+loads the correct PrgEnv and compiler module and calls the appropriate wrapper.
+
+The ``compilers.yaml`` config file will also differ. There is a
+``modules`` section that is filled with the compiler's Programming
+Environment and module name. On other systems, this field is empty (``[]``):
+
+.. code-block:: yaml
+
+ - compiler:
+ modules:
+ - PrgEnv-intel
+ - intel/15.0.109
+
+As mentioned earlier, the compiler paths will look different on a Cray system.
+Since most compilers are invoked using cc, CC and ftn, the paths for each
+compiler are replaced with their respective Cray compiler wrapper names:
+
+.. code-block:: yaml
+
+ paths:
+ cc: cc
+ cxx: CC
+ f77: ftn
+ fc: ftn
+
+These wrapper names are used instead of explicit paths to the compiler
+executables; this allows Spack to call the Cray compiler wrappers at
+build time.
+
+For more on compiler configuration, check out :ref:`compiler-config`.
+
+Spack sets the default Cray link type to dynamic, to better match
+other platforms. Individual packages can enable static linking (which is the
+default outside of Spack on Cray systems) using the ``-static`` flag.
+
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Setting defaults and using Cray modules
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If you want to use default compilers for each PrgEnv and also be able
+to load Cray external modules, you will need to set up a ``packages.yaml``.
+
+Here's an example of an external configuration for cray modules:
+
+.. code-block:: yaml
+
+ packages:
+ mpi:
+ modules:
+ mpich@7.3.1%gcc@5.2.0 arch=cray_xc-haswell-CNL10: cray-mpich
+ mpich@7.3.1%intel@16.0.0.109 arch=cray_xc-haswell-CNL10: cray-mpich
+
+This tells Spack to load the ``cray-mpich`` module into the
+environment whenever a package depends on ``mpi``. You can then use
+whatever environment variables, libraries, etc., are brought into the
+environment via ``module load``.
+
+You can set the default compiler that Spack can use for each compiler type.
+If you want to use the Cray defaults, then set them under ``all:`` in packages.yaml.
+In the compiler field, set the compiler specs in your order of preference.
+Whenever you build with that compiler type, Spack will concretize to that version.
+
+Here is an example of a full ``packages.yaml`` used at NERSC:
+
+.. code-block:: yaml
+
+ packages:
+ mpi:
+ modules:
+ mpich@7.3.1%gcc@5.2.0 arch=cray_xc-CNL10-ivybridge: cray-mpich
+ mpich@7.3.1%intel@16.0.0.109 arch=cray_xc-SuSE11-ivybridge: cray-mpich
+ buildable: False
+ netcdf:
+ modules:
+ netcdf@4.3.3.1%gcc@5.2.0 arch=cray_xc-CNL10-ivybridge: cray-netcdf
+ netcdf@4.3.3.1%intel@16.0.0.109 arch=cray_xc-CNL10-ivybridge: cray-netcdf
+ buildable: False
+ hdf5:
+ modules:
+ hdf5@1.8.14%gcc@5.2.0 arch=cray_xc-CNL10-ivybridge: cray-hdf5
+ hdf5@1.8.14%intel@16.0.0.109 arch=cray_xc-CNL10-ivybridge: cray-hdf5
+ buildable: False
+ all:
+ compiler: [gcc@5.2.0, intel@16.0.0.109]
+
+Here we tell Spack to use gcc 5.2.0 whenever we build with gcc, and
+intel 16.0.0.109 whenever we build with the Intel compilers. We add a
+spec for each compiler type for each of the Cray modules. This ensures
+that for each compiler on our system we can use its external module.
+
+For more on external packages check out the section :ref:`sec-external_packages`.
diff --git a/lib/spack/docs/index.rst b/lib/spack/docs/index.rst
index 45efcf131f..7203ed7eb7 100644
--- a/lib/spack/docs/index.rst
+++ b/lib/spack/docs/index.rst
@@ -37,25 +37,34 @@ package:
If you're new to spack and want to start using it, see :doc:`getting_started`,
or refer to the full manual below.
------------------
-Table of Contents
------------------
.. toctree::
:maxdepth: 2
+ :caption: Tutorials
features
getting_started
basic_usage
- packaging_guide
- mirrors
+ workflows
+
+.. toctree::
+ :maxdepth: 2
+ :caption: Reference Manual
+
configuration
- developer_guide
- case_studies
- command_index
+ mirrors
package_list
+ command_index
+
+.. toctree::
+ :maxdepth: 2
+ :caption: Contributing to Spack
+
+ packaging_guide
+ developer_guide
API Docs <spack>
+
==================
Indices and tables
==================
diff --git a/lib/spack/docs/packaging_guide.rst b/lib/spack/docs/packaging_guide.rst
index 70cd58f6c1..6efb0078f6 100644
--- a/lib/spack/docs/packaging_guide.rst
+++ b/lib/spack/docs/packaging_guide.rst
@@ -373,6 +373,107 @@ some examples:
In general, you won't have to remember this naming convention because
:ref:`spack-create` and :ref:`spack-edit` handle the details for you.
+-----------------
+Trusted Downloads
+-----------------
+
+Spack verifies that the source code it downloads is not corrupted or
+compromised; or at least, that it is the same version the author of
+the Spack package saw when the package was created. If Spack uses a
+download method it can verify, we say the download method is
+*trusted*. Trust is important for *all downloads*: Spack
+has no control over the security of the various sites from which it
+downloads source code, and can never assume that any particular site
+hasn't been compromised.
+
+Trust is established in different ways for different download methods.
+For the most common download method --- a single-file tarball --- the
+tarball is checksummed. Git downloads using ``commit=`` are trusted
+implicitly, as long as a hash is specified.
+
+Spack also provides untrusted download methods: tarball URLs may be
+supplied without a checksum, or Git downloads may specify a branch or
+tag instead of a hash. If the user does not control or trust the
+source of an untrusted download, it is a security risk. Unless otherwise
+specified by the user for special cases, Spack should by default use
+*only* trusted download methods.
+
+Unfortunately, Spack does not currently provide that guarantee. It
+does provide the following mechanisms for safety:
+
+#. By default, Spack will only install a tarball package if it has a
+ checksum and that checksum matches. You can override this with
+ ``spack install --no-checksum``.
+
+#. Numeric versions are almost always tarball downloads, whereas
+ non-numeric versions not named ``develop`` frequently download
+ untrusted branches or tags from a version control system. As long
+ as a package has at least one numeric version, and no non-numeric
+ version named ``develop``, Spack will prefer it over any
+ non-numeric versions.
+
+^^^^^^^^^
+Checksums
+^^^^^^^^^
+
+For tarball downloads, Spack can currently support checksums using the
+MD5, SHA-1, SHA-224, SHA-256, SHA-384, and SHA-512 algorithms. It
+determines the algorithm to use based on the hash length.
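+
+The idea can be sketched as follows (a simplified illustration, not
+Spack's actual implementation):
+
+.. code-block:: python
+
+   # The length of the hex digest determines the checksum algorithm.
+   ALGO_BY_LENGTH = {32: 'md5', 40: 'sha1', 56: 'sha224',
+                     64: 'sha256', 96: 'sha384', 128: 'sha512'}
+
+   def checksum_algorithm(checksum):
+       """Guess the hash algorithm from the length of the hex digest."""
+       return ALGO_BY_LENGTH[len(checksum)]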
+
+-----------------------
+Package Version Numbers
+-----------------------
+
+Most Spack versions are numeric, a tuple of integers; for example,
+``apex@0.1``, ``ferret@6.96`` or ``py-netcdf@1.2.3.1``. Spack knows
+how to compare and sort numeric versions.
+
+Some Spack versions involve slight extensions of numeric syntax; for
+example, ``py-sphinx-rtd-theme@0.1.10a0``. In this case, numbers are
+always considered to be "newer" than letters. This is for consistency
+with `RPM <https://bugzilla.redhat.com/show_bug.cgi?id=50977>`_.
+
+Spack versions may also be arbitrary non-numeric strings; for
+example, ``@develop``, ``@master``, ``@local``.
+The following rules determine the sort order of numeric
+vs. non-numeric versions:
+
+#. The non-numeric version ``@develop`` is considered greatest (newest).
+
+#. Numeric versions are all less than the ``@develop`` version, and are
+ sorted numerically.
+
+#. All other non-numeric versions are less than numeric versions, and
+ are sorted alphabetically.
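+
+Under these rules, a hypothetical set of versions would sort from
+oldest (least) to newest (greatest) as:
+
+.. code-block:: none
+
+   apple < banana < 1.2 < 1.10 < 2.0 < develop
+
+Note that ``1.10`` sorts after ``1.2`` because numeric versions are
+compared component by component, not alphabetically.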
+
+The logic behind this sort order is two-fold:
+
+#. Non-numeric versions are usually used for special cases while
+ developing or debugging a piece of software. Keeping most of them
+ less than numeric versions ensures that Spack chooses numeric
+ versions by default whenever possible.
+
+#. The most-recent development version of a package will usually be
+ newer than any released numeric versions. This allows the
+ ``develop`` version to satisfy dependencies like ``depends_on(abc,
+ when="@x.y.z:")``.
+
+
+^^^^^^^^^^^^^
+Date Versions
+^^^^^^^^^^^^^
+
+If you wish to use dates as versions, it is best to use the format
+``@date-yyyy-mm-dd``. This will ensure they sort in the correct
+order. If you want your date versions to be numeric (assuming they
+don't conflict with other numeric versions), you can use just
+``yyyy.mm.dd``.
+
+Alternately, you might use a hybrid release-version / date scheme.
+For example, ``@1.3.2016.08.31`` would mean the version from the
+``1.3`` branch, as of August 31, 2016.
+
+
-------------------
Adding new versions
-------------------
@@ -459,19 +560,6 @@ it executable, then runs it with some arguments.
installer = Executable(self.stage.archive_file)
installer('--prefix=%s' % prefix, 'arg1', 'arg2', 'etc.')
-^^^^^^^^^
-Checksums
-^^^^^^^^^
-
-Spack uses a checksum to ensure that the downloaded package version is
-not corrupted or compromised. This is especially important when
-fetching from insecure sources, like unencrypted http. By default, a
-package will *not* be installed if it doesn't pass a checksum test
-(though you can override this with ``spack install --no-checksum``).
-
-Spack can currently support checksums using the MD5, SHA-1, SHA-224,
-SHA-256, SHA-384, and SHA-512 algorithms. It determines the algorithm
-to use based on the hash length.
^^^^^^^^^^^^^
``spack md5``
@@ -584,39 +672,6 @@ call to your package with parameters indicating the repository URL and
any branch, tag, or revision to fetch. See below for the parameters
you'll need for each VCS system.
-^^^^^^^^^^^^^^^^^^^^^^^^^
-Repositories and versions
-^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The package author is responsible for coming up with a sensible name
-for each version to be fetched from a repository. For example, if
-you're fetching from a tag like ``v1.0``, you might call that ``1.0``.
-If you're fetching a nameless git commit or an older subversion
-revision, you might give the commit an intuitive name, like ``develop``
-for a development version, or ``some-fancy-new-feature`` if you want
-to be more specific.
-
-In general, it's recommended to fetch tags or particular
-commits/revisions, NOT branches or the repository mainline, as
-branches move forward over time and you aren't guaranteed to get the
-same thing every time you fetch a particular version. Life isn't
-always simple, though, so this is not strictly enforced.
-
-When fetching from from the branch corresponding to the development version
-(often called ``master``, ``trunk``, or ``dev``), it is recommended to
-call this version ``develop``. Spack has special treatment for this version so
-that ``@develop`` will satisfy dependencies like
-``depends_on(abc, when="@x.y.z:")``. In other words, ``@develop`` is
-greater than any other version. The rationale is that certain features or
-options first appear in the development branch. Therefore if a package author
-wants to keep the package on the bleeding edge and provide support for new
-features, it is advised to use ``develop`` for such a version which will
-greatly simplify writing dependencies and version-related conditionals.
-
-In some future release, Spack may support extrapolating repository
-versions as it does for tarball URLs, but currently this is not
-supported.
-
.. _git-fetch:
^^^
@@ -642,8 +697,7 @@ Default branch
...
version('develop', git='https://github.com/example-project/example.git')
- This is not recommended, as the contents of the default branch
- change over time.
+ This download method is untrusted, and is not recommended.
Tags
To fetch from a particular tag, use the ``tag`` parameter along with
@@ -654,6 +708,8 @@ Tags
version('1.0.1', git='https://github.com/example-project/example.git',
tag='v1.0.1')
+ This download method is untrusted, and is not recommended.
+
Branches
To fetch a particular branch, use ``branch`` instead:
@@ -662,8 +718,7 @@ Branches
version('experimental', git='https://github.com/example-project/example.git',
branch='experimental')
- This is not recommended, as the contents of branches change over
- time.
+ This download method is untrusted, and is not recommended.
Commits
Finally, to fetch a particular commit, use ``commit``:
@@ -681,6 +736,9 @@ Commits
version('2014-10-08', git='https://github.com/example-project/example.git',
commit='9d38cd')
+ This download method *is trusted*. It is the recommended way to
+ securely download from a Git repository.
+
It may be useful to provide a saner version for commits like this,
e.g. you might use the date as the version, as done above. Or you
could just use the abbreviated commit hash. It's up to the package
@@ -696,19 +754,24 @@ Submodules
version('1.0.1', git='https://github.com/example-project/example.git',
 tag='v1.0.1', submodules=True)
-^^^^^^^^^^
-Installing
-^^^^^^^^^^
-You can fetch and install any of the versions above as you'd expect,
-by using ``@<version>`` in a spec:
+.. _github-fetch:
-.. code-block:: console
+""""""
+GitHub
+""""""
- $ spack install example@2014-10-08
+If a project is hosted on GitHub, *any* valid Git branch, tag or hash
+may be downloaded as a tarball. This is accomplished simply by
+constructing an appropriate URL. Spack can checksum any package
+downloaded this way, thereby producing a trusted download. For
+example, the following downloads a particular hash, and then applies a
+checksum.
-Git and other VCS versions will show up in the list of versions when
-a user runs ``spack info <package name>``.
+.. code-block:: python
+
+ version('1.9.5.1.1', 'd035e4bc704d136db79b43ab371b27d2',
+ url='https://www.github.com/jswhit/pyproj/tarball/0be612cc9f972e38b50a90c946a9b353e2ab140f')
.. _hg-fetch:
@@ -726,8 +789,7 @@ Default
version('develop', hg='https://jay.grs.rwth-aachen.de/hg/example')
- Note that this is not recommended; try to fetch a particular
- revision instead.
+ This download method is untrusted, and is not recommended.
Revisions
Add ``hg`` and ``revision`` parameters:
@@ -737,6 +799,8 @@ Revisions
version('1.0', hg='https://jay.grs.rwth-aachen.de/hg/example',
revision='v1.0')
+ This download method is untrusted, and is not recommended.
+
Unlike ``git``, which has special parameters for different types of
revisions, you can use ``revision`` for branches, tags, and commits
when you fetch with Mercurial.
@@ -759,7 +823,7 @@ Fetching the head
version('develop', svn='https://outreach.scidac.gov/svn/libmonitor/trunk')
- This is not recommended, as the head will move forward over time.
+ This download method is untrusted, and is not recommended.
Fetching a revision
To fetch a particular revision, add a ``revision`` to the
@@ -770,6 +834,8 @@ Fetching a revision
version('develop', svn='https://outreach.scidac.gov/svn/libmonitor/trunk',
revision=128)
+ This download method is untrusted, and is not recommended.
+
Subversion branches are handled as part of the directory structure, so
you can check out a branch or tag by changing the ``url``.
@@ -1345,31 +1411,34 @@ Additionally, dependencies may be specified for specific use cases:
The dependency types are:
- * **"build"**: The dependency package is made available during the
- package's build. While the package is built, the dependency
- package's install directory will be added to ``PATH``, the
- compiler include and library paths, as well as ``PYTHONPATH``.
- This only applies during this package's build; other packages
- which depend on this one will not know about the dependency
- package. In other words, building another project Y doesn't know
- about this project X's build dependencies.
- * **"link"**: The dependency package is linked against by this
- package, presumably via shared libraries. The dependency package
- will be added to this package's run-time library search path
- ``rpath``.
- * **"run"**: The dependency package is used by this package at run
- time. The dependency package will be added to both ``PATH`` and
- ``PYTHONPATH`` at run time, but not during build time. **"link"**
- and **"run"** are similar in that they both describe a dependency
- that exists when the package is used, but they differ in the
- mechanism: **"link"** is via shared libraries, and **"run"** via
- an explicit search.
-
-If not specified, ``type`` is assumed to be ``("build", "link")``.
-This is the common case for compiled language usage. Also available
-are the aliases ``"alldeps"`` for all dependency types combined, and
-``"nolink"`` (``("build", "run")``) for use by dependencies which are
-not expressed via a linker (e.g., Python or Lua module loading).
+ * **"build"**: the dependency is made available during this package's
+ build. It will be added to ``PATH``, the compiler include paths, and
+ ``PYTHONPATH``. Other packages which depend on this one will not have
+ these modified (building package X doesn't require package Y's build
+ dependencies at build time).
+ * **"link"**: the dependency is linked against by this package,
+ presumably via shared libraries. It will be added to this package's
+ ``rpath``.
+ * **"run"**: the dependency is used by this package at runtime. It will
+ be added to both ``PATH`` and ``PYTHONPATH``.
+
+Additional hybrid dependency types are (note that ``alldeps`` and
+``nolink`` are written without quotes):
+
+ * **<not specified>**: ``type`` is assumed to be ``("build",
+ "link")``. This is the common case for compiled language usage.
+ * **alldeps**: all dependency types combined.
+ * **nolink**: equal to ``("build", "run")``, for use by dependencies
+ that are not expressed via a linker (e.g., Python or Lua module
+ loading).
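+
+For example (package names illustrative; note that ``nolink`` is not
+quoted):
+
+.. code-block:: python
+
+   depends_on('cmake', type='build')
+   depends_on('py-six', type=nolink)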
+
+"""""""""""""""""""
+Dependency Formulas
+"""""""""""""""""""
+
+This section shows how to write appropriate ``depends_on()``
+declarations for some common cases.
+
+* Python 2 only: ``depends_on('python@:2.8')``
+* Python 2.7 only: ``depends_on('python@2.7:2.8')``
+* Python 3 only: ``depends_on('python@3:')``
.. _setup-dependent-environment:
@@ -1458,6 +1527,17 @@ Now, the ``py-numpy`` package can be used as an argument to ``spack
activate``. When it is activated, all the files in its prefix will be
symbolically linked into the prefix of the python package.
+Some packages produce a Python extension, but are only compatible with
+Python 3, or with Python 2. In those cases, a ``depends_on()``
+declaration should be made in addition to the ``extends()``
+declaration:
+
+.. code-block:: python
+
+ class Icebin(Package):
+ extends('python', when='+python')
+ depends_on('python@3:', when='+python')
+
Many packages produce Python extensions for *some* variants, but not
others: they should extend ``python`` only if the appropriate
variant(s) are selected. This may be accomplished with conditional
@@ -1817,6 +1897,46 @@ See the :ref:`concretization-preferences` section for more details.
.. _install-method:
+------------------
+Inconsistent Specs
+------------------
+
+Suppose a user needs to install package C, which depends on packages A
+and B. Package A builds a library with a Python2 extension, and
+package B builds a library with a Python3 extension. Packages A and B
+cannot be loaded together in the same Python runtime:
+
+.. code-block:: python
+
+   class A(Package):
+       variant('python', default=True,
+               description='enable python bindings')
+       depends_on('python@2.7', when='+python')
+
+       def install(self, spec, prefix):
+           # do whatever is necessary to enable/disable python
+           # bindings according to variant
+           pass
+
+   class B(Package):
+       variant('python', default=True,
+               description='enable python bindings')
+       depends_on('python@3.2:', when='+python')
+
+       def install(self, spec, prefix):
+           # do whatever is necessary to enable/disable python
+           # bindings according to variant
+           pass
+
+Package C needs to use the libraries from packages A and B, but does
+not need either of the Python extensions. In this case, package C
+should simply depend on the ``~python`` variant of A and B:
+
+.. code-block:: python
+
+ class C(Package):
+ depends_on('A~python')
+ depends_on('B~python')
+
+If the user wishes to use the Python extensions A and B provide, this
+may require that each be built twice: once for ``+python`` and once
+for ``~python``. Other than using a little extra disk space, this
+solution has no serious problems.
+
-----------------------------------
Implementing the ``install`` method
-----------------------------------
@@ -3027,15 +3147,15 @@ might write:
CXXFLAGS += -I$DWARF_PREFIX/include
CXXFLAGS += -L$DWARF_PREFIX/lib
-----------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Build System Configuration Support
-----------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Imagine a developer creating a CMake or Autotools-based project in a local
-directory, which depends on libraries A-Z. Once Spack has installed
-those dependencies, one would like to run ``cmake`` with appropriate
-command line and environment so CMake can find them. The ``spack
-setup`` command does this conveniently, producing a CMake
+Imagine a developer creating a CMake or Autotools-based project in a
+local directory, which depends on libraries A-Z. Once Spack has
+installed those dependencies, one would like to run ``cmake`` with
+appropriate command line and environment so CMake can find them. The
+``spack setup`` command does this conveniently, producing a CMake
configuration that is essentially the same as how Spack *would have*
configured the project. This can be demonstrated with a usage
example:
diff --git a/lib/spack/docs/workflows.rst b/lib/spack/docs/workflows.rst
new file mode 100644
index 0000000000..b879fed124
--- /dev/null
+++ b/lib/spack/docs/workflows.rst
@@ -0,0 +1,1208 @@
+=========
+Workflows
+=========
+
+The process of using Spack involves building packages, running
+binaries from those packages, and developing software that depends on
+those packages. For example, one might use Spack to build the
+``netcdf`` package, use ``spack load`` to run the ``ncdump`` binary, and
+finally, write a small C program to read/write a particular NetCDF file.
+
+Spack supports a variety of workflows to suit a variety of situations
+and user preferences; there is no single way to do all these things.
+This chapter demonstrates different workflows that have been
+developed, pointing out their pros and cons.
+
+-----------
+Definitions
+-----------
+
+First some basic definitions.
+
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Package, Concrete Spec, Installed Package
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In Spack, a package is an abstract recipe to build one piece of software.
+Spack packages may be used to build, in principle, any version of that
+software with any set of variants. Examples of packages include
+``curl`` and ``zlib``.
+
+A package may be *instantiated* to produce a concrete spec: one
+possible realization of a particular package, out of combinatorially
+many other realizations. For example, here is a concrete spec
+instantiated from ``curl``:
+
+.. code-block:: console
+
+ curl@7.50.1%gcc@5.3.0 arch=linux-SuSE11-x86_64
+ ^openssl@system%gcc@5.3.0 arch=linux-SuSE11-x86_64
+ ^zlib@1.2.8%gcc@5.3.0 arch=linux-SuSE11-x86_64
+
+Spack's core concretization algorithm generates concrete specs by
+instantiating packages from its repo, based on a set of "hints",
+including user input and the ``packages.yaml`` file. This algorithm
+may be accessed at any time with the ``spack spec`` command. For
+example:
+
+.. code-block:: console
+
+ $ spack spec curl
+ curl@7.50.1%gcc@5.3.0 arch=linux-SuSE11-x86_64
+ ^openssl@system%gcc@5.3.0 arch=linux-SuSE11-x86_64
+ ^zlib@1.2.8%gcc@5.3.0 arch=linux-SuSE11-x86_64
+
+Every time Spack installs a package, that installation corresponds to
+a concrete spec. Only a vanishingly small fraction of possible
+concrete specs will be installed at any one Spack site.
+
+^^^^^^^^^^^^^^^
+Consistent Sets
+^^^^^^^^^^^^^^^
+
+A set of Spack specs is said to be *consistent* if each package is
+only instantiated one way within it --- that is, if two specs in the
+set have the same package, then they must also have the same version,
+variant, compiler, etc. For example, the following set is consistent:
+
+.. code-block:: console
+
+ curl@7.50.1%gcc@5.3.0 arch=linux-SuSE11-x86_64
+ ^openssl@system%gcc@5.3.0 arch=linux-SuSE11-x86_64
+ ^zlib@1.2.8%gcc@5.3.0 arch=linux-SuSE11-x86_64
+ zlib@1.2.8%gcc@5.3.0 arch=linux-SuSE11-x86_64
+
+The following set is not consistent:
+
+.. code-block:: console
+
+ curl@7.50.1%gcc@5.3.0 arch=linux-SuSE11-x86_64
+ ^openssl@system%gcc@5.3.0 arch=linux-SuSE11-x86_64
+ ^zlib@1.2.8%gcc@5.3.0 arch=linux-SuSE11-x86_64
+ zlib@1.2.7%gcc@5.3.0 arch=linux-SuSE11-x86_64
+
+The consistency of a set of installed packages determines what may
+be done with it. It is always possible to ``spack load`` any set of
+installed packages, whether or not they are consistent, and run their
+binaries from the command line. However, a set of installed packages
+can only be linked together in one binary if it is consistent.
+
+If the user produces a series of ``spack spec`` or ``spack load``
+commands, in general there is no guarantee of consistency between
+them. Spack's concretization procedure guarantees that the results of
+any *single* ``spack spec`` call will be consistent. Therefore, the
+best way to ensure a consistent set of specs is to create a Spack
+package with dependencies, and then instantiate that package. We will
+use this technique below.
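The consistency condition above can be checked mechanically. Here is a sketch using simplified ``(name, version, compiler)`` tuples in place of real Spack specs:

```python
# A spec set is consistent if each package name appears with only one
# configuration. Specs are simplified here to (name, version, compiler)
# tuples rather than real Spack specs.
def is_consistent(specs):
    seen = {}
    for name, *config in specs:
        if name in seen and seen[name] != config:
            return False
        seen[name] = config
    return True

consistent = [
    ('curl', '7.50.1', 'gcc@5.3.0'),
    ('zlib', '1.2.8', 'gcc@5.3.0'),
    ('zlib', '1.2.8', 'gcc@5.3.0'),   # duplicate, but identical
]
print(is_consistent(consistent))      # -> True

# Adding a second zlib configuration breaks consistency:
print(is_consistent(consistent + [('zlib', '1.2.7', 'gcc@5.3.0')]))  # -> False
```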
+
+-----------------
+Building Packages
+-----------------
+
+Suppose you are tasked with installing a set of software packages on a
+system in order to support one application -- both a core application
+program, plus software to prepare input and analyze output. The
+required software might be summed up as a series of ``spack install``
+commands placed in a script. If needed, this script can always be run
+again in the future. For example:
+
+.. code-block:: sh
+
+ #!/bin/sh
+ spack install modele-utils
+ spack install emacs
+ spack install ncview
+ spack install nco
+ spack install modele-control
+ spack install py-numpy
+
+In most cases, this script will not correctly install software
+according to your specific needs: choices may need to be made for
+versions, variants, and virtual dependencies. It
+*is* possible to specify these choices by extending specs on the
+command line; however, the same choices must be specified repeatedly.
+For example, if you wish to use ``openmpi`` to satisfy the ``mpi``
+dependency, then ``^openmpi`` will have to appear on *every* ``spack
+install`` line that uses MPI. It can get repetitive fast.
+
+Customizing Spack installation options is easier to do in the
+``~/.spack/packages.yaml`` file. In this file, you can specify
+preferred versions and variants to use for packages. For example:
+
+.. code-block:: yaml
+
+ packages:
+ python:
+ version: [3.5.1]
+ modele-utils:
+ version: [cmake]
+
+ everytrace:
+ version: [develop]
+ eigen:
+ variants: ~suitesparse
+ netcdf:
+ variants: +mpi
+
+ all:
+ compiler: [gcc@5.3.0]
+ providers:
+ mpi: [openmpi]
+ blas: [openblas]
+ lapack: [openblas]
+
+
+This approach will work as long as you are building packages for just
+one application.
+
+^^^^^^^^^^^^^^^^^^^^^
+Multiple Applications
+^^^^^^^^^^^^^^^^^^^^^
+
+Suppose instead you're building multiple inconsistent applications.
+For example, users want package A to be built with ``openmpi`` and
+package B with ``mpich`` --- but still share many other lower-level
+dependencies. In this case, a single ``packages.yaml`` file will not
+work. Plans are to implement *per-project* ``packages.yaml`` files.
+In the meantime, one could write shell scripts to switch
+``packages.yaml`` between multiple versions as needed, using symlinks.
+
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Combinatorial Sets of Installs
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Suppose that you are now tasked with systematically building many
+incompatible versions of packages. For example, you need to build
+``petsc`` 9 times for 3 different MPI implementations on 3 different
+compilers, in order to support user needs. In this case, you will
+need to either create 9 different ``packages.yaml`` files, or more
+likely create 9 different ``spack install`` command lines with the
+correct options in the spec. Here is a real-life example of this kind
+of usage:
+
+.. code-block:: sh
+
+   #!/bin/bash
+ #
+
+ compilers=(
+ %gcc
+ %intel
+ %pgi
+ )
+
+ mpis=(
+ openmpi+psm~verbs
+ openmpi~psm+verbs
+ mvapich2+psm~mrail
+ mvapich2~psm+mrail
+ mpich+verbs
+ )
+
+ for compiler in "${compilers[@]}"
+ do
+ # Serial installs
+ spack install szip $compiler
+ spack install hdf $compiler
+ spack install hdf5 $compiler
+ spack install netcdf $compiler
+ spack install netcdf-fortran $compiler
+ spack install ncview $compiler
+
+ # Parallel installs
+ for mpi in "${mpis[@]}"
+ do
+ spack install $mpi $compiler
+ spack install hdf5~cxx+mpi $compiler ^$mpi
+ spack install parallel-netcdf $compiler ^$mpi
+ done
+ done
+
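The nested loops above amount to a cartesian product over compilers and MPI implementations; the same command lines can be generated programmatically. A sketch, reusing the names from the script above:

```python
# Generate the parallel-install command lines from the script above
# as the cartesian product of compilers and MPI implementations.
from itertools import product

compilers = ['%gcc', '%intel', '%pgi']
mpis = ['openmpi+psm~verbs', 'openmpi~psm+verbs',
        'mvapich2+psm~mrail', 'mvapich2~psm+mrail', 'mpich+verbs']

commands = ['spack install hdf5~cxx+mpi {} ^{}'.format(c, m)
            for c, m in product(compilers, mpis)]

print(len(commands))  # 3 compilers x 5 MPIs -> 15
print(commands[0])    # -> spack install hdf5~cxx+mpi %gcc ^openmpi+psm~verbs
```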
+
+
+
+
+------------------------------
+Running Binaries from Packages
+------------------------------
+
+Once Spack packages have been built, the next step is to use them. As
+with building packages, there are many ways to use them, depending on
+the use case.
+
+^^^^^^^^^^^^
+Find and Run
+^^^^^^^^^^^^
+
+The simplest way to run a Spack binary is to find it and run it!
+In many cases, nothing more is needed because Spack builds binaries
+with RPATHs. Spack installation directories may be found with ``spack
+location -i`` commands. For example:
+
+.. code-block:: console
+
+ $ spack location -i cmake
+ /home/me/spack2/opt/spack/linux-SuSE11-x86_64/gcc-5.3.0/cmake-3.6.0-7cxrynb6esss6jognj23ak55fgxkwtx7
+
+This gives the root of the Spack package; relevant binaries may be
+found within it. For example:
+
+.. code-block:: console
+
+ $ CMAKE=`spack location -i cmake`/bin/cmake
+
+
+Standard UNIX tools can find binaries as well. For example:
+
+.. code-block:: console
+
+ $ find ~/spack2/opt -name cmake | grep bin
+ /home/me/spack2/opt/spack/linux-SuSE11-x86_64/gcc-5.3.0/cmake-3.6.0-7cxrynb6esss6jognj23ak55fgxkwtx7/bin/cmake
+
+These methods are suitable, for example, for setting up build
+processes or GUIs that need to know the location of particular tools.
+However, other more powerful methods are generally preferred for user
+environments.
+
+
+^^^^^^^^^^^^^^^^^^^^^^^
+Spack-Generated Modules
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Suppose that Spack has been used to install a set of command-line
+programs, which users now wish to use. One can in principle put a
+number of ``spack load`` commands into ``.bashrc``, for example, to
+load a set of Spack-generated modules:
+
+.. code-block:: sh
+
+ spack load modele-utils
+ spack load emacs
+ spack load ncview
+ spack load nco
+ spack load modele-control
+
+Although simple load scripts like this are useful in many cases, they
+have some drawbacks:
+
+1. The set of modules loaded by them will in general not be
+ consistent. They are a decent way to load commands to be called
+ from command shells. See below for better ways to assemble a
+ consistent set of packages for building application programs.
+
+2. The ``spack spec`` and ``spack install`` commands use a
+ sophisticated concretization algorithm that chooses the "best"
+   among several options, taking the ``packages.yaml`` file into account.
+ The ``spack load`` and ``spack module loads`` commands, on the
+ other hand, are not very smart: if the user-supplied spec matches
+ more than one installed package, then ``spack module loads`` will
+ fail. This may change in the future. For now, the workaround is to
+ be more specific on any ``spack module loads`` lines that fail.
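The failure mode in point 2 can be pictured as a plain prefix match over installed specs. This is an illustration of the behavior only, not Spack's matching code:

```python
# Why `spack module loads` fails on ambiguous specs: a user-supplied
# spec that matches more than one installed package cannot be
# resolved without running the concretizer.
installed = ['hdf5@1.10.0%gcc@5.3.0+mpi',
             'hdf5@1.10.0%gcc@5.3.0~mpi']

def find_unique(query, installed):
    matches = [s for s in installed if s.startswith(query)]
    if len(matches) != 1:
        raise RuntimeError('%r matches %d installed packages'
                           % (query, len(matches)))
    return matches[0]

# A fully specific query resolves to one package:
print(find_unique('hdf5@1.10.0%gcc@5.3.0+mpi', installed))
# find_unique('hdf5', installed) would raise: it matches both specs
```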
+
+
+""""""""""""""""""""""
+Generated Load Scripts
+""""""""""""""""""""""
+
+Another problem with ``spack load`` is that it is slow; a typical user
+environment could take several seconds to load, and would not be
+appropriate to put into ``.bashrc`` directly. It is preferable to use
+a series of ``spack module loads`` commands to pre-compute which
+modules to load. These can be put in a script that is run whenever
+installed Spack packages change. For example:
+
+.. code-block:: sh
+
+ #!/bin/sh
+ #
+ # Generate module load commands in ~/env/spackenv
+
+ cat <<EOF | /bin/sh >$HOME/env/spackenv
+ FIND='spack module loads --prefix linux-SuSE11-x86_64/'
+
+ \$FIND modele-utils
+ \$FIND emacs
+ \$FIND ncview
+ \$FIND nco
+ \$FIND modele-control
+ EOF
+
+The output of this script is written to ``~/env/spackenv``:
+
+.. code-block:: sh
+
+ # binutils@2.25%gcc@5.3.0+gold~krellpatch~libiberty arch=linux-SuSE11-x86_64
+ module load linux-SuSE11-x86_64/binutils-2.25-gcc-5.3.0-6w5d2t4
+ # python@2.7.12%gcc@5.3.0~tk~ucs4 arch=linux-SuSE11-x86_64
+ module load linux-SuSE11-x86_64/python-2.7.12-gcc-5.3.0-2azoju2
+ # ncview@2.1.7%gcc@5.3.0 arch=linux-SuSE11-x86_64
+ module load linux-SuSE11-x86_64/ncview-2.1.7-gcc-5.3.0-uw3knq2
+ # nco@4.5.5%gcc@5.3.0 arch=linux-SuSE11-x86_64
+ module load linux-SuSE11-x86_64/nco-4.5.5-gcc-5.3.0-7aqmimu
+ # modele-control@develop%gcc@5.3.0 arch=linux-SuSE11-x86_64
+ module load linux-SuSE11-x86_64/modele-control-develop-gcc-5.3.0-7rddsij
+ # zlib@1.2.8%gcc@5.3.0 arch=linux-SuSE11-x86_64
+ module load linux-SuSE11-x86_64/zlib-1.2.8-gcc-5.3.0-fe5onbi
+ # curl@7.50.1%gcc@5.3.0 arch=linux-SuSE11-x86_64
+ module load linux-SuSE11-x86_64/curl-7.50.1-gcc-5.3.0-4vlev55
+ # hdf5@1.10.0-patch1%gcc@5.3.0+cxx~debug+fortran+mpi+shared~szip~threadsafe arch=linux-SuSE11-x86_64
+ module load linux-SuSE11-x86_64/hdf5-1.10.0-patch1-gcc-5.3.0-pwnsr4w
+ # netcdf@4.4.1%gcc@5.3.0~hdf4+mpi arch=linux-SuSE11-x86_64
+ module load linux-SuSE11-x86_64/netcdf-4.4.1-gcc-5.3.0-rl5canv
+ # netcdf-fortran@4.4.4%gcc@5.3.0 arch=linux-SuSE11-x86_64
+ module load linux-SuSE11-x86_64/netcdf-fortran-4.4.4-gcc-5.3.0-stdk2xq
+ # modele-utils@cmake%gcc@5.3.0+aux+diags+ic arch=linux-SuSE11-x86_64
+ module load linux-SuSE11-x86_64/modele-utils-cmake-gcc-5.3.0-idyjul5
+ # everytrace@develop%gcc@5.3.0+fortran+mpi arch=linux-SuSE11-x86_64
+ module load linux-SuSE11-x86_64/everytrace-develop-gcc-5.3.0-p5wmb25
+
+Users may now put ``source ~/env/spackenv`` into ``.bashrc``.
+
+.. note::
+
+ Some module systems put a prefix on the names of modules created
+ by Spack. For example, that prefix is ``linux-SuSE11-x86_64/`` in
+ the above case. If a prefix is not needed, you may omit the
+ ``--prefix`` flag from ``spack module loads``.
+
+
+"""""""""""""""""""""""
+Transitive Dependencies
+"""""""""""""""""""""""
+
+In the script above, each ``spack module loads`` command generates a
+*single* ``module load`` line. Transitive dependencies do not usually
+need to be loaded; only the modules the user needs in ``$PATH``. This is
+because Spack builds binaries with RPATH. Spack's RPATH policy has
+some nice features:
+
+#. Modules for multiple inconsistent applications may be loaded
+ simultaneously. In the above example (Multiple Applications),
+   package A and package B can coexist in the user's ``$PATH``,
+ even though they use different MPIs.
+
+#. RPATH eliminates a whole class of strange errors that can happen
+ in non-RPATH binaries when the wrong ``LD_LIBRARY_PATH`` is
+ loaded.
+
+#. Recursive module systems such as LMod are not necessary.
+
+#. Modules are not needed at all to execute binaries. If a path to a
+ binary is known, it may be executed. For example, the path for a
+ Spack-built compiler can be given to an IDE without requiring the
+ IDE to load that compiler's module.
+
+Unfortunately, Spack's RPATH support does not work in all cases. For example:
+
+#. Software comes in many forms --- not just compiled ELF binaries,
+ but also as interpreted code in Python, R, JVM bytecode, etc.
+ Those systems almost universally use an environment variable
+ analogous to ``LD_LIBRARY_PATH`` to dynamically load libraries.
+
+#. Although Spack generally builds binaries with RPATH, it does not
+ currently do so for compiled Python extensions (for example,
+ ``py-numpy``). Any libraries that these extensions depend on
+ (``blas`` in this case, for example) must be specified in the
+   ``LD_LIBRARY_PATH``.
+
+#. In some cases, Spack-generated binaries end up without a
+ functional RPATH for no discernible reason.
+
+In cases where RPATH support doesn't make things "just work," it can
+be necessary to load a module's dependencies as well as the module
+itself. This is done by adding the ``--dependencies`` flag to the
+``spack module loads`` command. For example, the following line,
+added to the script above, would be used to load SciPy, along with
+Numpy, core Python, BLAS/LAPACK and anything else needed:
+
+.. code-block:: sh
+
+ spack module loads --dependencies py-scipy
+
+^^^^^^^^^^^^^^^^^^
+Extension Packages
+^^^^^^^^^^^^^^^^^^
+
+:ref:`packaging_extensions` may be used as an alternative to loading
+Python (and similar systems) packages directly. If extensions are
+activated, then ``spack load python`` will also load all the
+extensions activated for the given ``python``. This reduces the need
+for users to load a large number of modules.
+
+However, Spack extensions have two potential drawbacks:
+
+#. Activated packages that involve compiled C extensions may still
+ need their dependencies to be loaded manually. For example,
+ ``spack load openblas`` might be required to make ``py-numpy``
+ work.
+
+#. Extensions "break" a core feature of Spack, which is that multiple
+ versions of a package can co-exist side-by-side. For example,
+ suppose you wish to run a Python package in two different
+ environments but the same basic Python --- one with
+ ``py-numpy@1.7`` and one with ``py-numpy@1.8``. Spack extensions
+ will not support this potential debugging use case.
+
+
+^^^^^^^^^^^^^^
+Dummy Packages
+^^^^^^^^^^^^^^
+
+As an alternative to a series of ``module load`` commands, one might
+consider dummy packages as a way to create a *consistent* set of
+packages that may be loaded as one unit. The idea here is pretty
+simple:
+
+#. Create a package (say, ``mydummy``) with no URL and no
+ ``install()`` method, just dependencies.
+
+#. Run ``spack install mydummy`` to install.
+
+An advantage of this method is that the set of packages produced will
+be consistent, which means you can reliably build software against
+it. A disadvantage is that the set of packages will be consistent;
+that means you cannot load up two applications this way if they are
+not consistent with each other.
+
+^^^^^^^^^^^^^^^^
+Filesystem Views
+^^^^^^^^^^^^^^^^
+
+Filesystem views offer an alternative to environment modules, another
+way to assemble packages in a useful way and load them into a user's
+environment.
+
+A filesystem view is a single directory tree that is the union of the
+directory hierarchies of a number of installed packages; it is similar
+to the directory hierarchy that might exist under ``/usr/local``. The
+files of the view's installed packages are brought into the view by
+symbolic or hard links, referencing the original Spack installation.
+
+When software is built and installed, absolute paths are frequently
+"baked into" the software, making it non-relocatable. This happens
+not just in RPATHs, but also in shebangs, configuration files, and
+assorted other locations.
+
+Therefore, programs run out of a Spack view will typically still look
+in the original Spack-installed location for shared libraries and
+other resources. This behavior is not easily changed; in general,
+there is no way to know where absolute paths might be written into an
+installed package, and how to relocate it. Therefore, the original
+Spack tree must be kept in place for a filesystem view to work, even
+if the view is built with hardlinks.
+
+.. FIXME: reference the relocation work of Hegner and Gartung (PR #1013)
+
+
+""""""""""""""""""""""
+Using Filesystem Views
+""""""""""""""""""""""
+
+A filesystem view is created, and packages are linked in, by the ``spack
+view`` command's ``symlink`` and ``hardlink`` sub-commands. The
+``spack view remove`` command can be used to unlink some or all of the
+filesystem view.
+
+The following example creates a filesystem view based
+on an installed ``cmake`` package and then removes from the view the
+files in the ``cmake`` package while retaining its dependencies.
+
+.. code-block:: console
+
+ $ spack view --verbose symlink myview cmake@3.5.2
+ ==> Linking package: "ncurses"
+ ==> Linking package: "zlib"
+ ==> Linking package: "openssl"
+ ==> Linking package: "cmake"
+
+ $ ls myview/
+ bin doc etc include lib share
+
+ $ ls myview/bin/
+ captoinfo clear cpack ctest infotocap openssl tabs toe tset
+ ccmake cmake c_rehash infocmp ncurses6-config reset tic tput
+
+ $ spack view --verbose --dependencies false rm myview cmake@3.5.2
+ ==> Removing package: "cmake"
+
+ $ ls myview/bin/
+ captoinfo c_rehash infotocap openssl tabs toe tset
+ clear infocmp ncurses6-config reset tic tput
+
+.. note::
+
+ If the set of packages being included in a view is inconsistent,
+ then it is possible that two packages will provide the same file. Any
+ conflicts of this type are handled on a first-come-first-served basis,
+ and a warning is printed.
+
+.. note::
+
+ When packages are removed from a view, empty directories are
+ purged.
+
+""""""""""""""""""
+Fine-Grain Control
+""""""""""""""""""
+
+The ``--exclude`` and ``--dependencies`` option flags allow for
+fine-grained control over which packages and dependencies do or do
+not get included in a view. For example, suppose you are developing the
+``appsy`` package. You wish to build against a view of all ``appsy``
+dependencies, but not ``appsy`` itself:
+
+.. code-block:: console
+
+ $ spack view symlink --dependencies yes --exclude appsy appsy
+
+Alternately, you wish to create a view whose purpose is to provide
+binary executables to end users. You only need to include
+applications they might want, and not those applications'
+dependencies. In this case, you might use:
+
+.. code-block:: console
+
+ $ spack view symlink --dependencies no cmake
+
+
+"""""""""""""""""""""""
+Hybrid Filesystem Views
+"""""""""""""""""""""""
+
+Although filesystem views are usually created by Spack, users are free
+to add to them by other means. For example, imagine a filesystem
+view, created by Spack, that looks something like:
+
+.. code-block:: console
+
+ /path/to/MYVIEW/bin/programA -> /path/to/spack/.../bin/programA
+ /path/to/MYVIEW/lib/libA.so -> /path/to/spack/.../lib/libA.so
+
+Now, the user may add to this view by non-Spack means; for example,
+by running a classic install script:
+
+.. code-block:: console
+
+ $ tar -xf B.tar.gz
+ $ cd B/
+ $ ./configure --prefix=/path/to/MYVIEW \
+ --with-A=/path/to/MYVIEW
+ $ make && make install
+
+The result is a hybrid view:
+
+.. code-block:: console
+
+ /path/to/MYVIEW/bin/programA -> /path/to/spack/.../bin/programA
+ /path/to/MYVIEW/bin/programB
+ /path/to/MYVIEW/lib/libA.so -> /path/to/spack/.../lib/libA.so
+ /path/to/MYVIEW/lib/libB.so
+
+In this case, real files coexist, interleaved with the "view"
+symlinks. At any time one can delete ``/path/to/MYVIEW`` or use
+``spack view`` to manage it surgically. None of this will affect the
+real Spack install area.
+
+
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Discussion: Running Binaries
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Modules, extension packages and filesystem views are all ways to
+assemble sets of Spack packages into a useful environment. They are
+all semantically similar, in that conflicting installed packages
+cannot simultaneously be loaded, activated or included in a view.
+
+With all of these approaches, there is no guarantee that the
+environment created will be consistent. It is possible, for example,
+to simultaneously load application A that uses OpenMPI and application
+B that uses MPICH. Both applications will run just fine in this
+inconsistent environment because they rely on RPATHs, not the
+environment, to find their dependencies.
+
+In general, environments set up using modules vs. views will work
+similarly. Both can be used to set up ephemeral or long-lived
+testing/development environments. Operational differences between the
+two approaches can make one or the other preferable in certain
+environments:
+
+* Filesystem views do not require environment module infrastructure.
+ Although Spack can install ``environment-modules``, users might be
+ hostile to its use. Filesystem views offer a good solution for
+ sysadmins serving users who just "want all the stuff I need in one
+ place" and don't want to hear about Spack.
+
+* Although modern build systems will find dependencies wherever they
+  might be, some applications with hand-built makefiles expect their
+  dependencies to be in one place. One common problem is makefiles
+ that assume that ``netcdf`` and ``netcdf-fortran`` are installed in
+ the same tree. Or, one might use an IDE that requires tedious
+ configuration of dependency paths; and it's easier to automate that
+ administration in a view-building script than in the IDE itself.
+ For all these cases, a view will be preferable to other ways to
+ assemble an environment.
+
+* On systems with I-node quotas, modules might be preferable to views
+ and extension packages.
+
+* Views and activated extensions maintain state that is semantically
+ equivalent to the information in a ``spack module loads`` script.
+ Administrators might find things easier to maintain without the
+ added "heavyweight" state of a view.
+
+==============================
+Developing Software with Spack
+==============================
+
+For any project, one needs to assemble an
+environment of that project's dependencies. You might consider
+loading a series of modules or creating a filesystem view. This
+approach, while obvious, has some serious drawbacks:
+
+1. There is no guarantee that an environment created this way will be
+ consistent. Your application could end up with dependency A
+ expecting one version of MPI, and dependency B expecting another.
+ The linker will not be happy...
+
+2. Suppose you need to debug a package deep within your software DAG.
+ If you build that package with a manual environment, then it
+ becomes difficult to have Spack auto-build things that depend on
+ it. That could be a serious problem, depending on how deep the
+ package in question is in your dependency DAG.
+
+3. At its core, Spack is a sophisticated concretization algorithm that
+ matches up packages with appropriate dependencies and creates a
+ *consistent* environment for the package it's building. Writing a
+ list of ``spack load`` commands for your dependencies is at least
+ as hard as writing the same list of ``depends_on()`` declarations
+ in a Spack package. But it makes no use of Spack concretization
+ and is more error-prone.
+
+4. Spack provides an automated, systematic way not just to find a
+   package's dependencies --- but also to build other packages on
+ top. Any Spack package can become a dependency for another Spack
+ package, offering a powerful vision of software re-use. If you
+ build your package A outside of Spack, then your ability to use it
+ as a building block for other packages in an automated way is
+ diminished: other packages depending on package A will not
+ be able to use Spack to fulfill that dependency.
+
+5. If you are reading this manual, you probably love Spack. You're
+ probably going to write a Spack package for your software so
+ prospective users can install it with the least amount of pain.
+ Why should you go to additional work to find dependencies in your
+ development environment? Shouldn't Spack be able to help you build
+ your software based on the package you've already written?
+
+In this section, we show how Spack can be used in the software
+development process to greatest effect, and how development packages
+can be seamlessly integrated into the Spack ecosystem. We will show
+how this process works by example, assuming the software you are
+creating is called ``mylib``.
+
+
+---------------------
+Write the CMake Build
+---------------------
+
+For now, the techniques in this section only work for CMake-based
+projects, although they could be easily extended to other build
+systems in the future. We will therefore assume you are using CMake
+to build your project.
+
+The ``CMakeLists.txt`` file should be written as normal. A few caveats:
+
+1. Your project should produce binaries with RPATHs. This will ensure
+ that they work the same whether built manually or automatically by
+ Spack. For example:
+
+.. code-block:: cmake
+
+ # enable @rpath in the install name for any shared library being built
+ # note: it is planned that a future version of CMake will enable this by default
+ set(CMAKE_MACOSX_RPATH 1)
+
+ # Always use full RPATH
+ # http://www.cmake.org/Wiki/CMake_RPATH_handling
+ # http://www.kitware.com/blog/home/post/510
+
+ # use, i.e. don't skip the full RPATH for the build tree
+ SET(CMAKE_SKIP_BUILD_RPATH FALSE)
+
+ # when building, don't use the install RPATH already
+ # (but later on when installing)
+ SET(CMAKE_BUILD_WITH_INSTALL_RPATH FALSE)
+
+ # add the automatically determined parts of the RPATH
+ # which point to directories outside the build tree to the install RPATH
+ SET(CMAKE_INSTALL_RPATH_USE_LINK_PATH TRUE)
+
+ # the RPATH to be used when installing, but only if it's not a system directory
+ LIST(FIND CMAKE_PLATFORM_IMPLICIT_LINK_DIRECTORIES "${CMAKE_INSTALL_PREFIX}/lib" isSystemDir)
+ IF("${isSystemDir}" STREQUAL "-1")
+ SET(CMAKE_INSTALL_RPATH "${CMAKE_INSTALL_PREFIX}/lib")
+ ENDIF("${isSystemDir}" STREQUAL "-1")
+
+
+2. Spack provides a CMake variable called
+ ``SPACK_TRANSITIVE_INCLUDE_PATH``, which contains the ``include/``
+ directory for all of your project's transitive dependencies. It
+ can be useful if your project ``#include``s files from package B,
+ which ``#include`` files from package C, but your project only
+ lists project B as a dependency. This works in traditional
+ single-tree build environments, in which B and C's include files
+ live in the same place. In order to make it work with Spack as
+ well, you must add the following to ``CMakeLists.txt``. It will
+ have no effect when building without Spack:
+
+ .. code-block:: cmake
+
+ # Include all the transitive dependencies determined by Spack.
+ # If we're not running with Spack, this does nothing...
+ include_directories($ENV{SPACK_TRANSITIVE_INCLUDE_PATH})
+
+ .. note::
+
+ This feature is controversial and could break with future
+ versions of GNU ld. The best practice is to make sure anything
+ you ``#include`` is listed as a dependency in your
+ ``CMakeLists.txt`` (and Spack package).
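
The idea behind ``SPACK_TRANSITIVE_INCLUDE_PATH`` can be sketched in a
few lines of Python. The dependency graph and ``include_of`` mapping
below are hypothetical illustrations, not Spack's real data structures:

```python
# Sketch: how a transitive include path could be assembled.
# 'deps' maps each package to its direct dependencies (hypothetical data).
deps = {
    'myproject': ['B'],
    'B': ['C'],
    'C': [],
}
include_of = {
    'B': '/spack/opt/B/include',
    'C': '/spack/opt/C/include',
}

def transitive_include_path(root):
    """Collect the include/ dirs of all transitive deps, depth-first."""
    seen, order = set(), []
    def visit(pkg):
        for dep in deps[pkg]:
            if dep not in seen:
                seen.add(dep)
                order.append(dep)
                visit(dep)
    visit(root)
    # CMake expects a semicolon-separated list in this variable.
    return ';'.join(include_of[p] for p in order)

print(transitive_include_path('myproject'))
# /spack/opt/B/include;/spack/opt/C/include
```

Spack builds the real variable from the concretized spec, so every
transitive dependency's ``include/`` directory is present even though
the project only lists its direct dependencies.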
+
+.. _write-the-spack-package:
+
+-----------------------
+Write the Spack Package
+-----------------------
+
+The Spack package also needs to be written, in tandem with setting up
+the build (for example, CMake). The most important part of this task
+is declaring dependencies. Here is an example of the Spack package
+for the ``mylib`` package (ellipses for brevity):
+
+.. code-block:: python
+
+ class Mylib(CMakePackage):
+ """Misc. reusable utilities used by Myapp."""
+
+ homepage = "https://github.com/citibeth/mylib"
+ url = "https://github.com/citibeth/mylib/tarball/123"
+
+ version('0.1.2', '3a6acd70085e25f81b63a7e96c504ef9')
+ version('develop', git='https://github.com/citibeth/mylib.git',
+ branch='develop')
+
+ variant('everytrace', default=False,
+ description='Report errors through Everytrace')
+ ...
+
+ extends('python')
+
+ depends_on('eigen')
+ depends_on('everytrace', when='+everytrace')
+ depends_on('proj', when='+proj')
+ ...
+ depends_on('cmake', type='build')
+ depends_on('doxygen', type='build')
+
+ def configure_args(self):
+ spec = self.spec
+ return [
+ '-DUSE_EVERYTRACE=%s' % ('YES' if '+everytrace' in spec else 'NO'),
+ '-DUSE_PROJ4=%s' % ('YES' if '+proj' in spec else 'NO'),
+ ...
+ '-DUSE_UDUNITS2=%s' % ('YES' if '+udunits2' in spec else 'NO'),
+ '-DUSE_GTEST=%s' % ('YES' if '+googletest' in spec else 'NO')]
+
+This is a standard Spack package that can be used to install
+``mylib`` in a production environment. The list of dependencies in
+the Spack package will generally be a repeat of the list of CMake
+dependencies. This package also has some features that allow it to be
+used for development:
+
+1. It subclasses ``CMakePackage`` instead of ``Package``. This
+ eliminates the need to write an ``install()`` method, which is
+ defined in the superclass. Instead, one just needs to write the
+ ``configure_args()`` method. That method should return the
+ arguments needed for the ``cmake`` command (beyond the standard
+ CMake arguments, which Spack will include already). These
+ arguments are typically used to turn features on/off in the build.
+
+2. It specifies a non-checksummed version ``develop``. Running
+ ``spack install mylib@develop`` will install the latest code from
+ the ``develop`` branch. This method of
+ download is useful for the developer of a project while it is in
+ active development; however, it should only be used by developers
+ who control and trust the repository in question!
+
+3. The ``url``, ``url_for_version()`` and ``homepage`` attributes are
+ not used in development. Don't worry if you don't have any, or if
+ they are behind a firewall.
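
The repetitive ON/OFF pattern in ``configure_args()`` above can be
factored into a small helper. This is only a sketch, not part of
Spack's API; the plain-string ``spec`` below stands in for a real
``Spec`` object, which supports the same ``'+variant' in spec`` test:

```python
def cmake_flag(spec, variant, cmake_var):
    """Translate a Spack variant into a -D<VAR>=YES/NO cmake argument.

    `spec` here is a plain string such as 'mylib+everytrace~proj';
    on a real Spack Spec object, '+variant' in spec behaves similarly.
    """
    enabled = ('+%s' % variant) in spec
    return '-D%s=%s' % (cmake_var, 'YES' if enabled else 'NO')

spec = 'mylib+everytrace~proj'
print(cmake_flag(spec, 'everytrace', 'USE_EVERYTRACE'))  # -DUSE_EVERYTRACE=YES
print(cmake_flag(spec, 'proj', 'USE_PROJ4'))             # -DUSE_PROJ4=NO
```

In a real package you would call something like
``cmake_flag(self.spec, 'everytrace', 'USE_EVERYTRACE')`` from
``configure_args()``.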
+
+----------------
+Build with Spack
+----------------
+
+Now that you have a Spack package, you can use Spack to find its
+dependencies automatically. For example:
+
+.. code-block:: console
+
+ $ cd mylib
+ $ spack setup mylib@local
+
+The result will be a file ``spconfig.py`` in the top-level
+``mylib/`` directory. It is a short script that calls CMake with the
+dependencies and options determined by Spack --- similar to what
+happens in ``spack install``, but now written out in script form.
+From a developer's point of view, you can think of ``spconfig.py`` as
+a stand-in for the ``cmake`` command.
+
+.. note::
+
+ You can invent any "version" you like for the ``spack setup``
+ command.
+
+.. note::
+
+ Although ``spack setup`` does not build your package, it does
+ create and install a module file, and mark in the database that
+ your package has been installed. This can lead to errors, of
+ course, if you don't subsequently install your package.
+ Also note that you will need to ``spack uninstall`` before you run
+ ``spack setup`` again.
+
+
+You can now build your project as usual with CMake:
+
+.. code-block:: console
+
+ $ mkdir build; cd build
+ $ ../spconfig.py .. # Instead of cmake ..
+ $ make
+ $ make install
+
+Once your ``make install`` command is complete, your package will be
+installed, just as if you'd run ``spack install`` --- except you can
+now edit, rebuild, and reinstall as often as needed, without
+committing changes to Git or downloading tarballs.
+
+.. note::
+
+ The build you get this way will be *almost* the same as the build
+ from ``spack install``. The only difference is that you will not
+ be using Spack's compiler wrappers. This difference has not caused
+ problems in our experience, as long as your project sets
+ RPATHs as shown above. You DO use RPATHs, right?
+
+
+
+--------------------
+Build Other Software
+--------------------
+
+Now that you've built ``mylib`` with Spack, you might want to build
+another package that depends on it --- for example, ``myapp``. This
+is accomplished easily enough:
+
+.. code-block:: console
+
+ $ spack install myapp ^mylib@local
+
+Note that auto-built software has now been installed *on top of*
+manually-built software, without breaking Spack's dependency web. This
+property is useful if you need to debug a package deep in the
+dependency hierarchy of your application. It is a *big* advantage of
+using ``spack setup`` to build your package's environment.
+
+If you feel your software is stable, you might wish to install it with
+``spack install`` and skip the source directory. You can just use,
+for example:
+
+.. code-block:: console
+
+ $ spack install mylib@develop
+
+.. _release-your-software:
+
+---------------------
+Release Your Software
+---------------------
+
+You are now ready to release your software as a tarball with a
+numbered version, and a Spack package that can build it. If you're
+hosted on GitHub, this process will be a bit easier.
+
+#. Tag the version(s) in your GitHub repo that you want to be
+ release versions. For example, a tag ``v0.1.0`` for version 0.1.0.
+
+#. Set the ``url`` in your ``package.py`` to download a tarball for
+ the appropriate version. GitHub will give you a tarball for any
+ commit in the repo, if you tickle it the right way. For example:
+
+ .. code-block:: python
+
+ url = 'https://github.com/citibeth/mylib/tarball/v0.1.2'
+
+#. Use Spack to determine your version's hash, and copy and paste it
+ into your ``package.py``:
+
+ .. code-block:: console
+
+ $ spack checksum mylib 0.1.2
+ ==> Found 1 versions of mylib
+ 0.1.2 https://github.com/citibeth/mylib/tarball/v0.1.2
+
+ How many would you like to checksum? (default is 5, q to abort)
+ ==> Downloading...
+ ==> Trying to fetch from https://github.com/citibeth/mylib/tarball/v0.1.2
+ ######################################################################## 100.0%
+ ==> Checksummed new versions of mylib:
+ version('0.1.2', '3a6acd70085e25f81b63a7e96c504ef9')
+
+#. You should now be able to install released version 0.1.2 of your package with:
+
+ .. code-block:: console
+
+ $ spack install mylib@0.1.2
+
+#. There is no need to remove the ``develop`` version from your
+ package. Spack concretization will always prefer numbered versions
+ to non-numeric versions. Users will only get ``develop`` if they
+ ask for it.
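
The concretizer's preference for numbered versions can be illustrated
with a simple sort key. This is a deliberate simplification; Spack's
real version comparison logic is considerably richer:

```python
def version_key(v):
    """Rank numeric versions above named branches like 'develop'.

    Simplified sketch: real Spack versions have richer comparison rules.
    """
    parts = v.split('.')
    if all(p.isdigit() for p in parts):
        return (1, tuple(int(p) for p in parts))  # numeric: preferred
    return (0, ())                                # non-numeric: last resort

candidates = ['develop', '0.1.2', '0.1.0']
print(max(candidates, key=version_key))  # 0.1.2
```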
+
+
+
+------------------------
+Distribute Your Software
+------------------------
+
+Once you've released your software, other people will want to build
+it, and you will need to tell them how. In the past, that has meant a
+few paragraphs of prose explaining which dependencies to install. Now
+that you use Spack, those instructions are written in executable
+Python code, and Spack is the best way to install your software's
+many dependencies:
+
+#. First, you will want to fork Spack's ``develop`` branch. Your aim
+ is to provide a stable version of Spack that you KNOW will install
+ your software. If you make changes to Spack in the process, you
+ will want to submit pull requests to Spack core.
+
+#. Add your software's ``package.py`` to that fork. You should submit
+ a pull request for this as well, unless you don't want the public
+ to know about your software.
+
+#. Prepare instructions that read approximately as follows:
+
+ #. Download Spack from your forked repo.
+
+ #. Install Spack; see :ref:`getting_started`.
+
+ #. Set up an appropriate ``packages.yaml`` file. You should tell
+ your users to include in this file whatever versions/variants
+ are needed to make your software work correctly (assuming those
+ are not already in your ``packages.yaml``).
+
+ #. Run ``spack install mylib``.
+
+ #. Run a script (which you provide) to generate the ``module load``
+ commands or filesystem view needed to use this software.
+
+#. Be aware that your users might encounter unexpected bootstrapping
+ issues on their machines, especially if they are running on older
+ systems. The :ref:`getting_started` section should cover this, but
+ there could always be issues.
+
+-------------------
+Other Build Systems
+-------------------
+
+``spack setup`` currently only supports CMake-based builds, in
+packages that subclass ``CMakePackage``. The intent is that this
+mechanism should support a wider range of build systems; for example,
+GNU Autotools. Someone well-versed in Autotools is needed to develop
+this patch and test it out.
+
+Python Distutils is another popular build system that should get
+``spack setup`` support. For non-compiled languages like Python,
+``spack diy`` may be used. Even better is to put the source directory
+directly in the user's ``PYTHONPATH``. Then, edits in source files
+are immediately available to run without any install process at all!
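
The effect of putting the source directory on ``PYTHONPATH`` can also
be achieved from inside Python by prepending it to ``sys.path``. The
directory name here is a hypothetical checkout location:

```python
import sys

# Hypothetical checkout location of the package's source tree.
src_dir = '/home/me/mylib'

# Prepending means the working copy shadows any installed version,
# so edits take effect on the next run with no install step.
if src_dir not in sys.path:
    sys.path.insert(0, src_dir)

print(sys.path[0])  # /home/me/mylib
```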
+
+----------
+Conclusion
+----------
+
+The ``spack setup`` development workflow provides better automation,
+flexibility and safety than workflows relying on environment modules
+or filesystem views. However, it has some drawbacks:
+
+#. It currently works only with projects that use the CMake build
+ system. Support for other build systems is not hard to add, but
+ will require a small amount of effort for each build system to be
+ supported. It might not work well with some IDEs.
+
+#. It only works with packages that sub-class ``StagedPackage``.
+ Currently, most Spack packages do not. Converting them is not
+ hard, but it must be done on a package-by-package basis.
+
+#. It requires that users are comfortable with Spack, as they
+ integrate Spack explicitly in their workflow. Not all users are
+ willing to do this.
+
+==================
+Upstream Bug Fixes
+==================
+
+It is not uncommon to discover a bug in an upstream project while
+trying to build with Spack. Typically, the bug is in a package that
+serves as a dependency of something else. This section describes a
+procedure to work around and ultimately resolve these bugs, without
+delaying the Spack user's main goal.
+
+-----------------
+Buggy New Version
+-----------------
+
+Sometimes, the old version of a package works fine, but a new version
+is buggy. For example, it was once found that `Adios did not build
+with hdf5@1.10 <https://github.com/LLNL/spack/issues/1683>`_. If the
+old version of ``hdf5`` will work with ``adios``, the suggested
+procedure is:
+
+#. Constrain ``adios`` to use the old version of ``hdf5``. Add to
+ ``adios/package.py``:
+
+ .. code-block:: python
+
+ # Adios does not build with HDF5 1.10
+ # See: https://github.com/LLNL/spack/issues/1683
+ depends_on('hdf5@:1.9')
+
+#. Determine whether the problem is with ``hdf5`` or ``adios``, and
+ report the problem to the appropriate upstream project. In this
+ case, the problem was with ``adios``.
+
+#. Once a new version of ``adios`` comes out with the bugfix, modify
+ ``adios/package.py`` to reflect it:
+
+ .. code-block:: python
+
+ # Adios up to v1.10.0 does not build with HDF5 1.10
+ # See: https://github.com/LLNL/spack/issues/1683
+ depends_on('hdf5@:1.9', when='@:1.10.0')
+ depends_on('hdf5', when='@1.10.1:')
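
The intent of a constraint like ``hdf5@:1.9`` can be sketched as a
numeric range check. This is a simplification of Spack's real
spec-satisfaction logic, which handles open ranges, suffixes, and
much more:

```python
def satisfies_upper_bound(version, bound):
    """Return True if `version` satisfies an upper bound like '@:1.9'.

    Sketch only; Spack's real Version comparison covers many more cases.
    """
    v = tuple(int(p) for p in version.split('.'))
    b = tuple(int(p) for p in bound.split('.'))
    # Compare only as many components as the bound specifies, so
    # 1.9.3 still satisfies '@:1.9'.
    return v[:len(b)] <= b

print(satisfies_upper_bound('1.8.17', '1.9'))  # True
print(satisfies_upper_bound('1.10.0', '1.9'))  # False
```

Comparing components numerically matters here: a naive string
comparison would wrongly conclude that ``1.10.0`` satisfies ``:1.9``.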
+
+----------------
+No Version Works
+----------------
+
+Sometimes, *no* existing versions of a dependency work for a build.
+This typically happens when developing a new project: only then does
+the developer notice that existing versions of a dependency are all
+buggy, or the non-buggy versions are all missing a critical feature.
+
+In the long run, the upstream project will hopefully fix the bug and
+release a new version. But that could take a while, even if a bugfix
+has already been pushed to the project's repository. In the meantime,
+the Spack user needs things to work.
+
+The solution is to create an unofficial Spack release of the project,
+as soon as the bug is fixed in *some* repository. A study of the `Git
+history <https://github.com/citibeth/spack/commits/efischer/develop/var/spack/repos/builtin/packages/py-proj/package.py>`_
+of ``py-proj/package.py`` is instructive here:
+
+#. On `April 1 <https://github.com/citibeth/spack/commit/44a1d6a96706affe6ef0a11c3a780b91d21d105a>`_, an initial bugfix was identified for the PyProj project
+ and a pull request submitted to PyProj. Because the upstream
+ authors had not yet fixed the bug, the ``py-proj`` Spack package
+ downloads from a forked repository, set up by the package's author.
+ A non-numeric version number is used to make it easy to upgrade the
+ package without recomputing checksums; however, this is an
+ untrusted download method and should not be distributed. The
+ package author has now become, temporarily, a maintainer of the
+ upstream project:
+
+ .. code-block:: python
+
+ # We need the benefits of this PR
+ # https://github.com/jswhit/pyproj/pull/54
+ version('citibeth-latlong2',
+ git='https://github.com/citibeth/pyproj.git',
+ branch='latlong2')
+
+
+#. By May 14, the upstream project had accepted a pull request with
+ the required bugfix. At this point, the forked repository was
+ deleted. However, the upstream project still had not released a
+ new version with a bugfix. Therefore, a Spack-only release was
+ created by specifying the desired hash in the main project
+ repository. The version number ``@1.9.5.1.1`` was chosen for this
+ "release" because it's a descendant of the officially released
+ version ``@1.9.5.1``. This is a trusted download method, and can
+ be released to the Spack community:
+
+ .. code-block:: python
+
+ # This is not a tagged release of pyproj.
+ # The changes in this "version" fix some bugs, especially with Python3 use.
+ version('1.9.5.1.1', 'd035e4bc704d136db79b43ab371b27d2',
+ url='https://www.github.com/jswhit/pyproj/tarball/0be612cc9f972e38b50a90c946a9b353e2ab140f')
+
+ .. note::
+
+ It would have been simpler to use Spack's Git download method,
+ which is also a trusted download in this case:
+
+ .. code-block:: python
+
+ # This is not a tagged release of pyproj.
+ # The changes in this "version" fix some bugs, especially with Python3 use.
+ version('1.9.5.1.1',
+ git='https://github.com/jswhit/pyproj.git',
+ commit='0be612cc9f972e38b50a90c946a9b353e2ab140f')
+
+ .. note::
+
+ In this case, the upstream project fixed the bug in its
+ repository in a relatively timely manner. If that had not been
+ the case, the numbered version in this step could have been
+ released from the forked repository.
+
+
+#. The author of the Spack package has now become an unofficial
+ release engineer for the upstream project. Depending on the
+ situation, it may be advisable to put ``preferred=True`` on the
+ latest *officially released* version.
+
+#. As of August 31, the upstream project still had not made a new
+ release with the bugfix. In the meantime, Spack-built ``py-proj``
+ provides the bugfix needed by packages depending on it. As long as
+ this works, there is no particular need for the upstream project to
+ make a new official release.
+
+#. If the upstream project releases a new official version with the
+ bugfix, then the unofficial ``version()`` line should be removed
+ from the Spack package.
+
+-------
+Patches
+-------
+
+Spack's source patching mechanism provides another way to fix bugs in
+upstream projects. This has advantages and disadvantages compared to the procedures above.
+
+Advantages:
+
+ 1. It can fix bugs in existing released versions, and (probably)
+ future releases as well.
+
+ 2. It is lightweight and does not require a new fork to be set up.
+
+Disadvantages:
+
+ 1. It is harder to develop and debug a patch than a branch in a
+ repository. The user loses the automation provided by version
+ control systems.
+
+ 2. Although patches of a few lines work OK, large patch files can be
+ hard to create and maintain.
+
diff --git a/lib/spack/spack/directives.py b/lib/spack/spack/directives.py
index 86acd075cd..dda9fb32d8 100644
--- a/lib/spack/spack/directives.py
+++ b/lib/spack/spack/directives.py
@@ -212,7 +212,10 @@ def _depends_on(pkg, spec, when=None, type=None):
@directive(('dependencies', '_deptypes'))
def depends_on(pkg, spec, when=None, type=None):
- """Creates a dict of deps with specs defining when they apply."""
+ """Creates a dict of deps with specs defining when they apply.
+ This directive is to be used inside a Package definition to declare
+ that the package requires other packages to be built first.
+ @see The section "Dependency specs" in the Spack Packaging Guide."""
_depends_on(pkg, spec, when=when, type=type)
diff --git a/lib/spack/spack/package.py b/lib/spack/spack/package.py
index b272cc3eba..a596a363ca 100644
--- a/lib/spack/spack/package.py
+++ b/lib/spack/spack/package.py
@@ -482,8 +482,13 @@ class Package(object):
# TODO: move this out of here and into some URL extrapolation module?
def url_for_version(self, version):
- """
- Returns a URL that you can download a new version of this package from.
+ """Returns a URL from which the specified version of this package
+ may be downloaded.
+
+ version: class Version
+ The version for which a URL is sought.
+
+ See Class Version (version.py)
"""
if not isinstance(version, Version):
version = Version(version)