path: root/etc
Age | Commit message | Author | Files, lines changed
2021-07-12 | Enable/disable bootstrapping and customize store location (#23677) | Massimiliano Culpo | 1 file, -0/+7

* Permit enabling/disabling bootstrapping and customizing the store location: this PR adds configuration handles to allow enabling and disabling bootstrapping, and to customize the store location.
* Move bootstrap-related configuration into its own YAML file.
* Add a bootstrap command to manage the configuration.
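For context, a minimal sketch of the kind of bootstrap configuration described above; the key names and the store path are assumptions for illustration, not copied from the PR:

```yaml
# bootstrap.yaml (sketch; keys and path assumed for illustration)
bootstrap:
  # turn bootstrapping of Spack's own prerequisites on or off
  enable: true
  # where bootstrapped software is stored
  root: "~/.spack/bootstrap/store"
```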
2021-06-28 | Hdf5 cmake (#18937) | Larry Knox | 1 file, -1/+1

* Switch hdf5 package from autotools to cmake.
* Add variant for building with zlib, default to ON.
* Update for format requirements.
* Format change.
* Fix breakage from last merge from develop. Switch szip to use libaec (unrestricted encryption). Remove 'static' variant: static libs will only be installed when ~shared.
* Improve args based on suggestions from pull request.
* Update code URL to github.com. Add/modify 4 depends_on lines to fix running "spack graph --deptype=link hdf5".
* Remove trailing whitespace.
* Remove dependencies added solely to make "spack graph --type=link" work.
* Add new version HDF5 1.8.22.
* Remove unnecessary java_check.
* Fix whitespace for style checks.
* Reverted zlib version dependency to 1.1.2:. zlib variant removed. api version default renamed "default".
* Remove blank line.
* Whitespace corrections.
* Removed unnecessary 'debug' variant.
* Fix typo in version number in conflict for '+szip'.
* Set default for tools variant to True. Remove patch functions dependent on 'libtool' file that cmake doesn't produce.
* Remove line to set ONLY_SHARED_LIBS to true. Add post_install code to install only one version of tools with shared linkage and original tool names.
* Remove trailing white space and import of glob package not used.
* Leave BUILD_TESTING set to default, which is ON.
* Remove post_install code to install only one version of tools, because some dependent packages running tests in e4s testing use h5diff-shared. Keep both tools versions for now.
* No longer need to import os.
2021-06-15 | add irep and lua-lang virtual dependency (#22492) | Richarda Butler | 1 file, -0/+1

This adds a package for `irep`, a tool for reading `lua` input decks from Fortran, C, and C++. `irep` can be built with either `lua` or `luajit`. To address this, we also add a virtual package for lua called `lua-lang`. `luajit` isn't, by default, a drop-in replacement for `lua`, but we add a `+lualinks` variant to it that adds symlinks that make it behave like `lua@5.1`. With this variant enabled, it provides the `lua-lang` virtual. `lua` always provides `lua-lang`.
- [x] add `irep` package
- [x] add `+lualinks` variant to `lua-luajit`
- [x] create `lua-lang` virtual, provided by `lua` and `luajit+lualinks`
Co-authored-by: Kayla Richarda Butler <butler59@quartz1148.llnl.gov>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
2021-06-11 | Fixing provides directive for intel-oneapi-onedal package (#24108) | Nikolay Petrov | 1 file, -0/+1
2021-06-04 | apply default linux prefix inspections to all module sets (#24151) | Greg Becker | 1 file, -6/+5
2021-05-28 | Separable module configuration -- without the bugs this time (#23703) | Greg Becker | 3 files, -17/+24

Currently, module configurations are inconsistent because modulefiles are generated with the configs for the active environment, but are shared among all environments (and Spack outside any environment).

This PR fixes that by allowing Spack environments (or other Spack config scopes) to define additional sets of modules to generate. Each set of modules can enable either lmod or tcl modules, and contains all of the previously available module configuration. The user defines the name of each module set; the set configured in Spack by default is named "default", and is the one returned by module manipulation commands in the absence of user intervention.

As part of this change, the module roots configuration moved from the config section to inside each module configuration.

Additionally, it adds a feature that the modulefiles for an environment can be configured to be relative to an environment view rather than the underlying prefix. This will not be enabled by default, as it should only be enabled within an environment and for non-default views constructed with separate projections per-spec.
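A rough sketch of what a named module set as described above might look like in modules.yaml; the set names, roots, and exact layout are illustrative assumptions rather than the schema from the PR:

```yaml
# modules.yaml (sketch; layout assumed from the description above)
modules:
  default:              # the set used when no name is given
    enable: [tcl]
    roots:
      tcl: $spack/share/spack/modules
  hpc_lmod:             # an additional, user-defined set (name assumed)
    enable: [lmod]
    roots:
      lmod: /opt/modules/lmod
```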
2021-05-28 | Use AWS CloudFront for source mirror (#23978) | Todd Gamblin | 1 file, -1/+1

Spack's source mirror was previously in a plain old S3 bucket. That will still work, but we can do better. This switches to AWS's CloudFront CDN for hosting the mirror. CloudFront is 16x faster (or more) than the old bucket.
- [x] change mirror to https://mirror.spack.io
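For reference, a source mirror entry of this kind would live in mirrors.yaml; the mirror name below is an assumption, only the URL comes from the entry above:

```yaml
# mirrors.yaml (sketch; the mirror name is assumed)
mirrors:
  spack-public: https://mirror.spack.io
```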
2021-05-27 | Add fuse virtual dependency, new macfuse package (#23904) | Adam J. Stewart | 2 files, -6/+5
2021-05-26 | defaults/cray: use modules.yaml from defaults/linux (#23932) | eugeneswalker | 1 file, -0/+21
2021-05-25 | Add xxd for hsa-rocr-dev build script (#23855) | Harmen Stoppels | 1 file, -4/+5
2021-05-17Revert "Separable module configurations (#22588)" (#23674)Harmen Stoppels4-27/+22
This reverts commit cefbe48c89209dc3df654795644973b1885cdea4.
2021-05-14 | Separable module configurations (#22588) | Greg Becker | 4 files, -22/+27

Currently, module configurations are inconsistent because modulefiles are generated with the configs for the active environment, but are shared among all environments (and Spack outside any environment).

This PR fixes that by allowing Spack environments (or other Spack config scopes) to define additional sets of modules to generate. Each set of modules can enable either lmod or tcl modules, and contains all of the previously available module configuration. The user defines the name of each module set; the set configured in Spack by default is named "default", and is the one returned by module manipulation commands in the absence of user intervention.

As part of this change, the module roots configuration moved from the `config` section to inside each module configuration.

Additionally, it adds a feature that the modulefiles for an environment can be configured to be relative to an environment view rather than the underlying prefix. This will not be enabled by default, as it should only be enabled within an environment and for non-default views constructed with separate projections per-spec.

TODO:
- [x] code changes to support multiple module sets
- [x] code changes to support modules relative to a view
- [x] Tests for multiple module configurations
- [x] Tests for modules relative to a view
- [x] Backwards compatibility for module roots from config section
- [x] Backwards compatibility for default module set without the name specified
- [x] Tests for backwards compatibility
2021-03-30 | Make -j flag less exceptional (#22360) | Harmen Stoppels | 1 file, -5/+7

The -j flag in Spack behaves differently from make, ctest, ninja, etc., because it caps the number of jobs to an arbitrary number, 16. Spack will behave like other tools if `spack install` uses a reasonable default, and `spack install -j <num>` *overrides* that default.

This will be particularly useful for Spack usage outside of a traditional HPC context and for HPC centers that encourage users to compile on login nodes with many cores instead of on compute nodes, which has become increasingly common as individual nodes have more cores.

This maintains the existing default value of min(num_cpus, 16). However, as it is right now, Spack does a poor job of determining the number of CPUs on Linux, since it doesn't take cgroups into account. This is particularly problematic when using distributed builds with slurm. This PR also introduces `spack.util.cpus.cpus_available()` to consolidate knowledge on determining the number of available cores, and improves core detection for Linux. This should also improve core detection for Docker/Kubernetes, which also use cgroups.
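The default being discussed lives under the `config` section; a minimal sketch, with the value shown being just the historical cap mentioned above:

```yaml
# config.yaml (sketch)
config:
  # default cap on parallel build jobs; `spack install -j <num>` overrides it
  build_jobs: 16
```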
2021-03-03 | zig: add new package at v0.7.1 (#22046) | Massimiliano Culpo | 1 file, -0/+1
2021-02-23 | Drop compiler variables from spack load (#21699) | Harmen Stoppels | 1 file, -10/+0

Drops:
* C_INCLUDE_PATH
* CPLUS_INCLUDE_PATH
* LIBRARY_PATH
* INCLUDE

We already decided to use C_INCLUDE_PATH, CPLUS_INCLUDE_PATH, INCLUDE over CPATH here: https://github.com/spack/spack/pull/14749

However, none of these flags apply to Fortran on Linux. So for consistency it seems better to make the user use -I and -L flags by hand or through pkgconfig.
2021-02-09 | Procedure to deprecate old versions of software (#20767) | Adam J. Stewart | 1 file, -1/+6

* Procedure to deprecate old versions of software
* Add documentation
* Fix bug in logic
* Update tab completion
* Deprecate legacy packages
* Deprecate old mxnet as well
* More explicit docs
2021-02-04 | config: (darwin only) change prefix of external libuuid (#21480) | Laurent Aphecetche | 1 file, -1/+1
2020-12-30 | Use system libuuid on macOS (#20608) | Adam J. Stewart | 1 file, -0/+9
2020-12-29 | Introduce virtual provider uuid (#18322) | Michael Kuhn | 1 file, -0/+1
libuuid is currently contained in util-linux, libuuid and uuid. This change introduces a new virtual provider `uuid` and renames the existing `uuid` package to `ossp-uuid`. util-linux's libuuid is provided in the form of a separate package util-linux-uuid to make sure that packages depending on uuid and util-linux can use a separate uuid implementation, which the concretizer does not allow if libuuid is contained in util-linux.
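A hedged sketch of how a default provider for the new `uuid` virtual could be declared in packages.yaml; the provider list and its ordering are illustrative, not taken from the defaults shipped with this change:

```yaml
# packages.yaml (sketch; provider list is illustrative)
packages:
  all:
    providers:
      uuid: [util-linux-uuid, libuuid]
```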
2020-11-30 | Fix Mesa GLES conflicts (#20184) | Chuck Atkins | 1 file, -1/+1
2020-11-18 | spack test (#15702) | Greg Becker | 1 file, -0/+4

Users can add test() methods to their packages to run smoke tests on installations with the new `spack test` command (the old `spack test` is now `spack unit-test`). `spack test` is environment-aware, so you can `spack install` an environment and then run `spack test run` to run smoke tests on all of its packages. Historical test logs can be perused with `spack test results`. Included are generic smoke tests for MPI implementations, C, C++, and Fortran compilers, as well as specific smoke tests for 18 packages.

Inside the test method, individual tests can be run separately (and continue to run best-effort after a test failure) using the `run_test` method. The `run_test` method encapsulates finding test executables, running and checking return codes, checking output, and error handling.

This handles the following trickier aspects of testing with direct support in Spack's package API:
- [x] Caching source or intermediate build files at build time for use at test time.
- [x] Test dependencies.
- [x] Packages that require a compiler for testing (such as library-only packages).

See the packaging guide for more details on using Spack testing support. Included is support for package.py files for virtual packages. This does not change the Spack interface, but is a major change in internals.
Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
Co-authored-by: wspear <wjspear@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2020-11-17 | concretizer: add a configuration option to use new or old concretizer | Todd Gamblin | 1 file, -0/+16

- [x] spec.py can call out to the new concretizer
- [x] config.yaml now has an option to choose a concretizer (original, clingo)
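A minimal sketch of the new option, assuming it sits directly under the `config` section as the note above suggests; the allowed values given there are "original" and "clingo":

```yaml
# config.yaml (sketch; placement under `config` assumed)
config:
  concretizer: clingo
```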
2020-11-12 | move sbang to unpadded install tree root (#19640) | Greg Becker | 1 file, -0/+7

Since #11598 sbang has been installed within the install_tree. This doesn't play nicely with install_tree padding, since sbang can't do its job if it is installed in a long path (this is the whole point of sbang).

This PR changes the padding specification. Instead of $padding inside paths, we now have a separate `padding:` field in the `install_tree` configuration. Previously, the `install_tree` looked like this:

```
/path/to/opt/spack_padding_padding_padding_padding_padding/
    bin/
        sbang
    .spack-db/
        ...
    linux-rhel7-x86_64/
        ...
```

This PR updates things to look like this:

```
/path/to/opt/
    bin/
        sbang
    spack_padding_padding_padding_padding_padding/
        .spack-db/
            ...
        linux-rhel7-x86_64/
            ...
```

So padding is added at the start of all install prefixes *within* the unpadded root. The database and all installations still go under the padded root. This ensures that `sbang` is in the shortest possible path while also allowing us to make long paths for relocatable binaries.
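A sketch of the separate `padding:` field described above; the root path and the padding length are illustrative values:

```yaml
# config.yaml (sketch; values illustrative)
config:
  install_tree:
    root: /path/to/opt
    # pad install prefixes *inside* the unpadded root so sbang stays short
    padding: 128
```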
2020-10-31 | [NEW] Added amdfftw, amdlibflame and amdscalapack recipes (#19457) | vijay kallesh | 1 file, -4/+5

* Added amdfftw, amdlibflame and amdscalapack recipes. Updated the base fftw, libflame and netlib-scalapack recipes to accommodate the above listed AMD Optimizing CPU Libraries, which are a set of numerical routines optimized for AMD platforms. Updated the amdblis Spack recipe.
  amdblis: updated with the amdblis 2.2 release.
  amdfftw: "--enable-single" now works as a synonym for "--enable-float".
  amdlibflame: added enable_or_disable_threads() to set the value for the "--enable-multithreading" flag.
  libflame: added enable_or_disable_threads() to set the value for the "--enable-multithreading" flag; corrected the invocation of "enable_or_disable('threads')".
  Change-Id: I9da0a2c2c4e2075b7fa2776e7cfe6548a2e0b32f
* Added amd-toolchain-support as maintainers: added the team GitHub account amd-toolchain-support as maintainer for all the recipes owned by the AMD Optimizing CPU Libraries (AOCL) team.
  Change-Id: I9a7969bd48fc42cfbb88dd7bd93e0802c6138582
* Incorporated review comments: updated packages.yaml with AOCL components and handled flake8 test failures.
  Change-Id: I0a03f02d8c9f326b2434ec907958c3de3a8e18eb
* Re-added the accidentally removed stream recipe. amdfftw: updated the AOCC clang selection per Spack standards. fftw: the apple-clang section is currently redundant, as it is already handled in the conflict checks.
  Change-Id: Idef4a3f61717eb81f321e0cd16e7ba9619eac846
* Fix for the style and docs/validate (pull_request) tests: changed unnumbered format placeholders from {} to {0}.
  Change-Id: If67a3374177ec067573e5504462d257712fafc05
* Changed compiler references to Spack's compiler wrappers: spack_cc, spack_cxx, spack_fc.
  Change-Id: I7ae29c978fff16e37773913f14c84df232499763
* Removed the 'single' variant from the amdfftw recipe. Instead of a conflict for apple-clang + openmp, this scenario is handled via: depends_on('llvm-openmp', when='%apple-clang +openmp').
  Change-Id: I701b23d83e822a500ca3aaf2b60cc9ace09e13dc
* Added relevant info for users who prefer to use single precision.
  Change-Id: I3506e21da428ddef5fb7895b5aaed32c2a061ef6
* Minor changes to fftw, amdfftw and libflame.
  amdfftw: removed the escape symbol from the single quotes; reworded the conflict line from "Recommended" to "Required".
  fftw: reordered to the recommended sections: versions, variants, dependencies, providers, patches.
  libflame: added a provides entry for version 5.1.0.
  Change-Id: I21ebff99b6dfde031763154693ecb3f1fa47b476
* Removed a single quote from the amdfftw docstring to fix style failures.
  Change-Id: Ife939a5a2f5ccbc8879b730c7bebfe2fcfef9332
2020-10-30 | mesa: Retire older autotools mesa as mesa18 and create current meson mesa (#19528) | Chuck Atkins | 1 file, -2/+3
2020-10-20 | Adding AOCC compiler to SPACK community (#19345) | GaneshPrasadMA | 1 file, -1/+1

* Adding the AOCC compiler to the Spack community. The AOCC compiler system offers a high level of advanced optimizations, multi-threading and processor support that includes global optimization, vectorization, inter-procedural analyses, loop transformations, and code generation. AMD also provides highly optimized libraries, which extract the optimal performance from each x86 processor core when utilized. The AOCC Compiler Suite simplifies and accelerates development and tuning for x86 applications.
* Added unit tests for detection and flags for AOCC.
* Addressed reviewers' comments w.r.t. version checks and url/checksum-related line lengths.
Co-authored-by: Test User <spack@example.com>
2020-10-10 | make py-pillow the default for pil (#19251) | Andrew Gaspar | 1 file, -1/+1
py-pillow-simd is not portable to other architectures.
2020-10-01 | Add yacc provider and add dependency to swig (#19087) | Andrew Gaspar | 1 file, -1/+1

* Add byacc dependency to swig when building an autoconf version
* Add yacc provider. Removed extra sycl provider default.
2020-09-25 | refactor install_tree to use projections format (#18341) | Greg Becker | 1 file, -5/+4

* refactor install_tree to use projections format
* Add update method for config.yaml
* add test for config update
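A sketch of install_tree expressed in the projections format referenced above; the projection pattern shown is an assumption for illustration, not quoted from the PR:

```yaml
# config.yaml (sketch; projection pattern assumed)
config:
  install_tree:
    root: $spack/opt/spack
    projections:
      all: '{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'
```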
2020-09-03 | Pillow-SIMD: use as default PIL provider (#18097) | Adam J. Stewart | 1 file, -1/+1

* Pillow-SIMD: use as default PIL provider
* Fix concretization of pil
* Fix build of older versions of pillow
2020-08-10 | Update packages.yaml format and support configuration updates | Massimiliano Culpo | 1 file, -5/+8

The YAML config for paths and modules of external packages has changed: the new format allows a single spec to load multiple modules. Spack will automatically convert from the old format when reading the configs (the updates do not add new essential properties, so this change in Spack is backwards-compatible).

With this update, Spack cannot modify existing configs/environments without updating them (e.g. "spack config add" will fail if the configuration is in a format that predates this PR). The user is prompted to do this explicitly and commands are provided. All config scopes can be updated at once. Each environment must be updated one at a time.
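A sketch of the newer externals layout described above, where a single spec may list several modules; the package, prefix, and module names are assumptions for illustration:

```yaml
# packages.yaml (sketch; spec/prefix/module names assumed)
packages:
  openmpi:
    externals:
    - spec: openmpi@4.0.3
      prefix: /opt/openmpi-4.0.3
    - spec: openmpi@4.0.3 +cuda
      modules: [openmpi/4.0.3-cuda, cuda/11.0]
```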
2020-07-23Revert "Add libglvnd packages/Add EGL support (#14572)" (#17682)Chuck Atkins1-6/+2
This reverts commit 573489db710c6fd315170a45d6c609db2e30e5e4.
2020-07-13 | Add libglvnd packages/Add EGL support (#14572) | Omar Padron | 1 file, -2/+6

* add new package: "libglvnd-frontend"
* add +glvnd variant to opengl package
* add +glvnd variant to mesa package
* add +egl variant to paraview package
* add libglvnd-frontend entries to default packages config
* fix style
* add default providers for glvnd virtuals: glvnd-gl, glvnd-glx, and glvnd-egl
* WIP: rough start to external OpenGL documentation
* rename libglvnd-frontend package and backend virtual dependencies
* update documentation
* fix ligvnd-be-* typos
* fix libglvnd-fe package class name
* fix doc parse error
2020-07-08 | add public spack mirror (#17077) | Peter Scheibel | 1 file, -0/+2
2020-06-25 | Separate Apple Clang from LLVM Clang (#17110) | Massimiliano Culpo | 1 file, -1/+5

* Separate Apple Clang from LLVM Clang. Apple Clang is a compiler of its own. All places referring to the "-apple" suffix have been updated.
* Hack to use a dash in 'apple-clang': to be able to use autodoc from Sphinx we need a valid Python name for the module that contains Apple's Clang code.
* Updated packages to account for the existence of apple-clang.
* Added unit test for XCode related functions.
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2020-05-25 | hdf: new version, variants and refactoring (#16469) | Sergey Kosukhin | 1 file, -0/+1

* hdf: new version, variants and refactoring.
* libc provides rpc
* Fix szip-related configure argument.
* Update dependent packages.
2020-05-25 | Fix typo for allow_sgid (#16806) | Michael Kuhn | 1 file, -1/+1

Fixes #14425. The `config:` prefix should be included in the actual option name; the typo made it impossible to change this option.
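A sketch of how the option from this fix and the following (older) entry is meant to be set; the value shown is illustrative:

```yaml
# config.yaml (sketch)
config:
  # set to false to skip setting the S_ISGID bit on newly created
  # installation directories
  allow_sgid: false
```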
2020-05-07 | Config option to disable setting S_ISGID bit when creating installation directory (#14479) | iarspider | 1 file, -0/+4

* Add config option to disable setting the S_ISGID bit when creating the installation directory.
Co-authored-by: Ivan Razumov <ivan.razumov@cern.ch>
2020-04-16 | darwin: cut DYLD_LIBRARY_PATH from default modules | Geoffrey Malcolm Oxberry | 1 file, -2/+0
This commit removes the DYLD_LIBRARY_PATH variable from the default modules.yaml for darwin. The rationale behind deleting this environment variable is that paths in this environment variable take precedence over the default locations of libraries (usually the install path of the library), which can lead to linking errors in some circumstances. For example, executables intended to link with Apple's system BLAS and LAPACK will instead link to a spack-installed implementation (e.g., OpenBLAS), causing runtime errors. These errors are resolved by instead relying on paths set in DYLD_FALLBACK_LIBRARY_PATH, which is lower in precedence than default locations of libraries.
2020-03-28 | Hack to select iconv implementation - libiconv vs. libc iconv (#15437) | iarspider | 1 file, -0/+1
(re-do of #15213 due to changes in gnupg recipe)
2020-03-20 | multiprocessing: allow Spack to run uninterrupted in background (#14682) | Greg Becker | 1 file, -0/+1
Spack currently cannot run as a background process uninterrupted because some of the logging functions used in the install method (especially to create the dynamic verbosity toggle with the v key) cause the OS to issue a SIGTTOU to Spack when it's backgrounded. This PR puts the necessary gatekeeping in place so that Spack doesn't do anything that will cause a signal to stop the process when operating as a background process.
2020-03-17 | Module files won't use CPATH by default, but language specific vars (#14749) | Massimiliano Culpo | 1 file, -1/+5

Fixes #11555.

Every path in CPATH is equivalent to a -I path to the compiler, while every path in *_INCLUDE_PATH is equivalent to -isystem. The latter avoids the noise due to warnings coming from 3rd-party libraries that a project depends on.

Added the INCLUDE env variable (Intel Fortran, .mod files).
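These environment variables come from the prefix_inspections mapping in modules.yaml, which maps subdirectories of an install prefix to the variables that module files should prepend; a sketch of the kind of mapping involved, with entries chosen for illustration rather than copied from the defaults:

```yaml
# modules.yaml (sketch; entries illustrative)
modules:
  prefix_inspections:
    include:
    - C_INCLUDE_PATH
    - CPLUS_INCLUDE_PATH
    - INCLUDE
    lib:
    - LIBRARY_PATH
    lib64:
    - LIBRARY_PATH
```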
2020-02-27 | config: Add a new option connect_timeout | Michael Kuhn | 1 file, -0/+6

connect_timeout can be used to increase the time Spack waits for the server to answer. This can be used to work around slow connections or servers.

Fixes #14700.
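A minimal sketch of the option; the timeout value is illustrative and assumed to be in seconds:

```yaml
# config.yaml (sketch; value illustrative)
config:
  connect_timeout: 30
```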
2020-02-19 | Distributed builds (#13100) | Tamara Dahlgren | 1 file, -1/+1

Fixes #9394. Closes #13217.

## Background
Spack provides the ability to enable/disable parallel builds through two options: package `parallel` and configuration `build_jobs`. This PR changes the algorithm to allow multiple, simultaneous processes to coordinate the installation of the same spec (and specs with overlapping dependencies).

The `parallel` (boolean) property sets the default for its package, though the value can be overridden in the `install` method.

Spack's current parallel builds are limited to build tools supporting `jobs` arguments (e.g., `Makefiles`). The number of jobs actually used is calculated as `min(config:build_jobs, # cores, 16)`, which can be overridden in the package or on the command line (i.e., `spack install -j <# jobs>`).

This PR adds support for distributed (single- and multi-node) parallel builds. The goals of this work include improving the efficiency of installing packages with many dependencies and reducing the repetition associated with concurrent installations of (dependency) packages.

## Approach

### File System Locks
Coordination between concurrent installs of overlapping packages to a Spack instance is accomplished through bottom-up dependency DAG processing and file system locks. The runs can be a combination of interactive and batch processes affecting the same file system. Exclusive prefix locks are required to install a package, while shared prefix locks are required to check if the package is installed.

Failures are communicated through a separate exclusive prefix failure lock, for concurrent processes, combined with a persistent store, for separate, related build processes. The resulting file contains the failing spec to facilitate manual debugging.

### Priority Queue
Management of dependency builds changed from reliance on recursion to use of a priority queue, where the priority of a spec is based on the number of its remaining uninstalled dependencies.

Using a queue required a change to dependency build exception handling, with the most visible issue being that the `install` method *must* install something in the prefix. Consequently, packages can no longer get away with an install method consisting of `pass`, for example.

## Caveats
- This still only parallelizes a single-rooted build. Multi-rooted installs (e.g., for environments) are TBD in a future PR.

Tasks:
- [x] Adjust package lock timeout to correspond to value used in the demo
- [x] Adjust database lock timeout to reduce contention on startup of concurrent `spack install <spec>` calls
- [x] Replace (test) package's `install: pass` methods with file creation since post-install `sanity_check_prefix` will otherwise error out with `Install failed .. Nothing was installed!`
- [x] Resolve remaining existing test failures
- [x] Respond to alalazo's initial feedback
- [x] Remove `bin/demo-locks.py`
- [x] Add new tests to address new coverage issues
- [x] Replace built-in package's `def install(..): pass` to "install" something (i.e., only `apple-libunwind`)
- [x] Increase code coverage
2020-02-13 | hipsycl: new package and new 'sycl' virtual package (#14051) | Federico Ficarelli | 1 file, -0/+1
2019-11-14 | Config option to allow gpg warning suppression (#13743) | Greg Becker | 1 file, -0/+8
Add a configuration option to suppress gpg warnings during binary package verification. This only suppresses warnings: a gpg failure will still fail the install. This allows users who have already explicitly trusted the gpg key they are using to avoid seeing repeated warnings that it is self-signed.
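A sketch assuming the option is a `suppress_gpg_warnings` flag under the `config` section; the key name is inferred from the description above, not quoted from the PR:

```yaml
# config.yaml (sketch; key name assumed)
config:
  # silence gpg warnings (e.g. for self-signed keys you already trust)
  # during binary package verification; failures still fail the install
  suppress_gpg_warnings: true
```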
2019-10-23 | Users can configure use of RPATH or RUNPATH (#9168) | Massimiliano Culpo | 1 file, -0/+5

Add a new entry in `config.yaml`:

    config:
      shared_linking: 'rpath'

If this variable is set to `rpath` (the default) Spack will set RPATH in ELF binaries. If set to `runpath` it will set RUNPATH.

Details:
* The Spack cc wrapper explicitly adds `--disable-new-dtags` when linking.
* The cc wrapper also strips `--enable-new-dtags` from the compile line when disabling (and vice versa).
* We specifically do *not* add any dtags flags on macOS, which uses Mach-O binaries, not ELF, so there's no RUNPATH.
2019-10-05 | Consistently support pkg-config files in share subdirectory (#12838) | Michael Kuhn | 1 file, -0/+2
While the build environment already takes share/pkgconfig into account, the generated module files etc. only consider lib/pkgconfig and lib64/pkgconfig.
2019-10-02 | Remove support for generating dotkit files (#11986) | Massimiliano Culpo | 2 files, -2/+0

Dotkit is being used only at a few sites and has been deprecated on new machines. This commit removes all the code that provides support for the generation of dotkit module files.

A new validator named "deprecatedProperties" has been added to the jsonschema validators. It makes it possible to prompt a warning message, or exit with an error, if a property that has been marked as deprecated is encountered.
* Removed references to dotkit in the docs
* Removed references to dotkit in setup-env-test.sh
* Added a unit test for the 'deprecatedProperties' schema validator
2019-09-13 | Remove CombinatorialSpecSet in favor of environments + stacks | Scott Wittenburg | 1 file, -16/+0