Commit history for path: root/share

2020-11-17  concretizer: first working version with pyclingo interface (Todd Gamblin; 1 file, -1/+10)
- [x] Solver now uses the Python interface to clingo
- [x] can extract unsatisfiable cores from problems when things go wrong
- [x] use Python callbacks for versions instead of choice rules (this may ultimately hurt performance)
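
For readers unfamiliar with the clingo Python API, a minimal sketch of the general pattern (a toy two-rule program, not Spack's actual solver code): build a `Control`, add and ground a program, then solve with callbacks; an `on_core` callback can report an unsatisfiable core when solving fails.

```python
# Minimal sketch of driving clingo from Python; toy program, not Spack's solver.
import clingo

ctl = clingo.Control(["--models=0"])             # enumerate all models
ctl.add("base", [], "a :- not b.  b :- not a.")  # a toy logic program
ctl.ground([("base", [])])                       # ground before solving

def on_model(model):
    # invoked once for every stable model found
    print("model:", model.symbols(atoms=True))

result = ctl.solve(on_model=on_model,
                   on_core=lambda core: print("unsat core:", core))
print("satisfiable:", result.satisfiable)
```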

2020-11-16  pipelines: support testing PRs from forks (#19248) (Scott Wittenburg; 2 files, -18/+24)
This change makes improvements to the `spack ci rebuild` command which supports running gitlab pipelines on PRs from forks. Much of this has to do with making sure we can run without the secrets previously required for running gitlab pipelines (e.g. signing key, aws credentials, etc). Specific improvements in this PR:

- Check if spack has precisely one signing key, and use that information as an additional constraint on whether or not we should attempt to sign the binary package we create. Also, if spack does not have at least one public key, add the install option `--no-check-signature`.
- If we are running a pipeline without any profile or environment variables allowing us to push to S3, the pipeline can still successfully create a buildcache in the artifacts and move on: just print a message and move on if pushing either the buildcache entry or cdash id file to the remote mirror fails.
- When we attempt to generate a package or gpg key index on an S3 mirror and there is nothing to index, just print a warning and exit gracefully rather than throw an exception.
- Support the use of PR-specific mirrors for temporary binary pkg storage. This is a quality-of-life improvement for developers, providing a place to store binaries over the lifetime of a PR, so that they must only wait for packages to rebuild from source when they push a new commit that makes a rebuild necessary.
- Replace the two-pass install with a single pass and the new option `--require-full-hash-match`. Doing this also removes the need to save a copy of the spack.yaml to be copied over the one spack rewrites in between the two spack install passes.
- Work around a mirror configuration issue caused by using spack.util.executable to do the package installation.

* Update pipeline trigger jobs for PRs from forks

  Moving to PRs from forks relies on an external synchronization script pushing special branch names. Also, secrets will only live on the spack mirror project, and must be propagated to the E4S project via variables on the trigger jobs. When this change is merged, pipelines will not run until we update the "Custom CI configuration path" in the Gitlab CI settings, as the name of the file has changed to better reflect its purpose.

* The arg to MirrorCollection is used exclusively, so add the main remote mirror to it
* Compute the full hash less frequently
* Add tests covering the index generation error handling code

2020-11-09  commands: add `spack tutorial` command (#19808) (Todd Gamblin; 1 file, -1/+5)
Added a command to set up Spack for our tutorial at https://spack-tutorial.readthedocs.io. The command does some common operations we need first-time users to do. Specifically:

- checks out a particular branch of Spack
- deletes spurious configuration in `~/.spack` that might be left over from prior parts of the tutorial
- adds a mirror and trusts its public key

2020-10-30  Binary caching: use full hashes (#19209) (Scott Wittenburg; 1 file, -1/+1)
* "spack install" now has a "--require-full-hash-match" option, which forces Spack to skip an available binary package when the full hash doesn't match. Normally only a DAG-hash match is required, which ensures equivalent Specs, but does not account for changing logic inside the associated package. * Add a local binary cache index which tracks specs that have a binary install available in a remote binary cache. It is updated with "spack buildcache list" or for a given spec when a binary package is retrieved for that Spec.

2020-10-27  sbang: use bashcov in sbang on Linux (Todd Gamblin; 1 file, -2/+7)

2020-10-23  csh: don't require SPACK_ROOT for sourcing setup-env.csh (#18225) (Todd Gamblin; 5 files, -37/+163)
Don't require SPACK_ROOT for sourcing setup-env.csh and make output more consistent

2020-10-21  shell support: make `which spack` output intelligible (#19256) (Todd Gamblin; 1 file, -6/+17)
Zsh and newer versions of bash have a builtin `which` function that will show you if a command is actually an alias or a function. For functions, the entire function is printed, and our `spack()` function is quite long. Instead of printing out all that, make the `spack()` function a wrapper around `_spack_shell_wrapper()`, and include some no-ops in the definition so that users can see where it was created and where Spack is installed.

Here's what the new output looks like in zsh:

```console
$ which spack
spack () {
    : this is a shell function from: /Users/gamblin2/src/spack/share/spack/setup-env.sh
    : the real spack script is here: /Users/gamblin2/src/spack/bin/spack
    _spack "$@"
    return $?
}
```

Note that `:` is a no-op in Bourne shell; it just discards anything after it on the line. We use it here to embed paths in the function definition (as comments are stripped).

2020-10-18  Add testing option to dev-build command (#17293) (elsagermann; 1 file, -1/+1)
* ADD: testing to dev-build command
* RM: mutually exclusive group for testing in parser
* FIX: test option to subparser and not testing
* ADD: spack-completion.bash
* RM: local devbuildcosmo cmd
* FIX: bad merge --drop-in -b --before options forgotten
* FIX: --test place in spack-completion.bash
* FIX: typo
* FIX: blank line removal
* FIX: trailing white space

Co-authored-by: Elsa Germann <egermann@tsa-ln002.cm.cluster>

2020-10-15  Environments: specify packages for developer builds (#15256) (Greg Becker; 1 file, -1/+19)
* allow environments to specify dev-build packages
* spack develop and spack undevelop commands
* never pull dev-build packages from bincache
* reinstall dev_specs when code has changed; reinstall dependents too
* preserve dev info paths and versions in concretization as special variant
* move install overwrite transaction into installer
* move dev-build argument handling to package.do_install

  Now that specs are dev-aware, package.do_install can add necessary args (keep_stage=True, use_cache=False) to dev builds. This simplifies driving logic in cmd and env._install.

* allow 'any' as wildcard for variants
* spec: allow anonymous dependencies

  Raise an error when constraining by or normalizing an anonymous dep. Refactor concretize_develop to remove dev_build variant. Refactor tests to check for ^dev_path=any instead of +dev_build.

* fix variant class hierarchy

2020-10-05  Revert binary distribution cache manager (#19158) (Scott Wittenburg; 1 file, -1/+1)
This reverts #18359 and follow-on PRs intended to address issues with #18359, because that PR changes the hash of all specs. A future PR will reintroduce the changes.

* Revert "Fix location in spec.yaml where we look for full_hash (#19132)"
* Revert "Fix fetch of spec.yaml files from buildcache (#19101)"
* Revert "Merge pull request #18359 from scottwittenburg/add-binary-distribution-cache-manager"

2020-10-02  Update buildcache key index when we update the package index (#19117) (Scott Wittenburg; 1 file, -1/+1)
This change makes sure that when we run the pipeline job that updates the buildcache package index on the remote mirror, we also update the key index. The public keys corresponding to the signing keys used to sign the packages were pushed to the mirror as a part of creating the buildcache index, so this just ensures those keys are reflected in the key index. Also, this change makes sure the "spack buildcache update-index" job runs even when there may have been pipeline failures, since we would like the index always to reflect the true state of the mirror.

2020-09-30  Merge pull request #18359 from scottwittenburg/add-binary-distribution-cache-manager (Scott Wittenburg; 1 file, -1/+1)

Add binary distribution cache manager

2020-09-25  Streamline key management for build caches (#17792) (Omar Padron; 1 file, -1/+10)
* Rework spack.util.web.list_url()

  list_url() now accepts an optional recursive argument (default: False) for controlling whether to only return files within the prefix url or to return all files whose path starts with the prefix url. This allows for the most efficient implementation for the given prefix url scheme. For example, only recursive queries are supported for S3 prefixes, so the returned list is trimmed down if recursive == False, but the native search is returned as-is when recursive == True. Suitable implementations for each case are also used for file system URLs.

* Switch to using an explicit index for public keys

  Switches to maintaining a build cache's keys under build_cache/_pgp. Within this directory is an index.json file listing all the available keys and a <fingerprint>.pub file for each such key.

  - Adds spack.binary_distribution.generate_key_index()
    - (re)generates a build cache's key index
  - Modifies spack.binary_distribution.build_tarball()
    - if the tarball is signed, automatically pushes the key used for signing along with the tarball
    - if regenerate_index == True, automatically (re)generates the build cache's key index along with the build cache's package index, as in spack.binary_distribution.generate_key_index()
  - Modifies spack.binary_distribution.get_keys()
    - a build cache's key index is now used instead of programmatic listing
  - Adds spack.binary_distribution.push_keys()
    - publishes keys from Spack's keyring to a given list of mirrors
  - Adds new spack subcommand: spack gpg publish
    - publishes keys from Spack's keyring to a given list of mirrors
  - Modifies spack.util.gpg.Gpg.signing_keys()
    - accepts optional positional arguments for filtering the set of keys returned
  - Adds spack.util.gpg.Gpg.public_keys()
    - as spack.util.gpg.Gpg.signing_keys(), except public keys are returned
  - Modifies spack.util.gpg.Gpg.export_keys()
    - fixes an issue where GnuPG would prompt for user input if trying to overwrite an existing file
  - Modifies spack.util.gpg.Gpg.untrust()
    - fixes an issue where GnuPG would fail for inputs that were not key fingerprints
  - Modifies spack.util.web.url_exists()
    - fixes an issue where url_exists() would throw instead of returning False

* rework gpg module/fix error with very long GNUPGHOME dir
* add a shim for functools.cached_property
* handle permission denied error in gpg util
* fix tests/make gpgconf optional if no socket dir is available
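
For example, the reworked helper might be called like this (a sketch: the mirror prefix is hypothetical, and the `recursive` keyword is taken from the description above):

```python
import spack.util.web as web

prefix = "s3://my-mirror/build_cache"  # hypothetical mirror prefix

# Only files directly within the prefix url:
shallow = web.list_url(prefix, recursive=False)

# All files whose paths start with the prefix url (the native mode for S3):
everything = web.list_url(prefix, recursive=True)
```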

2020-09-23  OLCF Ascent gitlab ci trigger: pass SPACK_REF (#18875) (eugeneswalker; 1 file, -0/+3)

2020-09-18  trigger ascent e4s pipeline on merge to spack develop (#18655) (Shahzeb Siddiqui; 1 file, -0/+6)
* trigger ascent e4s pipeline on merge to spack develop
* change pipeline name: ecpcitest/e4s is the pipeline that will be triggered for a merge on develop; it's the E4S use-case

2020-09-14  Make sure each develop pipeline tests associated commit (Scott Wittenburg; 1 file, -1/+1)

2020-09-14  Provide your own script, before_script, and after_script (Scott Wittenburg; 1 file, -1/+1)

2020-09-10  Bugfix for fish support: overly zealous arg matching (#18528) (Johannes Blaschke; 1 file, -18/+59)
* bugfix for issue 18369
* fix typo

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

2020-09-08  commands: update help for `spack install --yes-to-all` (#18367) (Richarda Butler; 1 file, -1/+1)
`spack install --yes-to-all` doesn't actually make the build non-interactive, but that is why people typically use it. This documents that you must also specify `--no-checksum` for a fully non-interactive build.

2020-09-05  Make spack environment configurations writable from spack external and spack compiler find (#18165) (Robert Blake; 1 file, -1/+1)

* spack config: default modification scope can be an environment

  The previous model was that environments are the highest priority config scope for config reading operations, but were not considered for config writing operations. Now, the active environment is the highest priority config scope for both reading and writing operations. Now spack config add, spack external find and spack compiler set environment configuration in the environment by default if an environment is active. This is a change in default behavior for these routines, but better matches the mental model for an environment taking precedence over the user's default config file.

* add scope argument to 'spack external find' to choose a non-default scope
* Increase testing for config modifications on environments

Co-authored-by: Gregory Becker <becker33@llnl.gov>

2020-09-04  Rely on E4S project variable for SPACK_REPO (Scott Wittenburg; 1 file, -2/+0)

2020-09-02  Add new RubyPackage build system base class (#18199) (Adam J. Stewart; 1 file, -1/+5)
* Add new RubyPackage build system base class
* Ruby: add spack external find support
* Add build tests for RubyPackage

2020-08-10  Simplify the detection protocol for packages (Massimiliano Culpo; 1 file, -1/+5)
Packages can implement `determine_version` to support detection of external instances of a package. This is generally easier than implementing `determine_spec_details`. The API for `determine_version` is similar: for example, you can return `None` to indicate that an executable is not an instance of a package.

Users may implement a `determine_variants` method for a package. When doing external detection, executables are grouped by version and each group results in a single invocation of `determine_variants` for the associated spec. The method returns a string specifying the variants for the package. The method may additionally return a dictionary representing extra attributes for the package. These will be stored in the spec yaml and can be retrieved from `self.spec.extra_attributes`.

The Spack GCC package has been updated with an implementation of `determine_variants` which adds the following extra attributes to the package: c, cxx, fortran
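
A sketch of what the protocol looks like in a package recipe, based on the description above (the package, its variant, and its extra attributes are hypothetical; a real recipe derives from a Spack base class and would typically use Spack's `Executable` helper rather than `subprocess`):

```python
import re
import subprocess

class Mycc:  # hypothetical package; real recipes derive from a Spack base class
    # patterns used to find candidate executables on PATH
    executables = [r"^mycc$"]

    @classmethod
    def determine_version(cls, exe):
        out = subprocess.run([exe, "--version"],
                             capture_output=True, text=True).stdout
        match = re.search(r"version ([0-9.]+)", out)
        return match.group(1) if match else None  # None: not an instance

    @classmethod
    def determine_variants(cls, exes, version_str):
        variants = "+strip"                    # variant string for the spec
        extra = {"compilers": {"c": exes[0]}}  # stored in spec.extra_attributes
        return variants, extra
```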

2020-08-10  Update packages.yaml format and support configuration updates (Massimiliano Culpo; 2 files, -14/+56)
The YAML config for paths and modules of external packages has changed: the new format allows a single spec to load multiple modules. Spack will automatically convert from the old format when reading the configs (the updates do not add new essential properties, so this change in Spack is backwards-compatible).

With this update, Spack cannot modify existing configs/environments without updating them (e.g. `spack config add` will fail if the configuration is in a format that predates this PR). The user is prompted to do this explicitly and commands are provided. All config scopes can be updated at once. Each environment must be updated one at a time.
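
To make the shape of the change concrete, here is a hedged sketch of the two layouts, parsed as Python only to show their structure; the exact key names are an assumption based on Spack documentation from this era:

```python
import yaml

# Old layout (assumed shape): one module string per external spec.
old_style = yaml.safe_load("""
packages:
  mpich:
    modules:
      mpich@3.3: mpich/3.3
""")

# New layout (assumed shape): each external entry may list several modules.
new_style = yaml.safe_load("""
packages:
  mpich:
    externals:
    - spec: mpich@3.3
      modules: [mpich/3.3, cuda/10.1]
""")
```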

2020-07-31  Move Python 2.6 unit tests to Github Actions (#17279) (Massimiliano Culpo; 1 file, -22/+0)
* Run Python 2.6 unit tests on Github Actions
* Skip url tests on Python 2.6 to reduce waiting times
* Skip foreground/background tests on Python 2.6 to reduce waiting times
* Removed references to Travis in the documentation
* Deleted install_patchelf.sh (can be installed from repo on CentOS 6)

2020-07-27  tutorial: Add boto3 installation to setup script (#17722) (Todd Gamblin; 1 file, -0/+6)

2020-07-27  add tutorial setup script to share/spack (#17705) (Greg Becker; 1 file, -0/+123)
* add tutorial setup script to share/spack
* Add check for Ubuntu 18, fix xvda check, fix apt-get errors
  - now works on t2.micro, t2.small, and m instances
  - apt-get needs retries around it to work

Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>

2020-07-23  add tutorial public key to share/spack/keys dir (#17684) (Greg Becker; 1 file, -0/+38)

2020-07-16  Make the largest layer of the docker image cacheable (#17553) (Harmen Stoppels; 4 files, -56/+56)

2020-07-15  spack containerize: added --fail-fast argument to containerize install (#17533) (Paul; 2 files, -2/+2)

2020-07-08  Buildcache: bindist test without invoking spack compiler wrappers (#15687) (Patrick Gartung; 1 file, -0/+22)
* Buildcache:
  * Try mocking an install of quux, corge and garply using prebuilt binaries
  * Put patchelf install after ccache restore
  * Add script to install patchelf from source so it can be used on Ubuntu:Trusty, which does not have a patchelf package. The script will skip building on macOS.
  * Remove mirror at end of bindist test
  * Add patchelf to Ubuntu build env
  * Revert mock patchelf package to allow other tests to run
* Remove depends_on('patchelf', type='build'), relying instead on a test fixture to ensure patchelf is available
* Call g++ command to build libraries directly during test build
* Install patchelf in before_install stage using apt, unless on Trusty where a build is done
* Add some symbolic links between packages
* Update mock packages to write their own source files
* Create the stage because spec search does not create it any longer
* updates after change of list command arguments
* cleanup after merge
* flake8 fixes

2020-07-08  spack create: ask how many to download (#17373) (Adam J. Stewart; 1 file, -1/+1)

2020-07-06  bugfix: no infinite recursion in setup-env.sh on Cray (Todd Gamblin; 3 files, -0/+29)
On Cray platforms, we rely heavily on the module system to figure out what targets, compilers, etc. are available. This unfortunately means that we shell out to the `module` command as part of platform initialization.

Because we run subcommands in a shell, we can get infinite recursion if `setup-env.sh` and friends are in some init script like `.bashrc`.

This fixes the infinite loop by adding guards around `setup-env.sh`, `setup-env.csh`, and `setup-env.fish`, to prevent recursive initializations of Spack. This is safe because Spack never shells out to itself, so we do not need it to be initialized in subshells.

- [x] add recursion guard around `setup-env.sh`
- [x] add recursion guard around `setup-env.csh`
- [x] add recursion guard around `setup-env.fish`
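
The guard is just a sentinel environment variable that subshells inherit; sketched here in Python for illustration only (the real guards live in the shell scripts themselves, and the variable name below is an assumption):

```python
import os

SENTINEL = "_sp_initializing"  # assumed name; the shell scripts define their own

if os.environ.get(SENTINEL):
    # A parent shell is already initializing Spack; do nothing in subshells.
    raise SystemExit(0)

os.environ[SENTINEL] = "true"  # exported, so child shells see it and bail out
try:
    pass  # ...one-time environment setup would go here...
finally:
    del os.environ[SENTINEL]
```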

2020-07-01  Moved flake8, shell and documentation tests to Github Actions (#17328) (Massimiliano Culpo; 3 files, -14/+10)
* Move flake8 tests to Github Actions
* Move shell tests to Github Actions
* Move documentation build to Github Actions
* Don't run coverage on Python 2.6

  Since we get connection errors consistently on Travis when trying to upload coverage results for Python 2.6, avoid computing coverage entirely to speed up tests.

2020-06-30  Activate environment in container file (#17316) (Glenn Johnson; 2 files, -2/+4)
* Activate environment in container file

  This PR will ensure that the container recipes will build the spack environment by first activating the environment.

* Deactivate environment before environment collection

  For Singularity, the environment must be deactivated before running the command to collect the environment variables. This is because the environment collection uses `spack env activate`.

2020-06-30  Add fish shell support (#9279) (Johannes Blaschke; 4 files, -4/+1128)
* share/spack/setup-env.fish file to setup environment in fish shell
* setup-env.fish testing script
* Update share/spack/setup-env.fish
  Co-Authored-By: Elsa Gonsiorowski, PhD <gonsie@me.com>
* Update share/spack/qa/setup-env-test.fish
  Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* updates completions using `spack commands --update-completion`
* added stderr-nocaret warning
* added fish shell tests to CI system

Co-authored-by: becker33 <becker33@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Elsa Gonsiorowski, PhD <gonsie@me.com>

2020-06-29  remove three commands that have been deprecated since v0.13.0 (#17291) (Greg Becker; 1 file, -28/+1)
* remove three commands that have been deprecated since v0.13.0

2020-06-26  Use json for buildcache index (#15002) (Scott Wittenburg; 1 file, -20/+3)
* Start moving toward a json buildcache index
* Add spec and database index schemas
* Add a schema for buildcache spec.yaml files
* Provide a mode for database class to generate buildcache index
* Update db and ci tests to validate object w/ new schema
* Remove unused temporary upload-s3 command
* Use database class to generate buildcache index
* Do not generate index with each buildcache creation
* Make buildcache index mode into a couple of constructor args to Database class
* Use keyword args for _createtarball
* Parse new json index when we get specs from buildcache

  Now that only one index file per mirror needs to be fetched in order to have all the concrete specs for binaries available on the mirror, we can just fetch and refresh the cached specs every time instead of needing to use the '-f' flag to force re-reading.
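
The consumer side described in the last bullet reduces, roughly, to fetching a single JSON document per mirror; a sketch, where the mirror URL is hypothetical and the index path is an assumption:

```python
import json
from urllib.request import urlopen

mirror_url = "https://mirror.example.com"  # hypothetical mirror

# One fetch per mirror replaces listing every spec file individually.
with urlopen(mirror_url + "/build_cache/index.json") as response:
    index = json.load(response)
```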

2020-06-25  add workaround for gitlab ci needs limit (#17219) (Omar Padron; 1 file, -1/+1)
* add workaround for gitlab ci needs limit
* fix style/address review comments
* convert filter obj to list
* update command completion
* remove dict comprehension
* add workaround tests
* fix sorting issue between disparate types
* add indices to format

2020-06-25  spack config: new subcommands add/remove (#13920) (Greg Becker; 1 file, -1/+28)
spack config add <value>: add the nested value to the specified configuration scope
spack config remove/rm: remove the specified configuration from the relevant scope

2020-06-24  features: Add install failure tracking removal through `spack clean` (#15314) (Tamara Dahlgren; 1 file, -1/+1)
* Add ability to force removal of install failure tracking data through spack clean
* Add clean failures option to packaging guide

2020-06-23  Added support for --fail-fast install option to terminate on first failure (Tamara Dahlgren; 1 file, -1/+1)

2020-06-23  Added unit tests to Github Actions (#16610) (Massimiliano Culpo; 1 file, -12/+3)
* Added unit tests to Github Actions
* Set user e-mail and name for git tests to succeed
* Simplify setup.sh logic
* Replicate Travis script on Github Actions
* Update flags since '.' is not allowed
* Added badge, simplified workflow
* Remove pinning of coverage
* Remove unit tests run on Github Actions from Travis

2020-06-22  Pre ci optimization (#16372) (Omar Padron; 1 file, -1/+1)
* add initial optimization script
* integrate optimization in spack ci
* make optimization opt-in
* fix import error
* flake8 fixes
* update command completion
* work around vermin errors
* fix sphinx errors

2020-06-18  Explicitly install setuptools in docker images (#17143) (Omar Padron; 4 files, -0/+4)

2020-06-16  fix docker image entrypoints (#17105) (Omar Padron; 7 files, -436/+269)
Also removes extraneous prompt and ssh handling logic.

2020-06-05  commands: use a single ThreadPool for `spack versions` (#16749) (Massimiliano Culpo; 1 file, -1/+1)
This fixes a fork bomb in `spack versions`. Recursive generation of pools to scrape URLs in `_spider` was creating large numbers of processes. Instead of recursively creating process pools, we now use a single `ThreadPool` with a concurrency limit.

More on the issue: having ~10 users running `spack versions` at the same time on front-end nodes caused kernel lockup due to the high number of sockets opened (sys-admin reports ~210k distributed over 3 nodes). Users were internal, so they had `ulimit -n` set to ~70k.

The forking behavior could be observed by just running:

    $ spack versions boost

and checking the number of processes spawned. The number of processes per se was not the issue, but each one of them opens a socket, which can stress `iptables`. In the original issue the kernel watchdog was reporting:

    Message from syslogd@login03 at May 19 12:01:30 ...
    kernel:Watchdog CPU:110 Hard LOCKUP

    Message from syslogd@login03 at May 19 12:01:31 ...
    kernel:watchdog: BUG: soft lockup - CPU#110 stuck for 23s! [python3:2756]

    Message from syslogd@login03 at May 19 12:01:31 ...
    kernel:watchdog: BUG: soft lockup - CPU#94 stuck for 22s! [iptables:5603]
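
The shape of the fix, as a sketch (the worker and URLs are illustrative): one bounded `ThreadPool` shared across the whole crawl, instead of a new process pool per recursion level:

```python
from multiprocessing.pool import ThreadPool

def scrape(url):
    # illustrative stand-in for fetching one page of version links
    return url

urls = ["https://example.com/a", "https://example.com/b"]

pool = ThreadPool(processes=16)     # a single pool bounds total concurrency
try:
    pages = pool.map(scrape, urls)  # threads share one process's sockets
finally:
    pool.close()
    pool.join()
```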

2020-06-03  Mirrors: add option to exclude packages from "mirror create" (#14154) (Peter Scheibel; 1 file, -1/+1)
* add an --exclude-file option to 'spack mirror create' which allows a user to specify a file of specs to exclude when creating a mirror. This is anticipated to be useful especially when using the '--all' option
* allow specifying the number of versions when mirroring all packages
* when mirroring all specs within an environment, include dependencies of root specs
* add '--exclude-specs' option to allow user to specify that specs should be excluded on the command line
* add test for excluding specs

2020-06-03  Feature: add option to create view by copying/relocating files (#16480) (Greg Becker; 1 file, -1/+19)
* add subcommand `spack view copy/relocate`
* update bash completions
* add copy/relocate commands to view tests
* allow copied views to be removed

2020-05-14  Pipelines: Support DAG scheduling and dynamic child pipelines (Scott Wittenburg; 2 files, -9/+21)
This change also adds a code path through the spack ci pipelines infrastructure which supports PR testing on the Spack repository. Gitlab pipelines that run as a result of a PR (either creation or pushing to a PR branch) will only verify that the packages in the environment build without error. When the PR branch is merged to develop, another pipeline will run which results in the generated binaries getting pushed to the binary mirror.