path: root/share
Age · Commit message · Author · Files · Lines (-/+)
2021-06-09 · Update of Flecsi Spackage (#24106) · Robert Pavel · 1 file, -1/+1

* Update of Flecsi Spackage

  Update of the flecsi spackage to reconcile differences between flecsi@1:1.9 and flecsi@2: for future support purposes.

* Removing Unnecessary Conditional

  Removing unused conditional. Initially the plan was to switch based on version in `cmake_args`, but this was not necessary as build system variable names remained mostly the same and conflicts prevent the rest. For the most part, if a variant is there it does not need to check against what version of the code is being built.

* Updated CI To Reconcile Flecsi Changes

  Updated CI to target flecsi@1.4.2, which best matches the previous release version, and reconciled the change in variant name.
2021-06-08 · e4s ci: re-enable veloc builds after recent fixes (#24190) · eugeneswalker · 1 file, -1/+1
2021-05-30 · CI: E4S: enable full E4S (#24011) · eugeneswalker · 1 file, -78/+295

* e4s ci: enable full e4s
* add llvm-amdgpu to list of specs needing an xlarge tagged runner
* comment out qt and qwt because of intermittent build failures
* remove +rocm specs because the rocblas job consistently fails due to infrastructure
2021-05-28 · adding support for export of private gpg key (#22557) · Vanessasaurus · 1 file, -2/+2

This PR allows users to `--export`, `--export-secret`, or both to export GPG keys from Spack. The docs are updated to include a warning that this usually does not need to be done. This addresses an issue brought up in slack, and also represented in #14721.

Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
2021-05-28 · Separable module configuration -- without the bugs this time (#23703) · Greg Becker · 2 files, -5/+5

Currently, module configurations are inconsistent because modulefiles are generated with the configs for the active environment, but are shared among all environments (and spack outside any environment).

This PR fixes that by allowing Spack environments (or other spack config scopes) to define additional sets of modules to generate. Each set of modules can enable either lmod or tcl modules, and contains all of the previously available module configuration. The user defines the name of each module set -- the set configured in Spack by default is named "default", and is the one returned by module manipulation commands in the absence of user intervention.

As part of this change, the module roots configuration moved from the config section to inside each module configuration.

Additionally, it adds a feature that the modulefiles for an environment can be configured to be relative to an environment view rather than the underlying prefix. This will not be enabled by default, as it should only be enabled within an environment and for non-default views constructed with separate projections per-spec.
2021-05-28 · Pipelines: reproducible builds (#22887) · Scott Wittenburg · 3 files, -11/+23

### Overview

The goal of this PR is to make gitlab pipeline builds (especially build failures) more reproducible outside of the pipeline environment. The two key changes here which aim to improve reproducibility are:

1. Produce a `spack.lock` during pipeline generation which is passed to child jobs via artifacts. This concretized environment is used both by generated child jobs as well as uploaded as an artifact to be used when reproducing the build locally.
2. In the `spack ci rebuild` command, if a spec needs to be rebuilt from source, do this by generating and running an `install.sh` shell script which is then also uploaded as a job artifact to be run during local reproduction.

To make it easier to take advantage of improved build reproducibility, this PR also adds a new subcommand, `spack ci reproduce-build`, which, given a url to job artifacts:

- fetches and unzips the job artifacts to a local directory
- looks for the generated pipeline yaml and parses it to find details about the job to reproduce
- attempts to provide a copy of the same version of spack used in the ci build
- if the ci build used a docker image, the command prints a `docker run` command you can run to get an interactive shell for reproducing the build

#### Some highlights

One consequence of this change will be much smaller pipeline yaml files. By encoding the concrete environment in a `spack.lock` and passing to child jobs via artifacts, we will no longer need to encode the concrete root of each spec and write it into the job variables, greatly reducing the size of the generated pipeline yaml.

Additionally `spack ci rebuild` output (stdout/stderr) is no longer internally redirected to a log file, so job output will appear directly in the gitlab job trace. With debug logging turned on, this often results in log files getting truncated because they exceed the maximum amount of log output gitlab allows. If this is a problem, you still have the option to `tee` command output to a file within the artifacts directory, as now each generated job exposes a `user_data` directory as an artifact, which you can fill with whatever you want in your custom job scripts.

There are some changes to be aware of in how pipelines should be set up after this PR:

#### Pipeline generation

Because the pipeline generation job now writes a `spack.lock` artifact to be consumed by generated downstream jobs, `spack ci generate` takes a new option `--artifacts-root`, inside which it creates a `concrete_env` directory to place the lockfile. This artifacts root directory is also where the `user_data` directory will live, in case you want to generate any custom artifacts. If you do not provide `--artifacts-root`, the default is for it to create a `jobs_scratch_dir` within your `CI_PROJECT_DIR` (a gitlab predefined environment variable) or whatever is your current working directory if that variable isn't set.

Here's the diff of the PR testing `.gitlab-ci.yml` taking advantage of the new option:

```
$ git diff develop..pipelines-reproducible-builds share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
diff --git a/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml b/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
index 579d7b56f3..0247803a30 100644
--- a/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
+++ b/share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml
@@ -28,10 +28,11 @@ default:
     - cd share/spack/gitlab/cloud_pipelines/stacks/${SPACK_CI_STACK_NAME}
     - spack env activate --without-view .
     - spack ci generate --check-index-only
+      --artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
       --output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
   artifacts:
     paths:
-      - "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
+      - "${CI_PROJECT_DIR}/jobs_scratch_dir"
   tags: ["spack", "public", "medium", "x86_64"]
   interruptible: true
```

Notice how we replaced the specific pointer to the generated pipeline file with its containing folder, the same folder we passed as `--artifacts-root`. This way anything in that directory (the generated pipeline yaml, as well as the concrete environment directory containing the `spack.lock`) will be uploaded as an artifact and available to the downstream jobs.

#### Rebuild jobs

Rebuild jobs now must activate the concrete environment created by `spack ci generate` and provided via artifacts. When the pipeline is generated, a directory called `concrete_environment` is created within the artifacts root directory, and this is where the `spack.lock` file is written to be passed to the generated rebuild jobs. The artifacts root directory can be specified using the `--artifacts-root` option to `spack ci generate`, otherwise, it is assumed to be `$CI_PROJECT_DIR`.

The directory containing the concrete environment files (`spack.yaml` and `spack.lock`) is then passed to generated child jobs via the `SPACK_CONCRETE_ENV_DIR` variable in the generated pipeline yaml file.

When you don't provide custom `script` sections in your `mappings` within the `gitlab-ci` section of your `spack.yaml`, the default behavior of rebuild jobs is now to change into `SPACK_CONCRETE_ENV_DIR` and activate that environment. If you do provide custom rebuild scripts in your `spack.yaml`, be aware those scripts should do the same thing: assume `SPACK_CONCRETE_ENV_DIR` contains the concretized environment to activate. No other changes to existing custom rebuild scripts should be required as a result of this PR.

As mentioned above, one key change made in this PR is the generation of the `install.sh` script by the rebuild jobs, as that same script is both run by the CI rebuild job as well as exported as an artifact to aid in subsequent attempts to reproduce the build outside of CI. The generated `install.sh` script contains only a single `spack install` command with arguments computed by `spack ci rebuild`.

If the install fails, the job trace in gitlab will contain instructions on how to reproduce the build locally:

```
To reproduce this build locally, run:

  spack ci reproduce-build https://gitlab.next.spack.io/api/v4/projects/7/jobs/240607/artifacts [--working-dir <dir>]

If this project does not have public pipelines, you will need to first:

  export GITLAB_PRIVATE_TOKEN=<generated_token>

... then follow the printed instructions.
```

When run locally, the `spack ci reproduce-build` command shown above will download and process the job artifacts from gitlab, then print out instructions you can copy-paste to run a local reproducer of the CI job.

This PR includes a few other changes to the way pipelines work, see the documentation on pipelines for more details.

This PR relies on:

- ~~#23194 to be able to refer to uninstalled specs by DAG hash~~ EDIT: that is going to take longer to come to fruition, so for now, we will continue to install specs represented by a concrete `spec.yaml` file on disk.
- [x] #22657 to support installing a single spec already present in the active, concrete environment
2021-05-25 · adding json export for spack blame (#23417) · Vanessasaurus · 1 file, -1/+1

I would like to be able to export (and save and then load programmatically) spack blame metadata, so this commit adds a `spack blame --json` argument, along with developer docs for it.

Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
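For illustration, a minimal sketch of the new flag (the package name here is just an example):

```console
# Human-readable blame table, as before
$ spack blame zlib
# Same metadata as JSON, suitable for saving and loading programmatically
$ spack blame --json zlib > zlib-blame.json
```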
2021-05-25 · first set of work to allow for saving local results with spack monitor (#23804) · Vanessasaurus · 1 file, -2/+2

This work will come in two phases. The first here is to allow saving of a local result with spack monitor, and the second will add a spack monitor command so the user can do `spack monitor upload`.

Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
2021-05-19 · adding support to tag a build · vsoch · 1 file, -2/+2

This will be useful to run multiple build experiments and organize by name.

Signed-off-by: vsoch <vsoch@users.noreply.github.com>
2021-05-17 · Revert "Separable module configurations (#22588)" (#23674) · Harmen Stoppels · 1 file, -3/+3
This reverts commit cefbe48c89209dc3df654795644973b1885cdea4.
2021-05-14 · Separable module configurations (#22588) · Greg Becker · 1 file, -3/+3

Currently, module configurations are inconsistent because modulefiles are generated with the configs for the active environment, but are shared among all environments (and spack outside any environment).

This PR fixes that by allowing Spack environments (or other spack config scopes) to define additional sets of modules to generate. Each set of modules can enable either lmod or tcl modules, and contains all of the previously available module configuration. The user defines the name of each module set -- the set configured in Spack by default is named "default", and is the one returned by module manipulation commands in the absence of user intervention.

As part of this change, the module roots configuration moved from the `config` section to inside each module configuration.

Additionally, it adds a feature that the modulefiles for an environment can be configured to be relative to an environment view rather than the underlying prefix. This will not be enabled by default, as it should only be enabled within an environment and for non-default views constructed with separate projections per-spec.

TODO:
- [x] code changes to support multiple module sets
- [x] code changes to support modules relative to a view
- [x] Tests for multiple module configurations
- [x] Tests for modules relative to a view
- [x] Backwards compatibility for module roots from config section
- [x] Backwards compatibility for default module set without the name specified
- [x] Tests for backwards compatibility
2021-05-07 · install cmd: --no-add in an env installs w/out concretize and add · Scott Wittenburg · 1 file, -1/+1

In an active concretized environment, support installing one or more cli specs only if they are already present in the environment. The `--no-add` option is the default for root specs, but optional for dependency specs. I.e. if you `spack install <depspec>` in an environment, the dependency-only spec `depspec` will be added as a root of the environment before being installed. In addition, `spack install --no-add <spec>` fails if it does not find an unambiguous match for `spec`.
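A minimal sketch of the difference, assuming an active environment and an illustrative spec name:

```console
# Default: if hdf5 does not already match a spec in the environment,
# it is added as a root of the environment and then installed
$ spack install hdf5
# With --no-add: only installs if hdf5 unambiguously matches a spec
# already present in the concretized environment, otherwise it fails
$ spack install --no-add hdf5
```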
2021-04-27 · Dyninst: add elfutils versioning (#19648) · Tim Haines · 1 file, -1/+1
2021-04-26 · Dyninst: Add dependencies for v11.0.0 (#23121) · Tim Haines · 1 file, -1/+1
Also update the mpileaks unit test to avoid a conflict on CentOS 6 where Dyninst >=11.0.0 no longer builds due to a compiler version conflict.
2021-04-26 · ci: Remove leftover duplicate gitlab yaml (#23248) · Chuck Atkins · 1 file, -125/+0
2021-04-26 · ci: Generalize the GitLab CI pipeline yaml (#23225) · Chuck Atkins · 3 files, -34/+224

* ci: Generalize the GitLab CI pipeline yaml
* ci: Rename cloud_e4s_pipelines to the more general cloud_pipelines
2021-04-15 · Merge pull request #21930 from vsoch/add/spack-monitor · Vanessasaurus · 1 file, -3/+25

This provides initial support for [spack monitor](https://github.com/spack/spack-monitor), a web application that stores information and analysis about Spack installations. Spack can now contact a monitor server and upload analysis -- even after a build is already done.

Specifically, this adds:
- [x] monitor options for `spack install`
- [x] `spack analyze` command
- [x] hook architecture for analyzers
- [x] separate build logs (in addition to the existing combined log)
- [x] docs for spack analyze
- [x] reworked developer docs, with hook docs
- [x] analyzers for:
  - [x] config args
  - [x] environment variables
  - [x] installed files
  - [x] libabigail

There is a lot more information in the docs contained in this PR, so consult those for full details on this feature. Additional tests will be added in a future PR.
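As a rough sketch of the workflow described above; the exact option and subcommand spellings are not given in this log entry, so treat `--monitor`, `analyze run`, and the package name as assumptions and consult the docs in the PR:

```console
# Install while reporting build metadata to a spack monitor server (option name assumed)
$ spack install --monitor zlib
# Run the analyzers over an already-installed spec (subcommand layout assumed)
$ spack analyze run zlib
```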
2021-04-14 · update tutorial public key · Gregory Becker · 1 file, -35/+26
2021-03-29 · Add "spack [cd|location] --source-dir" (#22321) · Harmen Stoppels · 1 file, -2/+2
2021-03-19 · CI: drastically reduce the number of tests for package only PRs (#22410) · Massimiliano Culpo · 1 file, -1/+1

PRs that change only package recipes will only run tests under "package_sanity.py" and without coverage. This should result in a huge drop in the cpu-time spent in CI for most PRs.
2021-03-18 · Tab to spaces (#22362) · Harmen Stoppels · 2 files, -6/+6
2021-03-16 · Speed-up CI by reorganizing tests (#22247) · Massimiliano Culpo · 2 files, -1/+10
* unit tests: mark slow tests as "maybeslow"

  This commit also removes the "network" marker and marks every "network" test as "maybeslow". Tests marked as db are maintained, but they're not slow anymore.

* GA: require style tests to pass before running unit-tests
* GA: make MacOS unit tests fail fast
* GA: move all unit tests into the same workflow, run style tests as a prerequisite

  All the unit tests have been moved into the same workflow so that a single run of the dorny/paths-filter action can be used to ask for coverage based on the files that have been changed in a PR. The basic idea is that for PRs that introduce only changes to packages coverage is not necessary, resulting in a faster execution of the tests. Also, for package only PRs slow unit tests are skipped. Finally, MacOS and linux unit tests are now conditional on style tests passing, meaning that e.g. we won't waste a MacOS worker if we know that the PR has flake8 issues.

* Addressed review comments
* Skipping slow tests on MacOS for package only recipes
* QA: make tests on changes correct before merging
2021-03-15 · Expand relative dev paths in environment files (#22045) · Harmen Stoppels · 1 file, -1/+1

* Rewrite relative dev_spec paths internally to absolute paths in case of relocation of the environment file
* Test relative paths for dev_path in environments
* Add a --keep-relative flag to spack env create

  This ensures that relative paths of develop paths are not expanded to absolute paths when initializing the environment in a different location from the spack.yaml init file.
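A small sketch of the new flag; the environment name and init-file path are only illustrative:

```console
# Create an environment from an existing spack.yaml; relative develop paths
# are rewritten to absolute paths (the default behavior described above)
$ spack env create myenv /path/to/spack.yaml
# Keep develop paths relative instead of expanding them
$ spack env create --keep-relative myenv /path/to/spack.yaml
```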
2021-03-15 · Propagate --test= for environments (#22040) · Harmen Stoppels · 1 file, -1/+1

* Propagate --test= for environments
* Improve help comment for spack concretize --test flag
* Add tests for --test with environments
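For illustration, assuming the flag accepts the same `root`/`all` values as `spack install --test=`:

```console
# Concretize the active environment with build-time tests enabled for root specs
$ spack concretize --test root
# Or enable tests for every spec in the DAG
$ spack concretize --test all
```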
2021-03-13 · adding spack -c to set one-off config arguments (#22251) · Vanessasaurus · 1 file, -1/+1

This pull request will add the ability for a user to add a configuration argument on the fly, on the command line, e.g.:

```bash
$ spack -c config:install_tree:root:/path/to/config.yaml -c packages:all:compiler:[gcc] list --help
```

The above command doesn't do anything (I'm just getting help for list) but you can imagine having another root of packages, and updating it on the fly for a command (something I'd like to do in the near future!).

I've moved the logic for config_add that used to be in spack/cmd/config.py into spack/config.py proper, and now both main.py (where spack commands live) and spack/cmd/config.py use these functions. I only needed spack config add, so I didn't move the others. We can move the others if they are also needed in multiple places.
2021-03-10 · fix setup-env.sh on older linux zsh (#21721) · Danny McClanahan · 2 files, -2/+2
2021-03-07 · spack python: add --path option (#22006) · Todd Gamblin · 1 file, -1/+1
This adds a `--path` option to `spack python` that shows the `python` interpreter that Spack is using, e.g.:

```console
$ spack python --path
/Users/gamblin2/src/spack/var/spack/environments/default/.spack-env/view/bin/python
```

This is useful for debugging, and we can ask users to run it to understand what python Spack is picking up via preferences in `bin/spack` and via the `SPACK_PYTHON` environment variable introduced in #21222.
2021-03-07 · add `spack test list --all` (#22032) · Todd Gamblin · 1 file, -1/+1

`spack test list` will show you which *installed* packages can be tested, but it won't show you which packages have tests.

- [x] add `spack test list --all` to show which packages have test methods
- [x] update `has_test_method()` to handle package instances *and* package classes
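In short, the two forms compare as follows (output depends on what is installed):

```console
# Only installed packages that can be tested
$ spack test list
# All packages that define a test method, installed or not
$ spack test list --all
```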
2021-03-03 · Bootstrap clingo from sources (#21446) · Massimiliano Culpo · 1 file, -2/+2

* Allow the bootstrapping of clingo from sources

  Allow python builds with system python as external for MacOS.

* Ensure consistent configuration when bootstrapping clingo

  This commit uses context managers to ensure we can bootstrap clingo using a consistent configuration regardless of the use case being managed.

* Github actions: test clingo with bootstrapping from sources
* Add command to inspect and clean the bootstrap store

  Prevent users from setting the install tree root to the bootstrap store.

* clingo: documented how to bootstrap from sources

Co-authored-by: Gregory Becker <becker33@llnl.gov>
2021-02-27 · Temporarily reduce pr stack size (#21998) · Scott Wittenburg · 2 files, -73/+73

Gitlab pipelines fixes:
* add arch tag to avoid picking up UO power9 runners
* temporarily reduce PR workload
2021-02-24 · Config prefer upstream (#21487) · Paul Ferrell · 1 file, -1/+5
This allows for quickly configuring a spack install/env to use upstream packages by default. This is particularly important when upstreaming from a set of officially supported spack installs on a production cluster. By configuring such that package preferences match the upstream, you ensure maximal reuse of existing package installations.
2021-02-23 · Gitlab fix pr workflow (#21786) · Scott Wittenburg · 2 files, -5/+4

Fixes for gitlab pipelines:
* Remove accidentally retained testing branch name
* Generate pipeline w/out debug mode
* Make jobs interruptible for auto-cancel pending
* Work around concretization conflicts
2021-02-23 · Drop compiler variables from spack load (#21699) · Harmen Stoppels · 1 file, -1/+0

Drops:
* C_INCLUDE_PATH
* CPLUS_INCLUDE_PATH
* LIBRARY_PATH
* INCLUDE

We already decided to use C_INCLUDE_PATH, CPLUS_INCLUDE_PATH, INCLUDE over CPATH here: https://github.com/spack/spack/pull/14749

However, none of these flags apply to Fortran on Linux. So for consistency it seems better to make the user use -I and -L flags by hand or through pkgconfig.
2021-02-18 · Pipelines: Move PR testing stacks (currently only E4S) into spack (#21714) · Scott Wittenburg · 3 files, -24/+212
2021-02-18 · Fixed conditional in match_flag for fish env (#21679) · Severin Strobl · 1 file, -1/+1
An attempt to fix the conditional was made in 5a771bc8ad, yet this broke the conditional completely.
2021-02-16 · Pipelines: Temporary buildcache storage (#21474) · Scott Wittenburg · 1 file, -1/+5

Before this change, in pipeline environments where runners do not have access to persistent shared file-system storage, the only way to pass buildcaches to dependents in later stages was by using the "enable-artifacts-buildcache" flag in the gitlab-ci section of the spack.yaml.

This change supports a second mechanism, named "temporary-storage-url-prefix", which can be provided instead of the "enable-artifacts-buildcache" feature, but the two cannot be used at the same time. If this prefix is provided (only "file://" and "s3://" urls are supported), the gitlab "CI_PIPELINE_ID" will be appended to it to create a url for a mirror where pipeline jobs will write buildcache entries for use by jobs in subsequent stages. If this prefix is provided, a cleanup job will be generated to run after all the rebuild jobs have finished that will delete the contents of the temporary mirror. To support this behavior a new mirror sub-command has been added: "spack mirror destroy" which can take either a mirror name or url.

This change also fixes a bug in generation of the "needs" list for each job. Each job's "needs" list is supposed to only contain direct dependencies for scheduling purposes, unless "enable-artifacts-buildcache" is specified. Only in that case are the needs lists supposed to contain all transitive dependencies. This change fixes a bug that caused the needs lists to always contain all transitive dependencies, regardless of whether or not "enable-artifacts-buildcache" was specified.
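A hedged sketch of the cleanup step described above; the exact argument spelling for `spack mirror destroy` (name vs. url, flag vs. positional) is an assumption here, as is the bucket path:

```console
# Delete the contents of the temporary per-pipeline mirror by url (argument form assumed)
$ spack mirror destroy --mirror-url "s3://my-bucket/temporary-storage/${CI_PIPELINE_ID}"
```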
2021-02-16 · Add RHEL8 Universal Base Image with platform-python to CI unit tests (#21655) · Chuck Atkins · 1 file, -1/+1
2021-02-16 · Pipelines: DAG Pruning (#20435) · Scott Wittenburg · 1 file, -2/+6
During the pipeline generation staging process we check each spec against all configured mirrors to determine whether it is up to date on any of the mirrors. By default, and with the --prune-dag argument to "spack ci generate", any spec already up to date on at least one remote mirror is omitted from the generated pipeline. To generate jobs for up to date specs instead of omitting them, use the --no-prune-dag argument.

To speed up the pipeline generation process, pass the --check-index-only argument. This will cause spack to check only remote buildcache indices and avoid directly fetching any spec.yaml files from mirrors. The drawback is that if the remote buildcache index is out of date, spec rebuild jobs may be scheduled unnecessarily.

This change removes the final-stage-rebuild-index block from the gitlab-ci section of spack.yaml. Now rebuilding the buildcache index of the mirror specified in the spack.yaml is the default, unless "rebuild-index: False" is set.

Spack assigns the generated rebuild-index job runner attributes from an optional new "service-job-attributes" block, which is also used as the source of runner attributes for another generated non-build job, a no-op job, which spack generates to avoid gitlab errors when DAG pruning results in empty pipelines.
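Putting those options together, a generation step might look like this (the output path is only illustrative):

```console
# Prune specs already up to date on a mirror (the default), checking only buildcache indices
$ spack ci generate --check-index-only --output-file pipeline.yml
# Generate jobs even for specs that are already up to date on a mirror
$ spack ci generate --no-prune-dag --output-file pipeline.yml
```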
2021-02-12 · Introduce a SPACK_PYTHON environment variable (#21222) · Chuck Atkins · 3 files, -1/+34
The SPACK_PYTHON environment variable can be set to a python interpreter to be used by the spack command. This allows the spack command itself to use a consistent and separate interpreter from whatever python might be used for package building.
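For example (the interpreter path is only illustrative):

```console
# Make the spack command itself run under a specific interpreter,
# independent of whatever python is used for package building
$ export SPACK_PYTHON=/usr/bin/python3
$ spack --version
```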
2021-02-09 · Procedure to deprecate old versions of software (#20767) · Adam J. Stewart · 1 file, -6/+6

* Procedure to deprecate old versions of software
* Add documentation
* Fix bug in logic
* Update tab completion
* Deprecate legacy packages
* Deprecate old mxnet as well
* More explicit docs
2021-02-04 · spack external find: allow to search by tags (#21407) · Massimiliano Culpo · 1 file, -3/+3

This commit adds an option to the `external find` command that allows it to search by tags. In this way groups of executables with common purposes can be grouped under a single name and a simple command can be used to detect all of them. As an example, this introduces the 'build-tools' tag to search for common development tools on a system.
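A brief sketch, assuming the option is spelled `--tag` (the exact spelling is not given in the message above):

```console
# Detect all external executables tagged as common development tools
$ spack external find --tag build-tools
```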
2021-01-27 · spack setup: remove the command for v0.17.0 (#20277) · Adam J. Stewart · 1 file, -10/+1

spack setup was deprecated in 0.16 and will be removed in 0.17. Follow-up to #18240.
2021-01-08 · Remove ascent gitlab trigger (#20755) · Jamie Finney · 1 file, -9/+0

Remove the ORNL Ascent gitlab trigger. CI will now be done internally via periodic builds.
2021-01-05 · spack python: allow use of IPython (#20329) · Vanessasaurus · 1 file, -1/+1
This adds a -i option to "spack python" which allows use of the IPython interpreter; it can be used with "spack python -i ipython". This assumes it is available in the Python instance used to run Spack (i.e. that you can "import IPython").
2021-01-02 · copyrights: update all files with license headers for 2021 · Todd Gamblin · 23 files, -23/+23

- [x] add `concretize.lp`, `spack.yaml`, etc. to licensed files
- [x] update all licensed files to say 2013-2021 using `spack license update-copyright-year`
- [x] appease mypy with some additions to package.py that were needed for oneapi.py
2021-01-02 · commands: add `spack license update-copyright-year` · Todd Gamblin · 1 file, -3/+7

This adds a new subcommand to `spack license` that automatically updates the copyright year in files that should have a license header.

- [x] add `spack license update-copyright-year` command
- [x] add test
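Usage is a single command run from a spack checkout:

```console
# Rewrite the copyright year range in every file that should carry a license header
$ spack license update-copyright-year
```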
2020-12-29 · PythonPackage: url -> pypi (#20610) · Adam J. Stewart · 1 file, -0/+1

* Convert all `url` attributes in `PythonPackage`s to `pypi` attributes
* add `pypi =` to flake8 exceptions
2020-12-22 · add mypy to style checks; rename `spack flake8` to `spack style` (#20384) · Tom Scogland · 3 files, -4/+17

I lost my mind a bit after getting the completion stuff working and decided to get Mypy working for spack as well. This adds a `.mypy.ini` that checks all of the spack and llnl modules, though not yet packages, and fixes all of the identified missing types and type issues for the spack library.

In addition to these changes, this includes:

* rename `spack flake8` to `spack style`

  Aliases flake8 to style, and just runs flake8 as before, but with a warning. The style command runs both `flake8` and `mypy`, in sequence. Added --no-<tool> options to turn off one or the other, they are on by default. Fixed two issues caught by the tools.

* stub typing module for python2.x

  We don't support typing in Spack for python 2.x. To allow 2.x to support `import typing` and `from typing import ...` without a try/except dance to support old versions, this adds a stub module *just* for python 2.x. Doing it this way means we can only reliably use all type hints in python3.7+, and `.mypy.ini` has been updated to reflect that.

* add non-default black check to spack style

  This is a first step to requiring black. It doesn't enforce it by default, but it will check it if requested. Currently enforcing the line length of 79 since that's what flake8 requires, but it's a bit odd for a black formatted project to be quite that narrow. All settings are in the style command since spack has no pyproject.toml and I don't want to add one until more discussion happens. Also re-format `style.py` since it no longer passed the black style check with the new length.

* use style check in github action

  Update the style and docs action to use `spack style`, adding in mypy and black to the action even if it isn't running black right now.
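For example, per the options described above:

```console
# Run both flake8 and mypy (the default)
$ spack style
# Turn off one tool or the other
$ spack style --no-mypy
$ spack style --no-flake8
```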
2020-12-22 · Refactor flake8 handling and tool compatibility (#20376) · Tom Scogland · 2 files, -1/+147

This PR does three related things to try to improve developer tooling quality of life:

1. Adds new options to `.flake8` so it applies the rules of both `.flake8` and `.flake_package` based on paths in the repository.
2. Adds a re-factoring of the `spack flake8` logic into a flake8 plugin so using flake8 directly, or through editor or language server integration, only reports errors that `spack flake8` would.
3. Allows star import of `spack.pkgkit` in packages, since this is now the thing that needs to be imported for completion to work correctly in package files, it's nice to be able to do that.

I'm sorely tempted to sed over the whole repository and put `from spack.pkgkit import *` in every package, but at least being allowed to do it on a per-package basis helps.

As an example of what the result of this is:

```
~/Workspace/Projects/spack/spack develop* ⇣
❯ flake8 --format=pylint ./var/spack/repos/builtin/packages/kripke/package.py
./var/spack/repos/builtin/packages/kripke/package.py:6: [F403] 'from spack.pkgkit import *' used; unable to detect undefined names
./var/spack/repos/builtin/packages/kripke/package.py:25: [E501] line too long (88 > 79 characters)

~/Workspace/Projects/spack/spack refactor-flake8* 1
❯ flake8 --format=spack ./var/spack/repos/builtin/packages/kripke/package.py

~/Workspace/Projects/spack/spack refactor-flake8*
❯ flake8 ./var/spack/repos/builtin/packages/kripke/package.py
```

* qa/flake8: update .flake8, spack formatter plugin

  Adds:
  * Modern flake8 settings for per-path/glob error ignores, allows packages to use the same `.flake8` as the rest of spack
  * A spack formatter plugin to flake8 that implements the behavior of `spack flake8` for direct invocations. Makes integration with developer tooling nicer, linting with flake8 reports only errors that `spack flake8` would report. Using pyls and pyls-flake8, or any other non-format-dependent flake8 integration, now works with spack's rules.

* qa/flake8: allow star import of spack.pkgkit

  To get working completion of directives and spack components it's necessary to import the contents of spack.pkgkit. At the moment doing this makes flake8 displeased. For now, allow spack.pkgkit and spack both, next step is to ban spack * and require spack.pkgkit *.

* first cut at refactoring spack flake8

  This version still copies all of the files to be checked as before, and some other things that probably aren't necessary, but it relies on the spack formatter plugin to implement the ignore logic.

* keep flake8 from rejecting itself
* remove separate packages flake8 config
* fix failures from too many files

  I ran into this in the PR converting pkgkit to std. The solution in that branch does not work in all cases as it turns out, and all the workarounds I tried to use generated configs to get a single invocation of flake8 with a filename option to work failed. It's an astonishingly frustrating config option. Regardless, this removes all temporary file creation from the command and relies on the plugin instead. To work around the huge number of files in spack and still allow the command to control what gets checked, it scans files in batches of 100. This is a completely arbitrary number but was chosen to be safely under common line-length limits. One side-effect of this is that every 100 files the command will produce output, rather than only at the end, which doesn't seem like a terrible thing.
2020-12-18 · minimal zsh completion (#20253) · Tom Scogland · 5 files, -35/+74

Since zsh can load bash completion files natively, seems reasonable to just turn this on. The only changes are to switch from `type -t` which zsh doesn't support to using `type` with a regex and adding a new arm to the sourcing of the completions to allow it to work for zsh as well as bash. Could use more bash/dash/etc testing probably, but everything I've thought to try has worked so far.

Notes:
* unit-test zsh support, fix issues

  Specifically fixed word splitting in completion-test, use a different method to apply sh emulation to zsh loaded bash completion, and fixed an incompatibility in regex operator quoting requirements.

* compinit now ignores insecure directories

  Completion isn't meant to be enabled in non-interactive environments, so by default compinit will ask the user if they want to ignore insecure directories or load them anyway. To pass the spack unit tests in GH actions, this prompt must be disabled, so ignore explicitly until a better solution can be found.

* debug functions test also requires bash emulation

  COMP_WORDS is a bash-ism that zsh doesn't natively support, turn on emulation for just that section of tests to allow the comparison to work. Does not change the behavior of the functions themselves since they are already pinned to sh emulation elsewhere.

* propagate change to .in file
* fix comment and update script based on .in
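A minimal sketch of enabling the completions from an interactive zsh, assuming the usual script locations under the spack checkout ($SPACK_ROOT is only illustrative):

```console
# Sourcing the environment setup also wires up shell integration for zsh
$ . $SPACK_ROOT/share/spack/setup-env.sh
# Or load the bash completion file directly; zsh reads it natively
$ . $SPACK_ROOT/share/spack/spack-completion.bash
```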