Age | Commit message | Author | Files | Lines |
|
* remove three commands that have been deprecated since v0.13.0
|
|
* Start moving toward a json buildcache index
* Add spec and database index schemas
* Add a schema for buildcache spec.yaml files
* Provide a mode for database class to generate buildcache index
* Update db and ci tests to validate object w/ new schema
* Remove unused temporary upload-s3 command
* Use database class to generate buildcache index
* Do not generate index with each buildcache creation
* Make buildcache index mode into a couple of constructor args to Database class
* Use keyword args for _createtarball
* Parse new json index when we get specs from buildcache
Now that only one index file per mirror needs to be fetched in
order to have all the concrete specs for binaries available on the
mirror, we can just fetch and refresh the cached specs every time
instead of needing to use the '-f' flag to force re-reading.
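A rough usage sketch of the resulting workflow (the package name below is just an example):
```console
$ spack buildcache list        # fetches the single index per mirror and refreshes the cached specs
$ spack buildcache list zlib   # no '-f' needed to pick up newly pushed binaries
```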
|
|
* add workaround for gitlab ci needs limit
* fix style/address review comments
* convert filter obj to list
* update command completion
* remove dict comprehension
* add workaround tests
* fix sorting issue between disparate types
* add indices to format
|
|
spack config add <value>: add the given nested value to the specified configuration scope
spack config remove/rm: remove specified configuration from the relevant scope
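A hedged example of the colon-separated path syntax; the scope and key used here are illustrative, not taken from the commit:
```console
$ spack config --scope user add config:debug:true   # add a nested value by its colon-separated path
$ spack config --scope user remove config:debug     # remove it from the same scope
```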
|
|
* Add ability to force removal of install failure tracking data through spack clean
* Add clean failures option to packaging guide
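A minimal sketch of the new clean option; the exact flag spelling is not recorded in this log, so `--failures` is an assumption:
```console
$ spack clean --failures   # assumed flag: remove install failure tracking data
```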
|
|
|
|
* Added unit tests to Github Actions
* Set user e-mail and name for git tests to succeed
* Simplify setup.sh logic
* Replicate Travis script on Github Actions
* Update flags since '.' is not allowed
* Added badge, simplified workflow
* Remove pinning of coverage
* Remove unit tests run on Github Actions from Travis
|
|
* add initial optimization script
* integrate optimization in spack ci
* make optimization opt-in
* fix import error
* flake8 fixes
* update command completion
* work around vermin errors
* fix Sphinx errors
|
|
|
|
Also removes extraneous prompt and ssh handling logic.
|
|
This fixes a fork bomb in `spack versions`. Recursive generation of pools
to scrape URLs in `_spider` was creating large numbers of processes.
Instead of recursively creating process pools, we now use a single
`ThreadPool` with a concurrency limit.
More on the issue: ~10 users running `spack versions` at the same time on
front-end nodes caused a kernel lockup due to the high number of sockets
opened (sys-admins reported ~210k distributed over 3 nodes). Users were
internal, so they had `ulimit -n` set to ~70k.
The forking behavior could be observed by just running:
$ spack versions boost
and checking the number of processes spawned. Number of processes
per se was not the issue, but each one of them opens a socket
which can stress `iptables`.
In the original issue the kernel watchdog was reporting:
Message from syslogd@login03 at May 19 12:01:30 ...
kernel:Watchdog CPU:110 Hard LOCKUP
Message from syslogd@login03 at May 19 12:01:31 ...
kernel:watchdog: BUG: soft lockup - CPU#110 stuck for 23s! [python3:2756]
Message from syslogd@login03 at May 19 12:01:31 ...
kernel:watchdog: BUG: soft lockup - CPU#94 stuck for 22s! [iptables:5603]
|
|
* add an '--exclude-file' option to 'spack mirror create' which allows a user to specify a file of specs to exclude when creating a mirror. This is anticipated to be especially useful when using the '--all' option (see the sketch after this list)
* allow specifying number of versions when mirroring all packages
* when mirroring all specs within an environment, include dependencies of root specs
* add '--exclude-specs' option to allow user to specify that specs should be excluded on the command line
* add test for excluding specs
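A sketch of how these options might be combined; the file name, spec names, and mirror path are placeholders:
```console
$ cat exclude-these.txt    # hypothetical exclude file, one spec per line
cuda
cudnn
$ spack mirror create --all --exclude-file exclude-these.txt -d /path/to/mirror
$ spack mirror create --all --exclude-specs "cuda cudnn" -d /path/to/mirror
```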
|
|
* add subcommand `spack view copy/relocate`
* update bash completions
* add copy/relocate commands to view tests
* allow copied views to be removed
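A hedged usage sketch; the view path and package name are placeholders, and the argument order follows the existing `spack view` subcommands:
```console
$ spack view copy /path/to/myview mypackage      # copy files into the view instead of symlinking
$ spack view remove /path/to/myview mypackage    # copied views can now be removed as well
```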
|
|
This change also adds a code path through the spack ci pipelines
infrastructure which supports PR testing on the Spack repository.
Gitlab pipelines that run as a result of a PR (either its creation or a
push to the PR branch) will only verify that the packages in the
environment build without error. When the PR branch is merged to develop,
another pipeline runs and pushes the generated binaries to the binary
mirror.
|
|
|
|
Modifications:
- [x] Travis now uses `bionic` as a default (`xenial` used for Python 3.5, `trusty` for Python 2.6)
- [x] Shell unit tests have been factored into their own run
- [x] `kcov` is built only for tests that upload coverage results
Overall this shaves 3-4 minutes off each run and adds one additional run of about 3 minutes. For some reason `kcov` v38 fails to forward output when used with the Python unit tests, so v34 is used for those and v38 (latest) for shell testing. Previously we were using v25.
|
|
fixes #15145
This commit removes the outdated `spack bootstrap`
command and any reference to it in the documentation
and unit tests.
|
|
* Non-interactive mode for spack checksum; allow passing 'package@version' to spack checksum
* Flake8 fixes
* Update checksum.py
Fix typo
* Update spack-completion script
* Automatically set non-interactive mode if more than one version passed
* Update lib/spack/spack/cmd/checksum.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Add documentation and update spack-completion
* Flake8
* Rename option
* Update spack-completion
* Update lib/spack/spack/cmd/checksum.py
Co-Authored-By: Adam J. Stewart <ajstewart426@gmail.com>
* Update checksum.py
* Update stage.py
* Update create.py
Use batch mode when adding a new package
Co-authored-by: Ivan Razumov <ivan.razumov@cern.ch>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
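A hedged sketch of the resulting modes; the renamed option is not spelled out in this log, so `--batch` and the package/versions below are assumptions:
```console
$ spack checksum --batch libelf        # assumed flag: run without interactive prompts
$ spack checksum libelf@0.8.13         # 'package@version' can be passed directly
$ spack checksum libelf 0.8.13 0.8.12  # more than one version switches to non-interactive mode automatically
```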
|
|
Add a `spack external find` command that tries to populate
`packages.yaml` with external packages from the user's `$PATH`. This
focuses on finding build dependencies. Currently, support has only been
added for `cmake`.
For a package to be discoverable with `spack external find`, it must define:
* an `executables` class attribute containing a list of
regular expressions that match executable names.
* a `determine_spec_details(prefix, specs_in_prefix)` method
Spack will call `determine_spec_details()` once for each prefix where
executables are found, passing in the path to the prefix and the path to
all found executables. The package is responsible for invoking the
executables and figuring out what type of installation(s) are in the
prefix, and returning one or more specs (each with version, variants or
whatever else the user decides to include in the spec).
The found specs and prefixes will be added to the user's `packages.yaml`
file. Providing the `--not-buildable` option will mark all generated
entries in `packages.yaml` as `buildable: False`
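A rough sketch of the user-facing side; the package name and the follow-up inspection command are illustrative:
```console
$ spack external find --not-buildable cmake   # scan $PATH for cmake installations
$ spack config get packages                   # the detected specs and prefixes land in packages.yaml
```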
|
|
* dev-build: --drop-in <shell>
Add a `--drop-in <shell>` option to `spack dev-build`.
This option will automatically run a
`spack build-env <spec> -- <shell>` at the end of a `dev-build`, e.g.
to quickly drop-and-devel into a build phase of a package.
Example usage:
```
spack dev-build --before cmake --drop-in bash openpmd-api@develop
```
* build_env: drop in unit test
Co-authored-by: Greg Becker <becker33@llnl.gov>
|
|
Add `-b,--before` option to dev-build command to stop before the phase in question.
|
|
Since CMake can't be built with GCC on macOS, choose a
spec that doesn't have CMake in the DAG.
|
|
Since #16132, we've consolidated the setting of FORCE_UNSAFE_CONFIGURE to
`autotools.py`, so we don't need to use it in packages like `coreutils`,
in our commands, or in our container recipes.
- [x] Remove FORCE_UNSAFE_CONFIGURE from packages
- [x] Remove FORCE_UNSAFE_CONFIGURE from container recipes
- [x] Remove FORCE_UNSAFE_CONFIGURE from `spack ci` command
|
|
`DYLD_LIBRARY_PATH` can frequently break builtin macOS software when
pointed at Spack libraries. This is because it takes *higher* precedence
than the default library search paths, which are used by system software.
`DYLD_FALLBACK_LIBRARY_PATH`, on the other hand, takes lower precedence.
At first glance, this might seem bad, because the software installed by
Spack in an environment needs to find *its* libraries, and it should not
use the defaults. However, Spack's installations are always `RPATH`'d,
so they do not have this problem.
`DYLD_FALLBACK_LIBRARY_PATH` is thus useful for things built in an
environment that need to use Spack's libraries, that don't set *their*
RPATHs correctly for whatever reason. We now prefer it to
`DYLD_LIBRARY_PATH` in modules and in environments because it helps a
little bit, and it is much less intrusive.
|
|
* Implemented --first option for "spack load"
* added test for "spack load --first"
Co-authored-by: gragghia <gragghia@localhost.localdomain>
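A hedged sketch with an illustrative package name:
```console
$ spack load openmpi           # may error out if several installed specs match
$ spack load --first openmpi   # load the first matching spec instead
```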
|
|
* Add --version arg to spack python command
* Add `spack debug report` command
|
|
If a user invoked "spack env activate example-henv", Spack would
mistakenly interpret the "-h" from "example-henv" as the "-h" option.
This commit allows users to create and activate environments with
"-h" in the name.
This issue existed for bash shell support as well as csh support, and
this commit addresses both, along with some other unrelated csh
support issues.
|
|
* add --skip-unstable-versions option to 'spack mirror create' which skips sources/resources for packages if their version is not stable (i.e. if they are the head of a git branch rather than a fixed commit)
* '--skip-unstable-versions' should skip all VCS sources/resources, not just those which are not cachable
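A minimal sketch; the mirror path is a placeholder:
```console
$ spack mirror create --all --skip-unstable-versions -d /path/to/mirror
```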
|
|
* Recover coverage from subprocesses during unit tests
|
|
buildcaches. (#15090)
* Make -d directory a required option. Print messages about where buildcaches will be written.
* Add mutually exclusive required options
* spack commands --update-completion
* Apply @opadron's patch
* Update share/spack/spack-completion.bash
* Incorporate @opadron's suggestions
|
|
* add --only option to buildcache create cmd
replaces the --no-deps option
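A hedged sketch; the accepted values shown here ('package' and 'dependencies') and the spec name are assumptions:
```console
$ spack buildcache create --only package -d /path/to/mirror mypackage        # just the package, not its deps (old --no-deps)
$ spack buildcache create --only dependencies -d /path/to/mirror mypackage   # only the dependencies
```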
|
|
buildcaches on linux (#15192)
* Buildcache command: add install option -o/--otherarch
This will allow matching specs from other archs, for example
installing macOS buildcaches on linux hosts.
* spack commands --update-completion
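A minimal sketch with a placeholder spec:
```console
$ spack buildcache install -o mypackage   # also consider binaries built for other architectures
```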
|
|
* spack extensions prints list of extendable packages
* Update tab completion scripts
|
|
|
|
It's often useful to run a module with `python -m`, e.g.:
python -m pyinstrument script.py
Running a python script this way was hard, though, as `spack python` did
not have a similar `-m` option. This PR adds a `-m` option to `spack
python` so that we can do things like this:
spack python -m pyinstrument ./test.py
This makes it easy to write a script that uses a small part of Spack and
then profile it. Previously the easiest way to do this was to write a
custom Spack command, which is often overkill.
|
|
This commit introduces a `--no-check-signature` option for
`spack install` so that unsigned packages can be installed. It is
off by default (signatures required).
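A minimal sketch with a placeholder spec:
```console
$ spack install --no-check-signature mypackage   # accept unsigned binaries from a build cache
```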
|
|
@version. (#14732)
|
|
This PR adds a new command to Spack:
```console
$ spack containerize -h
usage: spack containerize [-h] [--config CONFIG]
creates recipes to build images for different container runtimes
optional arguments:
-h, --help show this help message and exit
--config CONFIG configuration for the container recipe that will be generated
```
which takes an environment with an additional `container` section:
```yaml
spack:
specs:
- gromacs build_type=Release
- mpich
- fftw precision=float
packages:
all:
target: [broadwell]
container:
# Select the format of the recipe e.g. docker,
# singularity or anything else that is currently supported
format: docker
# Select from a valid list of images
base:
image: "ubuntu:18.04"
spack: prerelease
# Additional system packages that are needed at runtime
os_packages:
- libgomp1
```
and turns it into a `Dockerfile` or a Singularity definition file, for instance:
```Dockerfile
# Build stage with Spack pre-installed and ready to be used
FROM spack/ubuntu-bionic:prerelease as builder
# What we want to install and how we want to install it
# is specified in a manifest file (spack.yaml)
RUN mkdir /opt/spack-environment \
&& (echo "spack:" \
&& echo " specs:" \
&& echo " - gromacs build_type=Release" \
&& echo " - mpich" \
&& echo " - fftw precision=float" \
&& echo " packages:" \
&& echo " all:" \
&& echo " target:" \
&& echo " - broadwell" \
&& echo " config:" \
&& echo " install_tree: /opt/software" \
&& echo " concretization: together" \
&& echo " view: /opt/view") > /opt/spack-environment/spack.yaml
# Install the software, remove unnecessary deps and strip executables
RUN cd /opt/spack-environment && spack install && spack autoremove -y
RUN find -L /opt/view/* -type f -exec readlink -f '{}' \; | \
xargs file -i | \
grep 'charset=binary' | \
grep 'x-executable\|x-archive\|x-sharedlib' | \
awk -F: '{print $1}' | xargs strip -s
# Modifications to the environment that are necessary to run
RUN cd /opt/spack-environment && \
spack env activate --sh -d . >> /etc/profile.d/z10_spack_environment.sh
# Bare OS image to run the installed executables
FROM ubuntu:18.04
COPY --from=builder /opt/spack-environment /opt/spack-environment
COPY --from=builder /opt/software /opt/software
COPY --from=builder /opt/view /opt/view
COPY --from=builder /etc/profile.d/z10_spack_environment.sh /etc/profile.d/z10_spack_environment.sh
RUN apt-get -yqq update && apt-get -yqq upgrade \
&& apt-get -yqq install libgomp1 \
&& rm -rf /var/lib/apt/lists/*
ENTRYPOINT ["/bin/bash", "--rcfile", "/etc/profile", "-l"]
```
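A rough sketch of how the command might be driven; the environment path and image tag are placeholders:
```console
$ cd /path/to/my-environment      # directory containing the spack.yaml shown above
$ spack containerize > Dockerfile
$ docker build -t my-spack-image .
```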
|
|
(#14659)
* Limit the number of spec files downloaded to find matches
|
|
* Add spack config list command for tab completion
* Update tab completion scripts
|
|
Instead of another script, this adds a simple argument to `spack
commands` that updates the completion script. Developers can now just
run:
spack commands --update-completion
This should make it simpler for developers to remember to run this
*before* the tests fail. Also, this version tab-completes.
|
|
Previously the `spack load` command was a wrapper around `module load`. This required some bootstrapping of modules to make `spack load` work properly.
With this PR, the `spack` shell function handles the environment modifications necessary to add packages to your user environment. This removes the dependence on environment modules or lmod and removes the requirement to bootstrap spack (beyond using the setup-env scripts).
Included in this PR is support for macOS when using Apple's System Integrity Protection (SIP), which is enabled by default in modern macOS versions. SIP clears the `LD_LIBRARY_PATH` and `DYLD_LIBRARY_PATH` variables on process startup for executables that live in `/usr` (but not `/usr/local`), `/System`, `/bin`, and `/sbin`, among other system locations. Spack cannot know the `LD_LIBRARY_PATH` of the calling process when executed using `/bin/sh` and `/usr/bin/python`. The `spack` shell function now manually forwards these two variables, if they are present, as `SPACK_<VAR>` and recovers those values on startup.
- [x] spack load/unload no longer delegate to modules
- [x] refactor user_environment modification calculations
- [x] update documentation for spack load/unload
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
|
|
This PR adds a `--format=bash` option to `spack commands` to
auto-generate the Bash programmable tab completion script. It can be
extended to work for other shells.
Progress:
- [x] Fix bug in superclass initialization in `ArgparseWriter`
- [x] Refactor `ArgparseWriter` (see below)
- [x] Ensure that output of old `--format` options remains the same
- [x] Add `ArgparseCompletionWriter` and `BashCompletionWriter`
- [x] Add `--aliases` option to add command aliases
- [x] Standardize positional argument names
- [x] Tests for `spack commands --format=bash` coverage
- [x] Tests to make sure `spack-completion.bash` stays up-to-date
- [x] Tests for `spack-completion.bash` coverage
- [x] Speed up `spack-completion.bash` by caching subroutine calls
This PR also necessitates a significant refactoring of
`ArgparseWriter`. Previously, `ArgparseWriter` was mostly a single
`_write` method which handled everything from extracting the information
we care about from the parser to formatting the output. Now, `_write`
only handles recursion, while the information extraction is split into a
separate `parse` method, and the formatting is handled by `format`. This
allows subclasses to completely redefine how the format will appear
without overriding all of `_write`.
Co-Authored-by: Todd Gamblin <tgamblin@llnl.gov>
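A hedged sketch of regenerating the completion script by hand; the output path is illustrative:
```console
$ spack commands --format=bash --aliases > share/spack/spack-completion.bash
```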
|
|
The pathadd function was using setopt to configure zsh for word
splitting, which leaks out of the function and breaks default
functionality in a number of external zsh plugins and packages. This
switches to `emulate -L`, just as the `spack` function does, to keep the
setting local to the function.
|
|
|
|
|
|
Previously, `spack test` automatically passed all of its arguments to
`pytest -k` if no options were provided, and to `pytest` if they were.
`spack test -l` also provided a list of test filenames, but that didn't
really let you narrow down which tests you wanted to run.
Instead of trying to do our own weird thing, this passes `spack test`
args directly to `pytest`, and omits the implicit `-k`. This means we
can now run, e.g.:
```console
$ spack test spec_syntax.py::TestSpecSyntax::test_ambiguous
```
This wasn't possible before, because we'd pass the fully qualified name
to `pytest -k` and get an error.
Because `pytest` doesn't have the greatest ability to list tests, I've
tweaked the `-l`/`--list`, `-L`/`--list-long`, and `-N`/`--list-names`
options to `spack test` so that they help you understand the names
better. You can combine these options with `-k` or other arguments to do
pretty powerful searches.
This one makes it easy to get a list of names so you can run tests in
different orders (something I find useful for debugging `pytest` issues):
```console
$ spack test --list-names -k "spec and concretize"
cmd/env.py::test_concretize_user_specs_together
concretize.py::TestConcretize::test_conflicts_in_spec
concretize.py::TestConcretize::test_find_spec_children
concretize.py::TestConcretize::test_find_spec_none
concretize.py::TestConcretize::test_find_spec_parents
concretize.py::TestConcretize::test_find_spec_self
concretize.py::TestConcretize::test_find_spec_sibling
concretize.py::TestConcretize::test_no_matching_compiler_specs
concretize.py::TestConcretize::test_simultaneous_concretization_of_specs
spec_dag.py::TestSpecDag::test_concretize_deptypes
spec_dag.py::TestSpecDag::test_copy_concretized
```
You can combine any list option with keywords:
```console
$ spack test --list -k microarchitecture
llnl/util/cpu.py modules/lmod.py
```
```console
$ spack test --list-long -k microarchitecture
llnl/util/cpu.py::
test_generic_microarchitecture
modules/lmod.py::TestLmod::
test_only_generic_microarchitectures_in_root
```
Or just list specific files:
```console
$ spack test --list-long cmd/test.py
cmd/test.py::
test_list test_list_names_with_pytest_arg
test_list_long test_list_with_keywords
test_list_long_with_pytest_arg test_list_with_pytest_arg
test_list_names
```
Hopefully this stuff will help with debugging test issues.
- [x] make `spack test` send args directly to `pytest` instead of trying
to do fancy things.
- [x] rework `--list`, `--list-long`, and add `--list-names` to make
searching for tests easier.
- [x] make it possible to mix Spack's list args with `pytest` args
(they're just fancy parsing around `pytest --collect-only`)
- [x] add docs
- [x] add tests
- [x] update spack completion
|
|
|
|
|
|
This PR moves the build smoke tests from Travis CI to Github Actions. As a result, build tests run in parallel with the unit tests and don't hog additional resources on Travis. The workflow will not run if a PR only changes packages in the built-in repository, but it will always run on pushes to develop or master.
* Removed build tests from Travis and passed them to Github Actions
* Store ~/.ccache in Github Actions cache
* Add filters on paths and make sure this workflow doesn't run
* Use paths-ignore and exclude only files in the built-in repo
* Added a badge to README.md
|