!!! note

    This guide is also applicable to other HPC clusters where users need to manage components such as MPI libraries, compilers, and other software through the `module` system.

## Introduction to Spack

A brief introduction to Spack will be added here.

## Setting up Spack
### Connection to a compute node

To make Spack available in your shell session, source its environment setup script:

```{ .sh .copy }
source $HOME/spack/share/spack/setup-env.sh
```

For convenience, this line can be added to your `.bashrc` file so that Spack is automatically available in every new shell session.
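The line above can be appended to `~/.bashrc` with a small guard so it is not added twice. This is a minimal sketch, assuming Spack was cloned to `$HOME/spack` as in this guide:

```{ .sh .copy }
# Append the Spack setup line to ~/.bashrc unless it is already present.
# Assumes the Spack clone lives at $HOME/spack (as used in this guide).
grep -qxF 'source $HOME/spack/share/spack/setup-env.sh' ~/.bashrc 2>/dev/null \
  || echo 'source $HOME/spack/share/spack/setup-env.sh' >> ~/.bashrc
```

Running it a second time is a no-op, so it is safe to include in provisioning scripts.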
### Define System-Provided Packages
77
-
To avoid rebuilding packages already available as modules on your cluster (e.g., compilers, MPI, libraries), create a packages.yaml file under: `$HOME/.spack/packages.yaml`
76
+
77
+
`packages.yaml` A spack configuration file used to tell Spack what tools and versions already exist on the cluster, so Spack can use those instead of building everything again.Create a packages.yaml file under: `$HOME/.spack/packages.yaml`
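
As an illustration, a minimal `packages.yaml` might register a module-provided MPI as an external package. The version and module name below are hypothetical placeholders; replace them with what `module avail` reports on your cluster:

```yaml
packages:
  openmpi:
    externals:
    - spec: openmpi@4.1.5   # hypothetical version; match your cluster's module
      modules:
      - openmpi/4.1.5       # hypothetical module name from `module avail`
    buildable: false        # never rebuild this package; always use the module
```

Setting `buildable: false` forces Spack to use the external installation rather than falling back to a source build.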