Commit e1fb6413 authored by Jakub Klinkovský

doc: update to reflect cluster configuration changes

parent 0e0f1c67
+5 −5
@@ -8,18 +8,18 @@
[Intel Core i9-9900KF](https://ark.intel.com/content/www/us/en/ark/products/190887/intel-core-i9-9900kf-processor-16m-cache-up-to-5-00-ghz.html)
  (8 cores @ 3.6-5.0 GHz, 16 MiB cache)
- RAM:
-  2× 16 GiB DDR4 2666 MT/s
+  4× 16 GiB DDR4 2666 MT/s CL16
- GPU:
[Nvidia GeForce GTX 1080Ti](https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#GeForce_10_series)
  (3584 cores @ 1.62 GHz, 11 GiB GDDR5X, compute capability 6.1)
- Local storage:
    - `/`: 120 GB SSD (KINGSTON SA400S37120G)
-    - `/local/`: 4× 16 TB Seagate Exos X16 (RAID 0)
+    - `/mnt/gp3/`: 4× 16 TB Seagate Exos X16 (RAID 0)

-The `/local/` file system is __not backed up__ and since it is on RAID 0, even __a single drive failure would mean destruction of all data__.
+The `/mnt/gp3/` file system is __not backed up__ and since it is on RAID 0, even __a single drive failure would mean destruction of all data__.
Hence, users are advised not to keep valuable data there, or to make their own backups if needed.

-The `/local/` storage is shared with compute nodes over network.
+The `/mnt/gp3/` storage is shared with the compute nodes over the network.

## Compute nodes (gp[11-14])

@@ -27,7 +27,7 @@ The `/local/` storage is shared with compute nodes over network.
[Intel Core i7-9800X](https://ark.intel.com/content/www/us/en/ark/products/189122/intel-core-i7-9800x-x-series-processor-16-5m-cache-up-to-4-50-ghz.html)
  (8 cores @ 3.8-4.5 GHz, 16 MiB cache)
- RAM:
-  1× 16 GiB DDR4 2666 MT/s CL16
+  4× 16 GiB DDR4 2666 MT/s CL16
- GPU:
[Nvidia GeForce RTX 2070 Super OC](https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#GeForce_20_series)
  (2560 cores @ 1.78 GHz, 8 GiB GDDR6, compute capability 7.5)
+3 −3
@@ -122,7 +122,7 @@ For example, executing a job with `--ntasks-per-node=2`, `--gpus-per-task=1` and
System memory can be allocated as a consumable resource using the `--mem` option:

```bash
-# how much RAM per node can be allocated for the job (default: 2000M, max: 15000M)
+# how much RAM per node can be allocated for the job (default: 2G, max: 60G)
#SBATCH --mem=10G
```

@@ -148,7 +148,7 @@ This can be achieved by exporting the `OMP_NUM_THREADS` according to the Slurm c
#SBATCH --threads-per-core=1    # do not use hyperthreads (i.e. CPUs = physical cores below)
#SBATCH --cpus-per-task=8       # number of CPUs per process

-# how much RAM per node can be allocated for the job (default: 2000M, max: 15000M)
+# how much RAM per node can be allocated for the job (default: 2G, max: 60G)
#SBATCH --mem=10G

# start the job in the directory it was submitted from
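# set the OpenMP thread count from the Slurm allocation, as described above
# (common Slurm/OpenMP pattern; SLURM_CPUS_PER_TASK is set by Slurm when
# --cpus-per-task is specified)
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK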
@@ -186,7 +186,7 @@ For example:
#SBATCH --gpus-per-task=1       # number of GPUs per process
#SBATCH --gpu-bind=single:1     # bind each process to its own GPU (single:<tasks_per_gpu>)

-# how much RAM per node can be allocated for the job (default: 2000M, max: 15000M)
+# how much RAM per node can be allocated for the job (default: 2G, max: 60G)
#SBATCH --mem=10G

# start the job in the directory it was submitted from