## README.md (+3 −0)

```diff
@@ -18,6 +18,9 @@ User authentication is done by _public-key cryptography_ using the key establish
 After a successful login, a short information message about the system will be displayed. You can also follow the documentation below to find more details.
+
+The cluster runs on Arch Linux and should be fully equipped for common scientific computations. If you need any additional software to be installed, please ask the administrator.
+
 ## Documentation

 - [Hardware overview](./doc/hardware-overview.md)
```

## doc/jobs.md (+10 −18)

````diff
@@ -25,12 +25,9 @@ Not many options are needed for a basic serial job:
 ```bash
 #!/bin/bash
-# job name (default is the name of this file)
-#SBATCH --job-name=example
-# file name for stdout/stderr (%x will be replaced with the job name, %j with the jobid)
-#SBATCH --output=log.%x.job_%j
-# maximum wall time allocated for the job (D-H:MM:SS)
-#SBATCH --time=0:01:00
+#SBATCH --job-name=example      # job name (default is the name of this file)
+#SBATCH --output=log.%x.job_%j  # file name for stdout/stderr (%x will be replaced with the job name, %j with the jobid)
+#SBATCH --time=0:01:00          # maximum wall time allocated for the job (D-H:MM:SS)
 #SBATCH --partition=gpXY        # partition/queue name for the job submission
 #SBATCH --ntasks=1              # number of tasks/processes
````

````diff
@@ -141,12 +138,9 @@ This can be achieved by exporting the `OMP_NUM_THREADS` according to the Slurm c
 ```bash
 #!/bin/bash
-# job name (default is the name of this file)
-#SBATCH --job-name=example-openmp
-# file name for stdout/stderr (%x will be replaced with the job name, %j with the jobid)
-#SBATCH --output=log.%x.job_%j
-# maximum wall time allocated for the job (D-H:MM:SS)
-#SBATCH --time=0:01:00
+#SBATCH --job-name=example-omp  # job name (default is the name of this file)
+#SBATCH --output=log.%x.job_%j  # file name for stdout/stderr (%x will be replaced with the job name, %j with the jobid)
+#SBATCH --time=0:01:00          # maximum wall time allocated for the job (D-H:MM:SS)
 #SBATCH --partition=gpXY        # partition/queue name for the job submission
 #SBATCH --ntasks=1              # number of tasks/processes
````

````diff
@@ -178,12 +172,9 @@ For example:
 ```bash
 #!/bin/bash
-# job name (default is the name of this file)
-#SBATCH --job-name=example-mpi
-# file name for stdout/stderr (%x will be replaced with the job name, %j with the jobid)
-#SBATCH --output=log.%x.job_%j
-# maximum wall time allocated for the job (D-H:MM:SS)
-#SBATCH --time=0:01:00
+#SBATCH --job-name=example-mpi  # job name (default is the name of this file)
+#SBATCH --output=log.%x.job_%j  # file name for stdout/stderr (%x will be replaced with the job name, %j with the jobid)
+#SBATCH --time=0:01:00          # maximum wall time allocated for the job (D-H:MM:SS)
 #SBATCH --partition=gpXY        # partition/queue name for the job submission
````

```diff
@@ -219,4 +210,5 @@ TODO:
 - monitoring jobs - https://hpc.nih.gov/docs/userguide.html#monitor
 - deleting jobs - https://hpc.nih.gov/docs/userguide.html#delete
 - job states - https://hpc.nih.gov/docs/userguide.html#states
+  `sacct --starttime "now-1weeks"`
 - modifying jobs after submission - https://hpc.nih.gov/docs/userguide.html#modify_job
```
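The second `doc/jobs.md` hunk sits under the sentence about exporting `OMP_NUM_THREADS` according to the Slurm configuration. A minimal sketch of that export, assuming the standard Slurm environment variable `SLURM_CPUS_PER_TASK` (the value `4` below is only a stand-in for a real allocation, since the diff does not show the job's body):

```shell
# Stand-in for a Slurm allocation; inside a real job, Slurm sets this
# variable itself based on the requested --cpus-per-task.
SLURM_CPUS_PER_TASK=4

# Give OpenMP exactly the CPUs Slurm allocated, falling back to 1 thread
# when the variable is unset (e.g. when testing outside a job).
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"
echo "OMP_NUM_THREADS=$OMP_NUM_THREADS"
```

Deriving the thread count from the allocation keeps the script correct when the `--cpus-per-task` request changes, instead of hard-coding a number that can silently oversubscribe the node.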
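Applied together, the `doc/jobs.md` hunks leave the serial example reading as below. This is a sketch of the post-change file: the directives are taken from the diff, while the trailing `srun ./my_program` payload is a placeholder assumption, since the diff does not show the job's actual command.

```shell
#!/bin/bash
#SBATCH --job-name=example      # job name (default is the name of this file)
#SBATCH --output=log.%x.job_%j  # file name for stdout/stderr (%x will be replaced with the job name, %j with the jobid)
#SBATCH --time=0:01:00          # maximum wall time allocated for the job (D-H:MM:SS)
#SBATCH --partition=gpXY        # partition/queue name for the job submission
#SBATCH --ntasks=1              # number of tasks/processes

# Placeholder payload; substitute the real command the job should run.
srun ./my_program
```

Such a script would be submitted with `sbatch <script-name>`; the inline-comment style keeps each `#SBATCH` option and its explanation on one line, so the option list stays scannable.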