UPPMAX Introduction (transcript)
Objectives

What is UPPMAX and what it provides
Projects at UPPMAX
How to access UPPMAX
Jobs and queuing systems
How to use the resources of UPPMAX
How to use the resources of UPPMAX in a good way! Efficiency!
UPPMAX

Uppsala Multidisciplinary Center for Advanced Computational Science
http://www.uppmax.uu.se

2 (3) computer clusters
● Rackham: ~ 500 nodes à 20 cores (128, 256 & 1024 GB RAM) + Snowy (old Milou): ~ 200 nodes à 16 cores (128, 256 & 512 GB RAM)
● Bianca: 200 nodes à 16 cores (128, 256 & 512 GB RAM) - virtual cluster

>12 PB fast parallel storage
Bioinformatics software
UPPMAX: The basic structure of a supercomputer

node = computer
Login nodes
Compute and Storage
Objectives

What is UPPMAX and what it provides
Projects at UPPMAX
How to access UPPMAX
Jobs and queuing systems
How to use the resources of UPPMAX
How to use the resources of UPPMAX in a good way! Efficiency!
Projects

UPPMAX provides its resources via projects
  compute (core-hours/month) and storage (GB)
Projects

your project

Two separate projects:
SNIC compute: cluster Rackham
  2 000 to 100 000+ core-hours/month, 128 GB storage
UPPMAX Storage: storage system CREX
  1 to 100+ TB storage
Objectives

What is UPPMAX and what it provides
Projects at UPPMAX
How to access UPPMAX
Jobs and queuing systems
How to use the resources of UPPMAX
How to use the resources of UPPMAX in a good way! Efficiency!
How to access UPPMAX
SSH to a cluster
ssh -Y your_username@cluster_name.uppmax.uu.se
How to access UPPMAX
SSH to Rackham
How to use UPPMAX

Login nodes: use them to access UPPMAX
  never use them to run jobs
  don't even use them to do "quick stuff"

Calculation nodes: do your work here - testing and running
  not accessible directly
  SLURM (the queueing system) gives you access
Objectives

What is UPPMAX and what it provides
Projects at UPPMAX
How to access UPPMAX
Jobs and queuing systems
How to use the resources of UPPMAX
How to use the resources of UPPMAX in a good way! Efficiency!
Job

Job (computing), from Wikipedia:

In computing, a job is a unit of work or unit of execution (that performs said work). A component of a job (as a unit of work) is called a task or a step (if sequential, as in a job stream). As a unit of execution, a job may be concretely identified with a single process, which may in turn have subprocesses (child processes; the process corresponding to the job being the parent process) which perform the tasks or steps that comprise the work of the job; or with a process group; or with an abstract reference to a process or process group, as in Unix job control.
Job
Read/open files
Do something with the data
Print/save output
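Concretely, the three steps above can be sketched as a tiny shell "job" (the file names here are placeholders invented for the demo):

```shell
# A minimal "job": open a file, do something with the data, save the output.
printf 'line1\nline2\nline3\n' > input.txt   # input file, created here for the demo
wc -l < input.txt > output.txt               # do something with the data: count the lines
cat output.txt                               # print/save output
```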
Job

The basic structure of a supercomputer: parallel computing
Not one super fast job, but many jobs running in parallel
Queue System

More users than nodes: need for a queue
  nodes - hundreds
  users - thousands
SLURM

workload manager / job queue / batch queue / job scheduler

SLURM (Simple Linux Utility for Resource Management)
free and open source
Objectives

What is UPPMAX and what it provides
Projects at UPPMAX
How to access UPPMAX
Jobs and queuing systems
How to use the resources of UPPMAX
How to use the resources of UPPMAX in a good way! Efficiency!
SLURM

1) Ask for resources and run jobs manually
   For testing, possibly small jobs, specific programs needing user input while running

2) Write a script and submit it to SLURM
   Submits an automated job to the job queue; it runs when it's your turn
SLURM

1) Ask for resources and run jobs manually
   submit a request for resources
   ssh to a calculation node
   run programs
SLURM

1) Ask for resources and run jobs manually

salloc -A g2019020 -p core -n 1 -t 00:05:00

salloc - the command
mandatory job parameters:
  -A - project ID (who "pays")
  -p - node or core (the type of resource)
  -n - number of nodes/cores
  -t - time
SLURM

-A  this course's project: g2019020
    you have to be a member

-p  node or core; 1 node = 20 cores
    1 hour of walltime on a full node = 20 core-hours

-n  number of cores (default value = 1)
-N  number of nodes

-t  format hh:mm:ss; default value = 7-00:00:00
    jobs are killed when the time limit is reached - always overestimate by ~ 50%
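The core-hour accounting above is plain multiplication; a quick sketch, assuming the 20-cores-per-node figure stated for Rackham:

```shell
# core-hours charged = cores booked x walltime hours
cores=20    # booking -p node on Rackham reserves all 20 cores
hours=1
echo "$((cores * hours)) core-hours"
```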
SLURM

Information about your jobs:
squeue -u <user>
SLURM
SSH to a calculation node (from a login node)
ssh -Y <node_name>
SLURM

1a) Ask for a node/core and run jobs manually

interactive - books a node and connects you to it:
interactive -A g2019020 -p core -n 1 -t 00:05:00
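The manual workflow from the slides above strung together as one session sketch. This only works on an UPPMAX login node, and the node name is made up for illustration:

```shell
# Sketch of a manual SLURM session (UPPMAX only; r123 is a hypothetical node name)
salloc -A g2019020 -p core -n 1 -t 00:05:00   # 1) submit a request for resources
squeue -u "$USER"                             #    see which node the job was given
ssh -Y r123                                   # 2) ssh to that calculation node
# 3) run your programs on the node, then exit before the time runs out
```

interactive does the salloc and ssh steps in one command.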
SLURM
2) Write a script and submit it to SLURM
put all commands in a text file - script
tell SLURM to run the script (use the same job parameters)
SLURM
2) Write a script and submit it to SLURM
put all commands in a text file - script
job parameters
tasks to be done
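Putting those two parts together, a job script might look like this minimal sketch (the project ID comes from the slides; the module and the command are placeholders):

```shell
#!/bin/bash
#SBATCH -A g2019020        # job parameters: project ID
#SBATCH -p core            # type of resource
#SBATCH -n 1               # number of cores
#SBATCH -t 00:05:00        # walltime hh:mm:ss

# tasks to be done
module load bioinfo-tools  # placeholder module
echo "running on $(hostname)"
```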
SLURM

2) Write a script and submit it to SLURM

tell SLURM to run the script (use the same job parameters):
sbatch test.sbatch

sbatch - the command
test.sbatch - the name of the script file

or give the job parameters on the command line:
sbatch -A g2019020 -p core -n 1 -t 00:05:00 test.sbatch
SLURM Output

Prints to a file instead of the terminal:
slurm-<job id>.out
Squeue

Shows information about your jobs:
squeue -u <user>
jobinfo -u <user>
Queue System

SLURM user guide:
  go to http://www.uppmax.uu.se/
  click User Guides (left-hand side menu)
  click Slurm user guide

or just google "uppmax slurm user guide"
link: http://www.uppmax.uu.se/support/user-guides/slurm-user-guide/
UPPMAX Software

100+ programs installed
Managed by a module system:
  installed, but hidden
  manually loaded before use

module avail - lists all available modules
module load <module name> - loads the module
module unload <module name> - unloads the module
module list - lists loaded modules
module spider <word> - searches all modules for 'word'
UPPMAX Software

Most bioinfo programs are hidden under bioinfo-tools
Load bioinfo-tools first, then the program's module
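As a sketch of that pattern (only meaningful on an UPPMAX cluster; samtools is just an illustrative tool name, not necessarily the module you need):

```shell
module load bioinfo-tools   # unhides the bioinformatics modules
module load samtools        # then load the actual program module
samtools --version          # the tool is now on your PATH
```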
UPPMAX Commands
uquota
UPPMAX Commands
projinfo
UPPMAX Commands
projplot -A <proj-id> (-h for more options)
Objectives

What is UPPMAX and what it provides
Projects at UPPMAX
How to access UPPMAX
Jobs and queuing systems
How to use the resources of UPPMAX
How to use the resources of UPPMAX in a good way! Efficiency!
UPPMAX Commands
Plot efficiency
$ jobstats -p -A <projid>
Take-home messages

● The difference between user account and project
● Login nodes are not for running jobs
● SLURM gives you access to the compute nodes when you specify a project that you are a member of
● Use interactive for quick jobs and for testing
● Do not ask for more cores/nodes than your job can actually use
● A job script usually consists of:
    job settings (-A, -p, -n, -t)
    modules to be loaded
    bash code to perform actions
    running a program, or multiple programs
Laboratory time! (again)
https://scilifelab.github.io/courses/ngsintro/1911/labs/uppmax-intro