===== Getting started with the Sleep Revolution Slurm cluster =====
**Slurm** is a workload manager and job scheduler. The cluster is built from compute nodes: one login node, which also acts as the master node, and several worker nodes.\\
The login node is called **cheetah.sleep.ru.is** (''130.208.209.30'').\\
The cluster is a Linux environment; connect to it with SSH.\\
\\
Open a terminal and log in with ''ssh <username>@cheetah.sleep.ru.is''\\
Those who use Rivendell ''130.208.209.30'' (and the other sleep servers) can also connect with any editor that supports SSH, such as [[https://code.visualstudio.com/|VS Code]].\\

==== About the user environment ====
Each user has a home folder; the path is ''/mount/home/''.\\
The home folder contains two files: a ''readMe.lst'' file and a template Slurm script file (''exampleJob.sh'').\\
  - The first step for users is to make sure their working environment fits their needs.
  - For those using Python we recommend, for ease of use, a Python virtual environment. Users can then install a different Python version and all the modules needed for their work.
  - Alternatively, users can install Python modules locally without a virtual environment, using ''pip3 install --user''.
  - The cluster gives users access to memory, CPUs and GPUs. The script file contains the instructions that tell the Slurm cluster what to do.
  - Users can change the Slurm instructions in the script file, but that is not necessary for it to work; just append your command to it.
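As a sketch of that last step, the template with a user command appended might look like this (''prufa.py'' is the example file name used later on this page; the ''#SBATCH'' values are the template defaults):

<code bash>
#!/bin/bash
#SBATCH --account=staff
#SBATCH --job-name=sleepJob
#SBATCH --gpus-per-node=1
#SBATCH --mem-per-cpu=2G
#SBATCH --output=Slurm.log

# the user's own command, appended below the template directives
python3 prufa.py
</code>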
  - An example of how to execute a job on the cluster: first create a file __prufa.py__ in your home directory and append the line ''python3 prufa.py'' to ''exampleJob.sh''. Then submit the job to the queue with **sbatch exampleJob.sh**.

Your template script file (''exampleJob.sh'') looks like this:
<code bash>
#!/bin/bash
#SBATCH --account=staff
#SBATCH --job-name=sleepJob
#SBATCH --gpus-per-node=1
#SBATCH --mem-per-cpu=2G
#SBATCH --output=Slurm.log
</code>

=== Useful commands ===
^ Command ^ Description ^
| ''sbatch'' | Submit a job |
| ''sacct'' | Show my jobs |
| ''squeue'' | Show all jobs in the queue |
| ''sinfo'' | Show information about the cluster |
| ''srun'' | Run a job interactively |
| ''scancel'' | Kill a job by its job ID |

=== Example of Python virtual environment ===
  * Python venv
  * Miniconda
  * Anaconda

[[cluster:usage_tips:advancedSlurm|Advanced tips]]\\
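A minimal sketch of the first option, the built-in ''venv'' module (the environment name ''myenv'' is just an illustration):

<code bash>
# create a virtual environment in your home folder (the name is arbitrary)
python3 -m venv ~/myenv

# activate it; the shell prompt then shows (myenv)
source ~/myenv/bin/activate

# "python" and "pip" now point into the environment
which python

# install the modules you need here, e.g.:  pip install numpy

# leave the environment when finished
deactivate
</code>

If a job should run with this environment, the ''source ~/myenv/bin/activate'' line can be added to ''exampleJob.sh'' before your command.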