Please note: If you have absolutely no time at all, at least read this one page. Please make sure the following points are met.
In order to meet the University's demand for computing resources with large numbers of CPU cores and large amounts of memory, LUIS operates a cluster system as part of the service Scientific Computing. All scientists of Leibniz University can use the cluster system for their research free of charge.
Fig. 1: Sketch of the main user-relevant components of the cluster system
Resources of the cluster system are largely DFG major instrumentation. Therefore, the rules for DFG major instrumentation apply when using the cluster system. The leaders of your IT (EDV) project are responsible for complying with the DFG rules.
In case no project exists yet: a project is the framework within which you will carry out your work on the cluster. To apply for a project, use the form ORG.BEN 4. In this form, you specify the purpose of the computations you want to carry out on the cluster system as well as the type of work, e.g. a bachelor's or master's thesis or any other project of an institute. By signing the form, you agree to be bound by the terms of use that accompany the application and to take responsibility for the accounts you create.
In order to apply for a project, you need to have the formal authority to sign. Students can only get an account while working at an institute.
Once the project has been approved, the project manager can log in to the BIAS website and create accounts (usernames). Usernames should reflect the real name of the user, if possible.
Note that user accounting on BIAS is not part of the service Scientific Computing and thus not part of the cluster system.
Parts of the cluster system are DFG major instrumentation; thus, the rules for DFG major instrumentation apply when using the cluster system. Furthermore, software licenses are valid for research and teaching only. Accordingly, the cluster system must only be used for research and teaching activities.
The cluster system contains the compute resources listed on the page Computing Hardware. To access these nodes, you submit so-called batch jobs, either by submitting a text file containing the job description or by configuring a job in the OpenOnDemand web portal we provide. A minimal job description is sketched below.
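A job description is a plain shell script whose #SBATCH lines tell SLURM which resources the job needs. The following is only a minimal sketch: the resource values, the module name and the program name my_program are illustrative assumptions, not values specific to our system; see the SLURM chapter of this documentation for the partitions and limits that actually apply.

  #!/bin/bash
  #SBATCH --job-name=test_job     # name shown in the queue
  #SBATCH --nodes=1               # number of compute nodes
  #SBATCH --ntasks=1              # number of tasks (processes)
  #SBATCH --cpus-per-task=4       # CPU cores per task
  #SBATCH --mem=8G                # memory per node
  #SBATCH --time=01:00:00         # wall time limit (hh:mm:ss)
  #SBATCH --output=job_%j.out     # output file, %j is replaced by the job ID

  # Load the software your program needs; the module name is only an example,
  # check "module avail" on the cluster for the modules actually installed.
  module load GCC

  # Start the actual computation; ./my_program is a placeholder for your executable.
  srun ./my_program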
IMPORTANT: If you simply log in to the cluster and run your programs directly on the login nodes (login.cluster.uni-hannover.de), you will not only use a very small fraction of the available computing power, you will also experience and cause all kinds of problems, depending on what you and other users are doing on the same node. Instead, submit your work as a batch job via SLURM (see the corresponding chapter of this documentation) or use the OpenOnDemand web portal at https://login.cluster.uni-hannover.de. Using the login nodes for computations is not how you should do your work; it would be like using a small hand-cart to transport a large amount of material over a long distance. It can be done, but it is very, very inefficient. To remind those who did not read this introduction and to protect other users, tasks that use more than 1800 CPU seconds on a login node are automatically killed. The power of the cluster lies in the computing capabilities behind the login nodes, so please learn how to use them. It is, of course, okay to test something small on a login node, but you should never run a real computation there.
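Assuming the job script sketched above has been saved as my_job.sh (a hypothetical file name), submitting and monitoring it from a login node could look like this:

  sbatch my_job.sh      # submit the job; SLURM prints the assigned job ID
  squeue -u $USER       # list your pending and running jobs
  scancel 1234567       # cancel a job by its ID if necessary

Only the short sbatch call runs on the login node; the computation itself is executed on the compute nodes, so the 1800 CPU second limit does not affect it.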
As a side note: institutes can ask to have their own hardware integrated into the cluster system in a service called Forschungscluster-Housing (FCH). Hardware in that service is reserved for the respective institute, usually on working days between 8 a.m. and 8 p.m. During the night and on weekends, all cluster users have access to these resources, which means that jobs requesting less than 12 hours of wall time have a good chance of running on an FCH node during the night (jobs requesting less than 60 hours can run on such a node over the weekend). So if your job is directed during off-hours to a machine whose name does not fit the naming scheme of our main clusters, it is most likely an FCH node. For information about placing your institute's hardware into FCH, please get in touch with us.
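For example, requesting a wall time just below the 12-hour threshold mentioned above makes a job eligible to start on FCH nodes overnight; the exact value here is only an illustration:

  #SBATCH --time=11:30:00   # below 12 h, so the job may also be placed on an FCH node at night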