Running Code_Saturne on the University clusters
On mace01, the software has been installed in /software/Code_Saturne. There are currently three versions available: v1.3.3, v1.4.0 and v2.0Beta2.
They have been compiled with MED, CGNS and CCM (not v1.3.3) support, using the Intel Fortran compiler v10, the GNU C compiler v3.4.6 and OpenMPI 1.3.
Please note that the graphical user interface is disabled on all three versions. You can still use the XML files generated by the GUI on your desktop machine; just upload them to your DATA directory.
To run Code_Saturne on the MACE01 cluster you need to do the following:
- Update your .profile or .bash_profile file with the corresponding version path:
- v1.3.3: Add
. /software/Code_Saturne/v1.3.3/Noyau/ncs-1.3.3/bin/cs_profile
to your profile file, then execute it by sourcing the profile (e.g. . ~/.bash_profile) or logging out and back in.
- v1.4.0: Add
. /software/Code_Saturne/v1.4.0/Noyau/ncs-1.4.0/bin/cs_profile
to your profile file and execute it in the same way.
- v2.0Beta2: Add /software/Code_Saturne/v2.0B2/opt/ncs-2.0.0-beta2/bin to your PATH. Edit your .bash_profile file and add the line
PATH=$PATH:/software/Code_Saturne/v2.0B2/opt/ncs-2.0.0-beta2/bin
before the export PATH line.
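Put together, the profile additions can be sketched as follows (a ~/.bash_profile fragment; uncomment the line for the version you use, with the paths being the mace01 install locations listed above):

```shell
# Sketch of ~/.bash_profile additions. Use ONE of the following, matching
# the Code_Saturne version you want; paths are the mace01 install locations.

# v1.3.3: source the cs_profile script
# . /software/Code_Saturne/v1.3.3/Noyau/ncs-1.3.3/bin/cs_profile

# v1.4.0: source the cs_profile script
# . /software/Code_Saturne/v1.4.0/Noyau/ncs-1.4.0/bin/cs_profile

# v2.0Beta2: append the binaries directory to PATH
PATH=$PATH:/software/Code_Saturne/v2.0B2/opt/ncs-2.0.0-beta2/bin
export PATH
```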
- Create your study and your case by typing (the command depends on the version):
v1.3.3: cree_sat -etude nom_etude [nom_cas1 nom_cas2 ...] [-noihm]
v1.4.0: cs_create -study study_name [case_name1 case_name2 ...] [-nogui]
v2.0Beta2: cs create -s study_name [case_name1 case_name2 ...] [--nogui]
- Once you have set up your case as you would on your desktop, edit the lance/runcase file to set the queue parameters. On MACE01 the queue system is SGE (Sun Grid Engine). All SGE options are set on lines starting with #$. You will find them under the comment "BATCH FILE (University of Manchester Cluster)" in the lance/runcase file. You will see something like:
# BATCH FILE (University of Manchester Cluster)
# set the name of the job
#$ -N wing_keps
# request between 2 and 4 slots
#$ -pe orte.pe 2-4
#$ -q parallel-R5.q
# Execute the job from the current working directory
# Job output will appear in this directory
# can use -o dirname to redirect stdout
# can use -e dirname to redirect stderr
# Export these environment variables
#$ -v MPI_HOME
The -pe flag sets the parallel environment, followed by the range of slots to request (from 2 to 4 in the example above). The -q flag sets the queue where the job will run. On mace01 there are serial and parallel queues. To see what queues are available, type qconf -sql in a terminal.
Once you have edited the lance/runcase file you must submit it to the queueing system by typing qsub lance (or qsub runcase for v2.0) from the case directory. If everything is OK you will see a submission message after you hit return; it will contain the job id number.
If you want to check the status of your job, use qstat. The output that you would normally get on the screen when running on your desktop machine will be redirected to a file named after the case name and the job id (for example wing_keps.o2853). You will also have an error file and the corresponding parallel output and parallel error files (wing_keps.e2853, wing_keps.po2853, wing_keps.pe2853), which are useful if you have any problems during the simulation.
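The SGE output files follow the pattern job_name.o<job_id> (and .e, .po, .pe for the error and parallel streams). The snippet below just reconstructs the example names, with the job name taken from the #$ -N line above and 2853 as an example job id:

```shell
# Reconstruct the SGE output file names for the example job above.
# JOB_NAME comes from the '#$ -N wing_keps' directive; 2853 is an example id.
JOB_NAME=wing_keps
JOB_ID=2853

echo "stdout:          ${JOB_NAME}.o${JOB_ID}"
echo "stderr:          ${JOB_NAME}.e${JOB_ID}"
echo "parallel stdout: ${JOB_NAME}.po${JOB_ID}"
echo "parallel stderr: ${JOB_NAME}.pe${JOB_ID}"
```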
For more information on the cluster, see the RCS webpage on mace01. Also have a look at their introduction for new users.
Although the performance of Code_Saturne on parallel systems is very good, the mace01 cluster has not been built for massively parallel jobs and therefore does not have a fast interconnect. This limits the communication speed between nodes for all parallel jobs (including Code_Saturne). As a rule of thumb, keep at least 100,000 cells per processor when running parallel jobs to make sure you get good efficiency. If you use a very small number of cells per processor, the communication time can exceed the computation time and the job will take LONGER than the same job run on fewer processors.
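A quick way to apply this rule of thumb is to divide your cell count by 100,000 to get an upper bound on the number of slots to request. The cell count below is only an example value:

```shell
# Upper bound on SGE slots for a mesh, keeping >= 100000 cells per processor.
CELLS=1500000              # example mesh size; replace with your own cell count
MIN_CELLS_PER_PROC=100000

MAX_PROCS=$((CELLS / MIN_CELLS_PER_PROC))
if [ "$MAX_PROCS" -lt 1 ]; then
    MAX_PROCS=1            # always allow at least one processor
fi

echo "Request at most $MAX_PROCS slots (e.g. '#\$ -pe orte.pe 2-$MAX_PROCS')"
```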
On redqueen, the software is installed under /software/Code_Saturne, built with the Intel compilers ifort v10.1 and icc 11.0-074 and OpenMPI 1.3. There are currently three versions available: v1.3.3, v1.4.0 and v2.0Beta2. They have support for CGNS, MED and CCM (not v1.3.3).
Redqueen also has the SGE queueing system, so the setup is similar to the one described for MACE01 above.
The only difference is that the queue names are different. On redqueen you have to pick one of the available queues and change it in the lance/runcase file (usually r2-mace-parallel-12-thin.q or r2-mace-parallel-12.q).
For more information on redqueen, visit the RCS webpage.
On Redqueen2 (former hardware from mace01)
On redqueen2 the software is installed under /software/Code_Saturne/v1.4.0_rq2 and v2.0rc2.
NOTE: For users moving from mace01:
You will need to change your SGE path. Add the SGE_ROOT and PATH settings for redqueen2 to your .profile or .bash_profile file, ending with the line:
export PATH SGE_ROOT
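As a sketch, those lines usually take the following form. Note that the SGE_ROOT path and the architecture subdirectory below are placeholders, not the actual redqueen2 values (which are not reproduced on this page); check with RCS for the real paths:

```shell
# Typical SGE profile setup. Both paths below are PLACEHOLDERS, not the
# actual redqueen2 values; substitute the paths given by the RCS admins.
SGE_ROOT=/usr/local/sge                # placeholder SGE installation path
PATH=$PATH:$SGE_ROOT/bin/lx24-amd64    # placeholder architecture directory
export PATH SGE_ROOT
```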
The queues available here are:
On the ClusterVision machine (usto-oran.me.umist.ac.uk)
ClusterVision has the same SGE queueing system, but on this machine it is loaded via environment modules.
To load the SGE module type :
module load sge
To see the modules that are loaded in your environment type:
module list
To see the available modules type:
module avail
Code_Saturne is installed in
Use the same procedure as described above to load the environment variables (depending on the version).
The only available queue is called
and the corresponding parallel environment is
Please note that this cluster should not be used to run serial jobs.
The Ganglia monitoring system is installed and can be accessed at http://188.8.131.52/ganglia-webfrontend/