#!/bin/sh -e
#############################################################################
### xmessy_mmd: UNIVERSAL RUN-SCRIPT FOR MESSy models
### (Author: Patrick Joeckel, DLR-IPA, 2009-2019) [version 2.54.0]
###
### TYPE xmessy_mmd -h for more information
#############################################################################
###
### NOTES:
### * -e (first line): exit on error (equivalent to "set -e")
### * run/submit this script from where you want to have the log-files
### - best with absolute path from WORKDIR
### * options:
### -h : print help and exit
### -c : clean up (run within WORKDIR)
### (e.g., after crash before init_restart)
#############################################################################
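###
### EXAMPLE USAGE (an illustrative sketch; the submit command depends on the
### batch system configured below, and /path/to is a placeholder):
###   cd $WORKDIR && sbatch /path/to/xmessy_mmd      # SLURM
###   cd $WORKDIR && qsub   /path/to/xmessy_mmd      # SGE / PBS Pro / NQSII
###   cd $WORKDIR && sh     /path/to/xmessy_mmd -c   # clean up after a crash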
###
#############################################################################
### EMBEDDED FLAGS FOR SGE (SUN GRID ENGINE)
### SUBMIT WITH: qsub [-q <queue>] xmessy_mmd
### SYNTAX: \#\$<SPACE>\-
#############################################################################
# ################# shell to use
# #$ -S /bin/sh
# ################# set submit-dir to current dir
# #$ -cwd
# ################# export all environment variables to job-script
# #$ -V
# ################# path and name of the log file
# #$ -o $JOB_NAME.$JOB_ID.log
# ################# join standard out and error stream (y/n) ?
# #$ -j y
# ################# send an email at end of job
# ### #$ -m e
# ################# notify me about pending SIG_STOP and SIG_KILL
# ### #$ -notify
# ################ (activate on grand at MPICH)
# ### #$ -pe mpi 8
# ################ (activate on a*/c* at RZG)
# ### #$ -pe mpich 4
# ### #$ -l h_cpu=01:00:00
# ################ (activate on rio* at RZG)
# ### #$ -pe mvapich2 4
# ################ (activate on tornado at DKRZ)
# ### #$ -pe orte 16
# ################ (activate one (!) block on mpc01 at RZG (12 cores/node))
# ###### serial job
# ### #$ -l h_vmem=4G # (virtual memory; max 8G)
# ### #$ -l h_rt=43200 # (max 43200s = 12 h wall-clock)
# ###### debug job
# #$ -P debug # always explicit
# #$ -l h_vmem=4G # (virtual memory per slot; max 48G/node)
# #$ -l h_rt=1800 # (max 1800s = 30 min wall-clock)
# #$ -pe impi_hydra_debug 12 # max 12 cores (= 1 node)
# ###### production job
# ### #$ -l h_vmem=4G # (virtual memory per slot; max 48G/node)
# ### #$ -l h_rt=43200 # (max 86400s = 24 h wall-clock)
# ### #$ -pe impi_hydra 48 # only multiples of 12 cores; max 192
#############################################################################
###
#############################################################################
### EMBEDDED FLAGS FOR PBS Pro
### SUBMIT WITH: qsub [-q <queue>] xmessy_mmd
### SYNTAX: \#\P\B\S<SPACE>\-
### NOTE: comment out NQSII macros below
#############################################################################
# ################# shell to use
# #PBS -S /bin/sh
# ################# export all environment variables to job-script
# #PBS -V
# ################# name of the log file
# ### #PBS -o ./
# #PBS -o ./$PBS_JOBNAME.$PBS_JOBID.log
# ################# join standard and error stream (oe, eo) ?
# #PBS -j oe
# ################# do not rerun job if system failure occurs
# #PBS -r n
# ################# send e-mail when [(a)borting|(b)eginning|(e)nding] job
# ### #PBS -m ae
# ### #PBS -M my_userid@my_institute.my_toplevel_domain
# ################# (activate on planck at Cyprus Institute)
# ### #PBS -l nodes=10:ppn=8,walltime=24:00:00
# ################# (activate on louhi at CSC)
# ### #PBS -l walltime=48:00:00
# ### #PBS -l mppwidth=256
# ################# (activate on Cluster at DLR, ppn=12 (pa1) ppn=24 (pa2)
# ### tasks per node!)
# ### #PBS -l nodes=1:ppn=12
# #PBS -l nodes=2:ppn=24
# #PBS -l walltime=04:00:00
# ################ (activate on Cluster at TU Delft, 12 nodes with 20 cores each)
# ### #PBS -l nodes=1:ppn=16:typei
# ### #PBS -l walltime=48:00:00
#############################################################################
###
#############################################################################
### EMBEDDED FLAGS FOR NQSII
### SUBMIT WITH: qsub [-q <queue>] xmessy_mmd
### SYNTAX: \#\P\B\S<SPACE>\-
### NOTE: comment out PBS Pro macros above
#############################################################################
### #
### ################# common (partly user specific!):
### ### #PBS -S /bin/sh # shell to use (DO NOT USE! BUG on SX?)
### #PBS -V # export all environment variables to job-script
### ### #PBS -N test # job name
### ### #PBS -o # name of the log file
### #PBS -j o # join standard and error stream to (o, e) ?
### ### #PBS -m e # send an email at end of job
### ### #PBS -M Patrick.Joeckel@dlr.de # e-mail address
### #PBS -A s20550 # account code, see login message
### ################# resources:
### #PBS -T mpisx # SX MPI
### #PBS -q dq
### #PBS -l cpunum_job=16 # cpus per Node
### #PBS -b 1 # number of nodes, max 4 at the moment
### #PBS -l elapstim_req=12:00:00 # max wallclock time
### #PBS -l cputim_job=192:00:00 # max accumulated cputime per node
### #PBS -l cputim_prc=11:55:00 # max accumulated cputime per process
### #PBS -l memsz_job=500gb # memory per node
### #
#############################################################################
###
#############################################################################
### EMBEDDED FLAGS FOR SLURM
### SUBMIT WITH: sbatch xmessy_mmd
### SYNTAX: \#\S\B\A\T\C\H<SPACE>\-\-
### NOTE: comment out NQSII and PBS Pro macros above
#############################################################################
################# shell to use
### #SBATCH -S /bin/sh
### #SBATCH -S /bin/bash
################# export all environment variables to job-script
#SBATCH --export=ALL
################# name of the log file
#SBATCH --job-name=xmessy_mmd.MMD38008
#SBATCH -o ./xmessy_mmd.%j.out.log
#SBATCH -e ./xmessy_mmd.%j.err.log
#SBATCH --mail-type=END
#SBATCH --mail-user=anna.lanteri@dlr.de
################# do not rerun job if system failure occurs
#SBATCH --no-requeue
# ################# (activate on mistral @ DKRZ)
# ### PART 1a: (activate for phase 1)
# #SBATCH --partition=compute # Specify partition name for job execution
# #SBATCH --ntasks-per-node=24 # Specify max. number of tasks on each node
# #SBATCH --cpus-per-task=2 # use 2 CPUs per task, so do not use HyperThreads
# # ### PART 1b: (activate for phase 2)
# # #SBATCH --partition=compute2 # Specify partition name for job execution
# # #SBATCH --ntasks-per-node=36 # Specify max. number of tasks on each node
# # #SBATCH --cpus-per-task=2 # use 2 CPUs per task, no HyperThreads
# # ### #SBATCH --mem=124000 # only, if you need real big memory
# ### PART 2: modify according to your requirements:
# #SBATCH --nodes=2 # Specify number of nodes
# #SBATCH --time=00:30:00 # Set a limit on the total run time
# # #SBATCH --account=bb0677 # Charge resources on this project account
# ###
################# (activate on levante @ DKRZ)
# ### PART 1: (activate always)
#SBATCH --partition=compute # Specify partition name for job execution
#SBATCH --ntasks-per-node=128
### #SBATCH --cpus-per-task=2 # use 2 CPUs per task, so do not use HyperThreads
#SBATCH --exclusive
# ### PART 2: modify according to your requirements:
#SBATCH --nodes=4
#SBATCH --time=02:00:00
#SBATCH --account=bb1361 # Charge resources on this project account
#SBATCH --constraint=512G
#SBATCH --mem=0
# ###
################# (activate on CARA @ DLR)
### # ### PART 1: (select node type)
### #SBATCH --export=ALL,MSH_DOMAIN=cara.dlr.de
### #SBATCH --partition=naples128 # 128 Gbyte/node memory
### ### #SBATCH --partition=naples256 # 256 Gbyte/node memory
### #SBATCH --ntasks-per-node=32 # Specify max. number of tasks on each node
### #SBATCH --cpus-per-task=2 # use 2 CPUs per task, so do not use HyperThreads
### #
### ### PART 2: modify according to your requirements:
### #SBATCH --nodes=1 # Specify number of nodes
### #SBATCH --time=00:05:00 # Set a limit on the total run time
### #SBATCH --account=2277003 # Charge resources on this project account
### ###
################# (activate on SuperMUC-NG @ LRZ)
### PART 1: do not change
# #SBATCH --get-user-env
# #SBATCH --constraint="scratch&work"
# #SBATCH --ntasks-per-node=48
# ### PART 2: modify according to your requirements:
# #SBATCH --partition=test
# #SBATCH --nodes=2 # Specify number of nodes
# #SBATCH --time=00:30:00
# #SBATCH --account=pr94ri
###
################# (activate on Jureca @ JSC)
### PART 1: do not change
# #SBATCH --ntasks-per-node=24 # Specify max. number of tasks on each node
# ##SBATCH --cpus-per-task=2 # use 2 CPUs per task, do not use HyperThreads
### PART 2: modify according to your requirements:
### development
# #SBATCH --partition=devel # Specify partition name for job execution
# #SBATCH --nodes=8 # Specify number of nodes
# #SBATCH --time=02:00:00 # Set a limit on the total run time
### production
# #SBATCH --partition=batch # Specify partition name for job execution
# #SBATCH --nodes=10 # Specify number of nodes
# #SBATCH --time=06:00:00 # Set a limit on the total run time
### production fat jobs
# #SBATCH --gres=mem512 # Request generic resources
# #SBATCH --partition=mem512 # Specify partition name for job execution
# #SBATCH --nodes=1 # Specify number of nodes
# #SBATCH --time=24:00:00 # Set a limit on the total run time
###
################# (activate on JUWELS Cluster @ JSC)
# #SBATCH --account=esmtst
### PART 1 do not change
### No SMT
# #SBATCH --ntasks-per-node=48 # Specify max. number of tasks on each CPU node
# #SBATCH --ntasks-per-node=40 # GPU nodes on the cluster have only 40 cores available
### For use with SMT
# #SBATCH --ntasks-per-node=96 # Specify max. number of tasks on each CPU node
# #SBATCH --ntasks-per-node=80 # Specify max. number of tasks on each GPU node
### PART 2: modify according to your requirements:
### default nodes have 96 GB of memory for 48 cores (2 GB per core)
### devel uses mem96 nodes only.
### mem192, gpu and develgpu use only mem192 nodes
###
### development
### - devel : 1 (min) - 8 (max) nodes, 24 hours (normal), 6 hours (nocont)
# #SBATCH --partition=devel # Specify partition name for job execution
# #SBATCH --nodes=8 # Specify number of nodes
# #SBATCH --time=02:00:00 # Set a limit on the total run time
### production
### - batch : 1 (min) - 256 (max) nodes, 24 hours (normal), 6 hours (nocont)
# #SBATCH --partition=batch # Specify partition name for job execution
# #SBATCH --nodes=10 # Specify number of nodes
# #SBATCH --time=06:00:00 # Set a limit on the total run time
### production fat jobs
### - mem192: 1 (min) - 64 (max) nodes, 24 hours (normal), 6 hours (nocont)
# #SBATCH --partition=mem192 # Specify partition name for job execution
# #SBATCH --nodes=1 # Specify number of nodes
# #SBATCH --time=24:00:00 # Set a limit on the total run time
### GPU jobs
### - gpus : 1 (min) - 48 (max) nodes, 24 hours (normal), 6 hours (nocont)
# #SBATCH --partition=gpus # Specify partition name for job execution
# #SBATCH --nodes=1 # Specify number of nodes
# #SBATCH --time=24:00:00 # Set a limit on the total run time
# #SBATCH --gres=gpu:4 # select number of GPUs, between 1 and 4
# #SBATCH --cuda-mps # Activate Cuda multi-process service
### DEVEL GPU jobs
### -develgpus : 1 (min) - 2 (max) nodes, 24 hours (normal), 6 hours (nocont)
# #SBATCH --partition=develgpus # Specify partition name for job execution
# #SBATCH --nodes=1 # Specify number of nodes
# #SBATCH --time=24:00:00 # Set a limit on the total run time
# #SBATCH --gres=gpu:4 # select number of GPUs, between 1 and 4
# #SBATCH --cuda-mps # Activate Cuda multi-process service
###
################# (activate on JUWELS Booster @ JSC)
# #SBATCH --account=esmtst
### PART 1 do not change
### No SMT
# #SBATCH --ntasks-per-node=24 # Specify max. number of tasks on each node
### For use with SMT
# #SBATCH --ntasks-per-node=96 # Specify max. number of tasks on each node
### PART 2: modify according to your requirements:
### default nodes have 512 GB of memory for 24 cores on 2 sockets each
###
### development
### - develbooster : 1 (min) - 4 (max) nodes, 2 hours (max)
# #SBATCH --partition=develbooster # Specify partition name for job execution
# #SBATCH --nodes=1 # Specify number of nodes
# #SBATCH --time=00:30:00 # Set a limit on the total run time
# #SBATCH --gres=gpu:4 # select number of GPUs, between 1 and 4
# #SBATCH --cuda-mps # Activate Cuda multi-process service
### production
### - batch : 1 (min) - 384 (max) nodes, 24 hours (normal), 6 hours (nocont)
# #SBATCH --partition=booster # Specify partition name for job execution
# #SBATCH --nodes=10 # Specify number of nodes
# #SBATCH --time=06:00:00 # Set a limit on the total run time
# #SBATCH --gres=gpu:4 # select number of GPUs, between 1 and 4
# #SBATCH --cuda-mps # Activate Cuda multi-process service
###
################# (activate on thunder @ zmaw)
### #SBATCH --partition=mpi-compute
### #SBATCH --tasks-per-node=16
### #SBATCH --nodes=1
### #SBATCH --time=00:30:00
###
################################## (activate on gaia @ RZG)
### #SBATCH -D ./
### #SBATCH -J test
### #SBATCH --partition=p.24h
####### MAX 5 NODES
### #SBATCH --nodes=1
### #SBATCH --tasks-per-node=40
### #SBATCH --cpus-per-task=1
### #SBATCH --mail-type=none
### # Wall clock Limit:
### #SBATCH --time=24:00:00
################################## (activate on cobra @ RZG)
### #SBATCH -D ./
### #SBATCH -J test
### #SBATCH --partition=medium
### #SBATCH --nodes=5
### #SBATCH --tasks-per-node=40
### #SBATCH --cpus-per-task=1
### #SBATCH --mail-type=none
### # Wall clock Limit:
### #SBATCH --time=24:00:00
#################
###
################# (activate on mogon @ uni-mainz)
# #SBATCH --time=05:00:00
# #SBATCH --nodes=1
# # ############### for MOGON II
# #SBATCH --mem 64G
# #SBATCH --partition=parallel
# #SBATCH -A m2_esm
# #SBATCH --tasks-per-node=40
###
################# (activate on Cartesius @ Surfsara)
# #SBATCH --export=ALL,MSH_DOMAIN=cartesius.surfsara.nl
# #SBATCH -t 1-00:00 #Time limit after which job will be killed. Format: HH:MM:SS or D-HH:MM
# #SBATCH --nodes=1   #Number of nodes is 1
# #SBATCH --account=tdcei441
# #SBATCH --hint=nomultithread
# #SBATCH --ntasks-per-node=24
# #SBATCH --cpus-per-task=1
# #SBATCH --constraint=haswell
# #SBATCH --partition=broadwell
# ### #SBATCH --mem=200G
###
################# (activate on buran @ IGCE)
### HW layout: 2 nodes x 2 sockets x 8/16 cores/threads (up to 32 PEs per node)
# #SBATCH --account=messy
# #SBATCH --partition=compute # up to 24h @ compute partition
# #SBATCH --cpus-per-task=1 # 1/2: enables/disables hyperthreading
# #SBATCH --nodes=2            # set explicitly
# #SBATCH --ntasks=64          # set explicitly
###
#############################################################################
###
#############################################################################
### EMBEDDED FLAGS FOR LL (LOAD LEVELER)
### SUBMIT WITH: llsubmit xmessy_mmd
### SYNTAX: \#[<SPACES>]\@
#############################################################################
################# shell to use
# @ shell = /bin/sh
################# export all environment variables to job-script
# @ environment = COPY_ALL
################# standard and error stream
# @ output = ./$(base_executable).$(jobid).$(stepid).out.log
# @ error = ./$(base_executable).$(jobid).$(stepid).err.log
################# send an email (always|error|start|never|complete)
# @ notification = never
# @ restart = no
################# (activate at CMA)
# # initialdir= ...
# # comment = WRF
# # network.MPI = sn_all,not_shared,us
# # job_type = parallel
# # rset = rset_mcm_affinity
# # mcm_affinity_options = mcm_accumulate
# # tasks_per_node = 32
# # node = 4
# # node_usage= not_shared
# # resources = ConsumableMemory(7500mb)
# # task_affinity = core(1)
# # wall_clock_limit = 08:00:00
# # class = normal
# # #class = largemem
################# (activate on p5 at RZG)
# # requirements = (Arch == "R6000") && (OpSys >= "AIX53") && (Feature == "P5")
# # job_type = parallel
# # tasks_per_node = 8
# # node = 1
# # node_usage= not_shared
# # resources = ConsumableCpus(1)
# # resources = ConsumableCpus(1) ConsumableMemory(5200mb)
# # wall_clock_limit = 24:00:00
################# (activate on vip or hydra at RZG)
# # network.MPI = sn_all,not_shared,us
# # job_type = parallel
# # node_usage= not_shared
# # restart = no
# # tasks_per_node = 32
# # node = 1
# # resources = ConsumableCpus(1)
# # # resources = ConsumableCpus(1) ConsumableMemory(1600mb)
# # # resources = ConsumableCpus(1) ConsumableMemory(3600mb)
# # wall_clock_limit = 24:00:00
################# (activate on blizzard at DKRZ)
##### always
# # network.MPI = sn_all,not_shared,us
# # job_type = parallel
# # rset = rset_mcm_affinity
# # mcm_affinity_options = mcm_accumulate
##### select one block below
#
# # tasks_per_node = 16
# # node = 1
# # node_usage= shared
# # resources = ConsumableMemory(1500mb)
# # task_affinity = core(1)
# # wall_clock_limit = 00:15:00
# # class = express
#
# # tasks_per_node = 32
# # node = 4
# # node_usage= not_shared
# # resources = ConsumableMemory(1500mb)
# # task_affinity = core(1)
# # wall_clock_limit = 08:00:00
#
# # tasks_per_node = 64
# # node = 2
# # node_usage= not_shared
# # resources = ConsumableMemory(750mb)
# # task_affinity = cpu(1)
# # wall_clock_limit = 08:00:00
#
##### blizzard only, account no (mm0085, mm0062, bm0273, bd0080, bd0617)
# # account_no = bd0080
#
################# (activate on huygens at SARA)
# # network.MPI = sn_all,not_shared,us
# # job_type = parallel
# # requirements=(Memory > 131072)
# # tasks_per_node = 32
# # node = 2
# # wall_clock_limit = 24:00:00
#
################# (activate on sp at CINECA)
# # job_type = parallel
# # total_tasks = 256
# # blocking = 64
# # wall_clock_limit = 48:00:00
#
# # job_type = parallel
# # total_tasks = 64
# # blocking = 32
# # wall_clock_limit = 05:00:00
#
################# (activate on SuperMUC / SuperMUC-fat at LRZ)
##### always
# # network.MPI = sn_all,not_shared,us
### activate 'parallel' for IBM poe (default!); 'MPICH' only to use Intel MPI:
# # job_type = parallel
# % job_type = MPICH
#
##### select (and modify) one block below
### SuperMUC-fat (for testing, 40 cores, 1 node)
# # class = fattest
# # node = 1
# # tasks_per_node = 40
# # wall_clock_limit = 00:30:00
#
### SuperMUC-fat (for production, 40 cores/node)
# # class = fat
# # node = 2
# # tasks_per_node = 40
# # wall_clock_limit = 48:00:00
#
### SuperMUC (for testing, 16 cores, 1 node)
# # node_topology = island
# % island_count = 1
# # class = test
# # node = 1
# # tasks_per_node = 16
# # wall_clock_limit = 1:00:00
#
### SuperMUC (for production, 16 cores/node)
# # node_topology = island
# % island_count = 1
# # class = micro
# # node = 4
# # tasks_per_node = 16
# # wall_clock_limit = 48:00:00
#
################# MULTI-STEP JOBS
# # step_name = step00
################# queue job (THIS MUST ALWAYS BE THE LAST LL-COMMAND !)
# @ queue
################# INSERT MULTI-STEP JOB DEPENDENCIES HERE
#
################# no more LL options below
#
#############################################################################
###
#############################################################################
### EMBEDDED FLAGS FOR MOAB
### SUBMIT WITH: msub [-q <queue>] xmessy_mmd
### SYNTAX: \#\M\S\U\B<SPACE>\-
### NOTE: ALL other scheduler macros need to be deactivated
### LL: '# @' -> '# #' ; all others: '### '
#############################################################################
### ### send mail: never, on abort, beginning or end of job
### #MSUB -M <mail-address>
### #MSUB -m n|a|b|e
# #MSUB -N xmessy_mmd
# #MSUB -j oe
################# # of nodes : # of cores/node
# #MSUB -l nodes=2:ppn=4
# #MSUB -l walltime=00:30:00
#############################################################################
###
#############################################################################
### EMBEDDED FLAGS FOR NQS
### SUBMIT WITH: qsub [-q <queue>] xmessy_mmd
### SYNTAX: \#\@\$\-
### NOTE: currently deactivated; to activate replace '\#\%\$\-' by '\#\@\$\-'
### NOTE: An embedded option can remain as a comment line
### by putting '#' between '#' and '@$'.
#############################################################################
################# shell to use
#%$-s /bin/sh
################# export all environment variables to job-script
#%$-x
################# join standard and error stream (oe, eo) ?
#%$-eo
################# time limit
#%$-lT 2:00:00
################# memory limit
#%$-lM 4000MB
################# number of CPUs
#%$-c 6
################# send an email at end of job
### #%$-me
### #%$-mu $USER@mpch-mainz.mpg.de
################# no more NQS options below
#%$X-
#############################################################################
###
#############################################################################
### EMBEDDED FLAGS FOR LSF AT GWDG / ZDV Uni-Mainz / HORNET @ U-Conn
### SUBMIT WITH: bsub < xmessy_mmd
### SYNTAX: #BSUB
#############################################################################
### ################# queue name
### #BSUB -q gwdg-x64par ### GWDG
### #BSUB -q economy ### Yellowstone at UCAR
### #BSUB -q small ### Yellowstone at UCAR
### #BSUB -q atmosphere ### U-Conn HORNET
### ################# wall clock time
### #BSUB -W 5:00
### ################# number of CPUs
### #BSUB -n 256
### #BSUB -n 64
### ################# MPI protocol (do NOT change)
### #BSUB -a mvapich_gc ### GWDG
### ################# special resources
### #BSUB -J xmessy_mmd ### GWDG & ZDV & U-Conn
### #BSUB -app Reserve1900M
### #BSUB -R 'span[ptile=64]'
### #BSUB -M 4096000
### #BSUB -R 'span[ptile=4]' ### yellowstone
### #BSUB -P P28100036 ### yellowstone
### #BSUB -P UCUB0010
### ################# log-file
### #BSUB -o %J.%I.out.log
### #BSUB -e %J.%I.err.log
################# mail at start (-B) ; job report (-N)
### #BSUB -N
### #BSUB -B
#################
### NOTES: 1) always set LSF_SCRIPT to the exact name of this run-script
### 2) this run-script must reside in $BASEDIR/messy/util
### 3) BASEDIR (below) must be set correctly
LSF_SCRIPT=xmessy_mmd
#############################################################################
#############################################################################
### USER DEFINED GLOBAL SETTINGS
#############################################################################
### NAME OF EXPERIMENT (max 14 characters)
EXP_NAME=ELKEchamOnly
### WORKING DIRECTORY
### (default: $BASEDIR/workdir)
### NOTE: xconfig will not work correctly if $WORKDIR is not $BASEDIR/workdir
### (e.g. /scratch/users/$USER/${EXP_NAME} )
# WORKDIR=
# NOTE the experiment folder might not exist yet
WORKDIR=/scratch/b/b309253/${EXP_NAME}
### START INTEGRATION AT
### NOTE: Initialisation files ${ECHAM5_HRES}${ECHAM5_VRES}_YYYYMMDD_spec.nc
### and ${ECHAM5_HRES}_YYYYMMDD_surf.nc
### must be available in ${INPUTDIR_ECHAM5_SPEC}
START_YEAR=2019
START_MONTH=01
START_DAY=01
START_HOUR=00
START_MINUTE=00
### STOP INTEGRATION AT (ONLY IF ACTIVATED IN $NML_ECHAM !!!)
STOP_YEAR=2019
STOP_MONTH=01
STOP_DAY=02
STOP_HOUR=00
STOP_MINUTE=00
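### NOTE: with the START/STOP settings above the integration covers one day,
###       2019-01-01 00:00 to 2019-01-02 00:00 (the stop date takes effect
###       only if activated in $NML_ECHAM, see above)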
### INTERVAL FOR WRITING (REGULAR) RESTART FILES
### Note: This only has an effect if it is not explicitly overwritten
### in your timer.nml; i.e., make sure that in timer.nml
### IO_RERUN_EV = ${RESTART_INTERVAL},'${RESTART_UNIT}','last',0,
### is active!
### RESTART_UNIT: steps, hours, days, months, years
RESTART_INTERVAL=1
RESTART_UNIT=months
NO_CYCLES=9999
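### With the settings above the corresponding timer.nml entry (sketched by
### substituting into the template shown in the note) reads:
###   IO_RERUN_EV = 1,'months','last',0,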
### SET VARIABLES FOR OASIS3-MCT SETUPS
### Note: these only have an effect if they are used in the namelist files
### TIME STEP LENGTHS OF BASEMODELS [s]
#COSMO_DT[1]=120
#CLM_DT[2]=600
### INVERSE OASIS COUPLING FREQUENCY [s]
#OASIS_CPL_DT=1200
### settings for namcouple
### Note: If CPL_MODE is not equal to INSTANT, the LAGs have to be set to
###       the time step of each instance and OASIS restart files have to
###       be provided in INPUTDIR_OASIS3MCT.
#OASIS_CPL_MODE=INSTANT # AVERAGE, INSTANT
#OASIS_LAG_COSMO=+0 # ${COSMO_DT}, +0
#OASIS_LAG_CLM=+0 # ${CLM_DT}, +0
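### Illustration (hypothetical values, not part of this setup): for a
### non-INSTANT coupling mode the lags follow the instance time steps, e.g.
###   #OASIS_CPL_MODE=AVERAGE
###   #OASIS_LAG_COSMO=${COSMO_DT[1]}
###   #OASIS_LAG_CLM=${CLM_DT[2]}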
# Set number of COSMO output dirs for COSMO-CLM/MESSy simulations
# COSMO_OUTDIR_NUM=7
### CHOOSE SET OF NAMELIST FILES (one subdirectory for each instance)
### (see messy/nml subdirectories)
NML_SETUP=MECOn/ELK
### OUTPUT FILE-TYPE (2: netCDF, 3: parallel-netCDF)
### NOTES:
### - ONLY, IF PARALLEL-NETCDF IS AVAILABLE
### - THIS WILL REPLACE $OFT IN channel.nml, IF USED THERE
OFT=2
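### Example: with OFT=2 a literal $OFT placeholder in channel.nml is replaced
### by 2, i.e. classic netCDF output (see note above)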
### AVAILABLE WALL-CLOCK HOURS IN QUEUE (for QTIMER)
QWCH=8
### =========================================================================
### SELECT MODEL INSTANCES:
### - ECHAM5, mpiom, CESM1, ICON (always first, if used)
### - COSMO, CLM
### - other = MBM
### =========================================================================
MINSTANCE[1]=ECHAM5
#MINSTANCE[1]=ICON
MINSTANCE[2]=COSMO
#MINSTANCE[1]=blank
#MINSTANCE[1]=caaba
#MINSTANCE[1]=CESM1
#MINSTANCE[1]=import_grid
#MINSTANCE[1]=ncregrid
#MINSTANCE[1]=mpiom
MINSTANCE[3]=COSMO
#MINSTANCE[4]=COSMO
#MINSTANCE[2]=CLM
### =========================================================================
### SET MMD PARENT IDs (-1: PATRIARCH, -99: not coupled via MMD)
### =========================================================================
MMDPARENTID[1]=-1
MMDPARENTID[2]=1
MMDPARENTID[3]=2
#MMDPARENTID[4]=3
#MMDPARENTID[2]=-99
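### Resulting MMD hierarchy for the active settings above (sketch):
###   instance 1: ECHAM5 (PATRIARCH, MMDPARENTID = -1)
###   instance 2: COSMO  (parent: instance 1)
###   instance 3: COSMO  (parent: instance 2)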
### =========================================================================
### PARALLEL DECOMPOSITION AND VECTOR BLOCKING
### =========================================================================
NPY[1]=32 # => NPROCA for ECHAM5, MPIOM, (ICON: only dummy)
NPX[1]=16 # => NPROCB for ECHAM5, MPIOM, (ICON: only dummy)
#NPY[1]=2 # => NPROCA for ECHAM5, MPIOM
#NPX[1]=1 # => NPROCB for ECHAM5, MPIOM
NVL[1]=16 # => NPROMA for ECHAM5
#NPY[2]=16
#NPX[2]=16
#NVL[2]=1 # => meaningless for COSMO
#NPY[3]=16
#NPX[3]=32
#NVL[3]=1
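### Accounting sketch: each instance uses NPY[i]*NPX[i] MPI tasks
### (e.g. 32*16 = 512 for instance 1 above); the sum over all coupled
### instances has to fit the tasks requested from the scheduler
### (nodes x tasks per node)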
### =========================================================================
### BASEMODEL SETTINGS (e.g. RESOLUTION)
### =========================================================================
### .........................................................................
### ECHAM5
### .........................................................................
### HORIZONTAL AND VERTICAL RESOLUTION FOR ECHAM5
### (L*MA SWITCHES ECHAM5_LMIDATM AUTOMATICALLY !!!)
ECHAM5_HRES=T106 # T106 T85 T63 T42 T31 T21 T10
ECHAM5_VRES=L90MA # L19 L31ECMWF L41DLR L39MA L90MA
### HORIZONTAL AND VERTICAL RESOLUTION FOR MPIOM (IF SUBMODEL IS USED)
MPIOM_HRES=GR60   # GR60 GR30 GR15 TP04 TP40
MPIOM_VRES=L20 # L3 L20 L40
### ECHAM5 NUDGING
### DO NOT FORGET TO SET THE NUDGING COEFFICIENTS IN $NML_ECHAM !!!
ECHAM5_NUDGING=.TRUE.
### NUDGING DATA FILE FORMAT (0: IEEE, 2: netCDF)
ECHAM5_NUDGING_DATA_FORMAT=2
### ECHAM5 AMIP-TYPE SST/SEAICE FORCING ?
#ECHAM5_LAMIP=.TRUE.
### ECHAM5 MIXED LAYER OCEAN (do not use concurrently with MLOCEAN submodel!)
#ECHAM5_MLO=.TRUE.
### .........................................................................
### ICON
### .........................................................................
### .........................................................................
### CESM
### .........................................................................
### HORIZONTAL AND VERTICAL RESOLUTION FOR CESM1
CESM1_HRES=ne16 # 1.9x2.5 4x5 ne16 ne30
CESM1_VRES=L26 # L26 L51
#OCN_HRES=gx1v6 # 1.9x2.5 => gx1v6; 4x5, ne16 => gx3v7
CESM1_ATM_NTRAC=3
#
NML_CESM_ATM=cesm_atm_${CESM1_HRES}${CESM1_VRES}.nml
###