Heavy ion Analysis Libraries
Hal-jobs

application for job management

Parameters
  argc - not used
  argv - array of arguments

basic usage

This is an application to submit jobs from XML files. In principle it should be used in the following way:

  1. use hal-jobs --prepare <batch_system> where batch_system specifies the batch system used on the cluster; currently the following batch systems are supported: torque (pbs_single or pbs_array) and slurm (sbatch_single or sbatch_array). This script also creates directories for errors, logs etc.
  2. edit your xml configuration file - usually by modifying the "job commands" section
  3a. create your job scripts with hal-jobs --submit <xml_configuration_file> --stage=prepare; this command creates the job files that will be sent to the batch system
  3b. send your job scripts with hal-jobs --submit <xml_configuration_file> --stage=deploy
  4. alternatively, instead of calling 3a and 3b, you can prepare & submit your jobs with hal-jobs <xml_configuration_file>, as shown in the sketch below
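
A minimal sketch of this workflow, assuming slurm with job arrays (sbatch_array) and a configuration file named jobs.xml - both names are examples only:

hal-jobs --prepare sbatch_array              # create a template XML file plus directories for errors, logs etc.
# edit the generated XML configuration, mainly the "job commands" section, then:
hal-jobs --submit jobs.xml --stage=prepare   # 3a: create the job scripts
hal-jobs --submit jobs.xml --stage=deploy    # 3b: send the job scripts to the batch system
# or prepare & submit in one step:
hal-jobs jobs.xml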

structure of input file

settings - main settings of jobs

  1. submit - the command used to submit the job file
  2. submit_start - the id of the first job
  3. submit_end - the id of the last job
  4. shell - the shell; the first line of the script sent to the computing cluster
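
For illustration, a sketch of this section; the exact tag layout and the values (sbatch, /bin/bash) are assumptions based on the option names above, not a verbatim template:

<settings>
  <submit>sbatch</submit>          <!-- command used to submit the job file -->
  <submit_start>1</submit_start>   <!-- id of the first job -->
  <submit_end>3</submit_end>       <!-- id of the last job -->
  <shell>#!/bin/bash</shell>       <!-- first line of the generated script -->
</settings>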

job_options

Options of the jobs. In principle they are added to the scripts just like commands. Here the user can also specify the array attribute; if it is set to "yes", only one job file is prepared and sent to the cluster - this is the so-called job array, which replaces many separately sent jobs. The size of the array/number of jobs is defined by submit_start and submit_end.

job_commands

The lines that will be written to the job file; in principle these are the commands executed by the job script.

Note
Some of the job_commands should not be changed, e.g. export JOB_ID_HAL=`expr $SLURM_ARRAY_TASK_ID + $ONE` in sbatch_array is the only way to get a unique id for each job in the array.
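
A hedged sketch of these two sections for sbatch_array; the #SBATCH option and the echo line are placeholders, while the JOB_ID_HAL line is the one quoted in the note above:

<job_options array="yes">
  #SBATCH --time=01:00:00                               <!-- placeholder batch-system option -->
</job_options>
<job_commands>
  export JOB_ID_HAL=`expr $SLURM_ARRAY_TASK_ID + $ONE`  <!-- do not change: unique id per array job -->
  echo "processing job ${JOB_ID_HAL}"                   <!-- placeholder user command -->
</job_commands>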

export option

hal-jobs has the option --export=<data_file> --id=<id> --par=<par_id>, which returns parameter number par_id for job number id from data_file. The data file might look like this:

=======_HAL_JOBS_INPUT_FILE_=======
NVAR 2 NJOBS 3
job_1 data_1
job_2 data_2
job_3 data_3

In this case we have two variables defined for 3 jobs; hal-jobs --export file --id=1 --par=2 returns data_2.
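
For example, assuming the data file above was saved as data.txt (the file name is an assumption):

hal-jobs --export data.txt --id=1 --par=2   # prints data_2, as described above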

special flags

There are some special flags; if they are present in the XML file, they will be replaced before the creation of the job file. The following special flags are available:

  1. ${HAL_PWD} path to current directory
  2. ${HAL_START_ARRAY} value in submit_start
  3. ${HAL_END_ARRAY} value in submit_end
  4. ${HAL_JOB_FILE} relative path to the job file(s), usually jobs/job_array (for arrays) or jobs/job_i for the i-th file
  5. ${HAL_JOB_ID} the id of the given file (only for sending many small jobs - doesn't work for an array)
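
For illustration, a hedged job_commands fragment using these flags; the analysis binary my_analysis is a hypothetical placeholder:

<job_commands>
  cd ${HAL_PWD}                       <!-- replaced by the current directory -->
  ./my_analysis --job=${HAL_JOB_ID}   <!-- replaced by the id of the given job -->
</job_commands>
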
See also
HalJobs::CreateDummyTxtFile
HalJobs::CreateDummyXMLFile