On the HPCF, the qalter, qresub and qsub commands are wrapped in a script that provides a reasonable set of defaults for the HPCF systems and locks out some facilities that are known to cause trouble for the system or other users. Please do NOT attempt to bypass these checks, or use the qmon GUI to submit or manipulate jobs.

The qsub and qalter commands have a local option -Q, which is the recommended way to select the logical queue for the job. As on the other HPCF systems, its value is made up of a letter and a number, the former specifying the time the job may run for and the latter the number of CPUs it may use. Its possible values are described below.

Generally, the s96 logical queue is the best one for production work, but users with heavily memory- and communication-bound jobs may get better results out of the s64 logical queue. Similar remarks apply to all of the 6xN versus 4xN logical queues. The x84 logical queues are mainly for people doing performance analysis, as they have exactly the same number of CPUs on each board, whereas the x96 ones don't.

For example, `qsub -Q u4 quick_job' or `qsub -Q s96 solve_universe'.
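Assuming the wrapper passes ordinary Gridengine `#$' script directives through unchanged (an assumption, not something the wrapper guarantees), a batch script might look like this, with the script and program names purely illustrative:

    #!/bin/sh
    #$ -cwd                  # run in the directory the job was submitted from
    #$ -N solve_universe     # job name, as shown by qstat
    ./solve_universe         # the program to run

which would then be submitted with, say, `qsub -Q s96 solve_universe.sh'.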

The actual number of CPUs allocated is 6, 12, 24, 48 or 96, with logical queues like t8 actually reserving 12 CPUs; if your program is CPU bound, you should use these numbers (e.g. t12 rather than t8). The logical queues with 4, 8, 16, 32 and 64 CPUs use only the `main' CPUs and may provide 50% more memory bandwidth and memory per CPU; if your program is memory bound, you should use those numbers (e.g. t8 rather than t12). If in doubt, try both and use the one with the smaller wall-clock time, or ask for advice.
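To make that comparison concrete (the job script name is illustrative), submit the same script to both forms of the queue and keep whichever gives the smaller wall-clock time:

    qsub -Q t12 my_job    # all 12 allocated CPUs; usually better if CPU bound
    qsub -Q t8  my_job    # the `main' CPUs only; usually better if memory bound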

The logical queues are simple parallel environments with an associated environment variable. If you use the -pe option, you must set the number of processors to the number in the environment name, which is the number of CPUs actually allocated. You can then set the number of CPUs you intend to use with the HPCF_SLOTS environment variable. For example, `-pe s96 96' is equivalent to `-Q s96', and `-pe t48 48 -v HPCF_SLOTS=32' is equivalent to `-Q t32'. You are advised to use the -Q form, for simplicity.
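As a sketch of how HPCF_SLOTS might be picked up inside a job (the script name, program name and use of `mpirun -np' are assumptions for illustration, not HPCF-specific guidance):

    #!/bin/sh
    # Start only as many processes as HPCF_SLOTS requests,
    # falling back to the full allocation if it is not set.
    NPROCS=${HPCF_SLOTS:-48}
    mpirun -np $NPROCS ./my_solver

submitted with `qsub -pe t48 48 -v HPCF_SLOTS=32 run_32.sh', which is equivalent to `qsub -Q t32 run_32.sh'.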

There is a pair of local options, -priority and -nopriority, that may be used to select or deselect the priority (`charged') mechanism. The default will usually be priority, if you have access to it; unfortunately, it is not possible to set the default yourself. While you can set the underlying parameter yourself (it is the Gridengine project, set by the -P argument), you are advised not to, as we may need to fiddle with the project names and other aspects.
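For example (the job names are illustrative):

    qsub -Q s96 -priority   big_production_run    # select the charged mechanism explicitly
    qsub -Q t8  -nopriority test_run              # deselect it for this job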

The -e, -i and -o options must not specify remote hostnames, because the HPCF uses a common user directory model and jobs will be run on whatever machines are available. The -w option may be set only to e or v, mainly because no other settings make sense on the HPCF.
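For example, plain pathnames on the shared filesystems are fine (the filenames and job name are illustrative):

    qsub -Q t8 -o run.log -e run.err -w e my_job    # local paths only; -w e checks the job can run before accepting it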

The -ac, -A, -c, -dc, -display, -hard, -inherit, -l, -m, -M, -masterq, -noshell, -nostdin, -ot, -P, -q, -sc, -soft and -u options must not be used. Note that checkpointing would be a useful feature if it worked and did not cause major trouble, but that has not been the case on any system at any time in the past 40 years.

If the command option -hpcf_dryrun is specified, the script will print out the expanded command that it would have called and the environment that it would have used, and do nothing. This is unlikely to be of much use to users.
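For example, `qsub -hpcf_dryrun -Q s96 solve_universe' would show the expanded qsub command and environment without submitting anything.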

The environment variables SGE_ROOT, SGE_CELL, COMMD_PORT and COMMD_HOST must not be set or changed by users. The first should be set up correctly on login, and the others are not supported.