25.8 Configuration for PEcAn web interface

The config.php file has a few variables that control where the web interface can run jobs and how those jobs are run. It is located in the /web directory; if you have not touched it yet, it will be named config.example.php. Rename it to `config.php` and edit it following the directions below.

These variables are $hostlist, $qsublist, $qsuboptions, and $SSHtunnel. In the near future, $hostlist, $qsublist, and $qsuboptions will be combined into a single list.

$SSHtunnel : points to the script that creates an SSH tunnel. The script is located in the web folder, and the default value of dirname(__FILE__) . DIRECTORY_SEPARATOR . "sshtunnel.sh" will most likely work.
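For reference, the corresponding assignment in config.php (a minimal sketch using the default value quoted above) would be:

$SSHtunnel = dirname(__FILE__) . DIRECTORY_SEPARATOR . "sshtunnel.sh";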

$hostlist : is an array that by default contains a single value, only allowing jobs to run on the local server. Adding any other servers to this list will show them in the pull-down menu when selecting machines, and will trigger a web page asking for a username and password for the remote execution (make sure to use an HTTPS setup when asking for a password, to prevent it from being sent in the clear).
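A sketch of what this might look like with one remote server added (geo.bu.edu is borrowed from the $qsuboptions example below, and $fqdn is assumed to hold the local server's name, as in the $qsublist description):

$hostlist = array($fqdn, "geo.bu.edu");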

$qsublist : is an array of hosts that require qsub to be used when running the models. This list can include $fqdn to indicate that jobs on the local machine should use qsub to run the models.
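For example (a sketch using the same hypothetical remote host; include $fqdn only if jobs on the local machine should also be submitted through qsub):

$qsublist = array($fqdn, "geo.bu.edu");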

$qsuboptions : is an array that lists options for each machine. Currently it supports the following options (see also [Run Setup] and look at the tags):

array("geo.bu.edu" =>
    array("qsub"   => "qsub -V -N @NAME@ -o @STDOUT@ -e @STDERR@ -S /bin/bash",
          "jobid"  => "Your job ([0-9]+) .*",
          "qstat"  => "qstat -j @JOBID@ || echo DONE",
          "job.sh" => "module load udunits R/R-3.0.0_gnu-4.4.6",
          "models" => array("ED2"    => "module load hdf5"))

In this list, qsub is the actual command line used to submit jobs with qsub; jobid is a regular expression used to extract the job ID from the text returned by qsub; qstat is the command used to check whether the job has finished. job.sh and the values in models are additional lines added to the job.sh file that is generated to run the model. This can be used to make sure the required modules are loaded on the HPC cluster before the actual model is run.
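As an illustration only (a hypothetical sketch of the generated file, not verbatim PEcAn output), a job.sh produced for an ED2 run on geo.bu.edu with the options above would begin with something like:

#!/bin/bash
# added from the machine-wide "job.sh" option
module load udunits R/R-3.0.0_gnu-4.4.6
# added from the "models" option for ED2
module load hdf5
# ... followed by the commands that actually run the model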