Re: [HTCondor-users] specify scratch directory on computation machine



On Mon, Jun 02, 2014 at 07:58:59PM -0700, Jiande Wang - NOAA Affiliate wrote:
> Hi,
>   First let me provide some basic information on the Condor system I am using.
> Our Condor system is composed of a head node and 10 identical computation
> nodes. The head node is used only for submitting jobs. All of these machines
> have access to a shared file system, and my Condor and DAGMan jobs are running
> fine with the shared file system.

I am assuming in the following that you are not using Condor's file transfer
mechanisms.
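
(For reference: with Condor's file transfer enabled, a submit file along
these lines -- a sketch, not taken from your setup -- would have Condor
itself stage outputs back from the execute node's scratch directory:

    # hypothetical submit-description fragment using file transfer
    should_transfer_files   = YES
    when_to_transfer_output = ON_EXIT

With when_to_transfer_output = ON_EXIT and no transfer_output_files list,
files created in the job's scratch directory are sent back to the submit
directory when the job exits. Since you have a working shared file
system, the rest of this reply sticks with that setup.)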


>   Now we want to take advantage of the fast disks (SSDs) on the computation
> nodes, but these disks are not on the shared file system. I read example 6 on
> page 30 (Condor manual version 8.1.6); it seems close to what I want. My
> questions are:
> 
> (1) How do I specify the directory path in the Condor script? I want to use
> this directory on the computation node for I/O-intensive computation. Example
> 6 shows how to specify an output file in the arguments, but not a directory.
> My job will generate several hundred output files, so I want to know whether
> I can specify a directory instead of individual file names.

There is an environment variable, "_CONDOR_SCRATCH_DIR", that is defined for
the job and should be set to a local directory.  Try a test job to see if this
is what you expect.  (If not, contact your Condor sysadmin.)
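
A minimal sketch of how a job can use it (the script name, executable
path, and input path below are made up for illustration):

    #!/bin/sh
    # run_job.sh -- hypothetical wrapper around the real executable.
    # Condor defines _CONDOR_SCRATCH_DIR in the job's environment;
    # do all I/O-intensive work there, on the node's local (SSD) disk.
    cd "$_CONDOR_SCRATCH_DIR" || exit 1

    # The executable and inputs live on the shared file system; the
    # several hundred output files land in the current (scratch) dir.
    /shared/apps/mymodel/bin/mymodel /shared/input/config.nml

The submit file then runs the wrapper instead of the executable
directly, e.g. "executable = /shared/apps/mymodel/run_job.sh".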


> (2) Will the files in this directory be deleted automatically, or do I have
> to clean them up manually?

Anything you create in the scratch directory will be cleaned up automatically
when the job completes.


>  The executable and all input files are on the shared file system, and I want
> all final output to be put back on the shared file system when the job
> finishes. Note that all 10 computation nodes are identical.

Make sure your job places the final results back on the shared filesystem. The
scratch directory will be deleted after the job completes.
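
For example, the tail of the hypothetical run_job.sh above could copy
everything back before exiting (again, the paths are made up; passing a
per-run ID via "arguments = run_$(Cluster).$(Process)" in the submit
file keeps the 10 nodes from clobbering each other's results):

    # Last step of run_job.sh: copy results from the scratch directory
    # (the current directory) back to the shared file system.
    RESULTS=/shared/results/$1      # run ID passed as first argument
    mkdir -p "$RESULTS"
    cp -pr ./* "$RESULTS"/ || exit 1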


Cheers,
-zach