
[Condor-users] Customizing the environment the job runs in



Hi,

I am investigating what I can and cannot do to customize the environment that my Condor jobs execute in.

I have jobs coming in via a gateway. The gateway takes care of authentication, generates a Condor command file, and submits it to Condor. The job goes to the compute node, and before it runs, some environment components need to be set up (see the sketch after this list):

1. Chroot the environment.
2. Mount key directories via a filesystem capable of remapping UID/GID (e.g. SMB/CIFS).
3. Create any necessary local links to files on the gateway node (the gateway software allows remote reading of some state files).
4. Allow the job to execute.
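
Roughly, the wrapper I am writing looks something like the sketch below (Python; all paths, share names and mount options are placeholders, and the exact ordering may end up different; here the mount and links are set up before the chroot):

#!/usr/bin/env python3
"""Sketch of the job wrapper: prepare the environment, then exec the real job.
Invoked as: wrapper.py <real_executable> [args...]"""
import os
import subprocess
import sys

JAIL = "/scratch/jail"           # chroot target (placeholder)
SHARE = "//gateway/jobdata"      # CIFS share that remaps UID/GID (placeholder)
MOUNT_POINT = os.path.join(JAIL, "data")

def main():
    real_job = sys.argv[1:]      # the executable and arguments Condor wanted to run
    if not real_job:
        sys.exit("usage: wrapper.py <executable> [args...]")

    # Mount key directories via a UID/GID-remapping filesystem (CIFS here).
    os.makedirs(MOUNT_POINT, exist_ok=True)
    subprocess.check_call(
        ["mount", "-t", "cifs", SHARE, MOUNT_POINT,
         "-o", "uid=%d,gid=%d" % (os.getuid(), os.getgid())])

    # Create local links to the state files served by the gateway.
    state_src = os.path.join(MOUNT_POINT, "state")
    state_dst = os.path.join(JAIL, "state")
    if not os.path.lexists(state_dst):
        os.symlink(state_src, state_dst)

    # Chroot into the prepared tree (needs root/CAP_SYS_CHROOT) and run the job.
    os.chroot(JAIL)
    os.chdir("/")
    os.execvp(real_job[0], real_job)

if __name__ == "__main__":
    main()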


The jobs run in the vanilla universe, but the UID/GID remapping is done on the fly. Most of the work is done by a wrapper I am writing; however, for some things the wrapper might be too late. The problem I am running into is that part of the startup of a Condor job involves opening two files, one for STDIN and the other for STDERR. These two files may not be accessible (readable for STDIN, writable for STDERR) at the time Condor checks them, but they will be at some point before the job actually starts. As a result, jobs may fail to start because these files are not yet accessible when the check happens, before my wrapper has run.

One idea I had was to turn off the "checking" of the stdin/stderr files, that is, to start the job even if they are not accessible yet, since they will be later. Is this possible? The other option might be to "fake" the output files in the Condor command script and then reconnect things with the wrapper: use relative rather than absolute paths for the logs, then link them back to the logs that will actually be updated (and are available externally through the gateway). Something like the sketch below is what I have in mind.
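
Concretely, for that second option I am imagining something like this (again a rough sketch; the file names, the scratch-directory assumption and the direction of the linking are just for illustration):

#!/usr/bin/env python3
"""Sketch of the relative-path workaround: the Condor command file names the
logs with relative paths in the job's scratch directory, and the wrapper links
those names to the real logs that the gateway exposes."""
import os

# Relative names used in the command file (e.g. "output = job.out", "error = job.err")
# mapped to the externally visible logs on the gateway (placeholders):
LOG_MAP = {
    "job.out": "/gateway/logs/job.out",
    "job.err": "/gateway/logs/job.err",
}

def relink_logs(scratch_dir):
    for rel_name, real_log in LOG_MAP.items():
        placeholder = os.path.join(scratch_dir, rel_name)
        # Drop any placeholder file and point the relative name at the real log.
        if os.path.lexists(placeholder):
            os.remove(placeholder)
        os.symlink(real_log, placeholder)

if __name__ == "__main__":
    relink_logs(os.getcwd())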

Can I turn off the output file checking?

Thanks,

Terrence