Re: [HTCondor-users] Scratch Directory problem
- Date: Mon, 27 May 2013 16:20:16 -0500
- From: Michael Fienen <mike@xxxxxxxxxxx>
- Subject: Re: [HTCondor-users] Scratch Directory problem
Many thanks for the help!
Indeed, adding "should_transfer_files=yes" did the trick! For some reason, I had removed this from a config file that I pass around from project to project.
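In case it helps anyone searching the archives later, here is a minimal sketch of a vanilla-universe submit file with the fix in place (the executable name and output/log file names are just placeholders, not from my actual setup):

```
universe                = vanilla
executable              = my_job.sh
should_transfer_files   = yes
when_to_transfer_output = on_exit
output                  = job.out
error                   = job.err
log                     = job.log
queue
```

With should_transfer_files = yes, the job runs in the execute machine's scratch directory (e.g. /var/lib/condor/execute/dir_<PID>) rather than relying on a shared filesystem, regardless of how FILESYSTEM_DOMAIN is configured.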
On May 27, 2013, at 11:57 AM, Brian Candler <b.candler@xxxxxxxxx> wrote:
> On 27/05/2013 15:01, Michael Fienen wrote:
>> I'm running into a problem regarding temporary run directories. I am submitting jobs in the vanilla universe to a cluster of Linux machines. On one of them, rather than running in, for example, /var/lib/condor/execute/dir_<PID>/… (where <PID> is a unique number tied to the process ID), in some cases the jobs end up running in roughly the same location they were submitted from (like /home/username/launch_dir).
> If you have set FILESYSTEM_DOMAIN to the same value on every machine, then this will happen.
> You can fix this by putting
> FILESYSTEM_DOMAIN = $(FULL_HOSTNAME)
> in the config. Or, in every submit file, you can set "should_transfer_files = yes"
>> Relatedly, I hoped that I could just exclude a machine in requirements with something like (Target.Machine != "xxx.xxx.xxx.xxx"), but that didn't work either.
> I think you need to use the FQDN for that:
> Requirements = (Machine != "host.your.domain")
> But normally a better approach would be to classify your machines:
> HasWibble = True
> STARTD_EXPRS = HasWibble, $(STARTD_EXPRS)
> and then your job can say
> Requirements = HasWibble