Re: [Condor-users] quick DAG question...
- Date: Mon, 14 Mar 2005 17:21:11 -0600
- From: Alan De Smet <adesmet@xxxxxxxxxxx>
- Subject: Re: [Condor-users] quick DAG question...
Brian Gyss <brian.gyss@xxxxxxxxx> wrote:
> Is there any way to set up a DAG so that the child job runs on
> the same machine as the parent?
There is no great solution right now; it's a complex issue that
we're still thinking about. Here are the options we have today:
Bill mentioned one: change the machine's ClassAd to note that
it's prepared for the child.
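As a rough sketch of that ClassAd approach (the attribute name and
exact macro are illustrative, not something from Bill's message):
the execute machine could advertise a custom attribute in its
condor_config, and the child job could require it:

```
# condor_config on the execute machine (hypothetical attribute):
PreparedForChild = True
STARTD_EXPRS = $(STARTD_EXPRS), PreparedForChild

# The child job's submit file would then add something like:
#   requirements = (PreparedForChild == True)
```

You'd still need some mechanism to flip the attribute when the
parent finishes, which is part of why this isn't a great solution.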
Another is to have the jobs explicitly specify which machine they
want (a bit crude).
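For example, a submit file can pin a job to a particular machine
through its requirements expression (the machine name here is
hypothetical):

```
# Submit-file fragment pinning the child job to one machine
universe     = vanilla
executable   = child_job
requirements = (Machine == "node01.example.com")
queue
```

This is crude because you have to know in advance (or discover at
runtime) which machine the parent landed on.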
One of the problems is that once the parent finishes, by default
there is nothing stopping Condor from handing the machine off to
some unrelated job. Typically, you want the machine to remain
claimed for the child process.
One way to accomplish this is to submit condor_dagman itself as a
job, so that dagman runs on the execute machine. The individual
jobs would then run as universe = scheduler (coming soon:
universe = local) on that local (execute) machine. One of the
downsides is that your execute nodes need to be running schedds.
If you're interested in this route, you'll want to look at the
"-no_submit" option for condor_submit_dag. This generates the
submit file, but doesn't submit the job. You can modify the file
to change the universe from scheduler to vanilla. You'll need to
specify all of your input files, submit files, and the DAG itself
in transfer_input_files (or have them on a shared file system).
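The workflow might look roughly like this (file names are
illustrative; check the generated .condor.sub file for the exact
lines to edit):

```
# Generate the dagman submit file without submitting it:
condor_submit_dag -no_submit my.dag

# Edit the generated my.dag.condor.sub:
#   change:  universe = scheduler
#   to:      universe = vanilla
#   and add: transfer_input_files = my.dag, nodeA.sub, nodeB.sub
#            (plus the input files the node jobs need)

# Then submit it as an ordinary job:
condor_submit my.dag.condor.sub
```

With that in place, dagman and all of its node jobs stay on
whatever execute machine the vanilla job is matched to.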
Alan De Smet
Condor Project Research