Subject: Re: [Condor-users] maxidle for a dag with one node?
I do not believe throttling the individual processes of a cluster
is supported in DAGs; at least it is not on a Windows platform.
Rob de Graaf <r.degraaf@xxxxxxxxxxxx>
Condor-Users Mail List <condor-users@xxxxxxxxxxx>
09/22/2011 04:20 AM
[Condor-users] maxidle for a dag with one node?
I have a job consisting of one cluster with several hundred thousand
processes. The individual processes use $(Process) as an argument. I
can't submit them all at once, so I made a DAG with one JOB node and
tried to use condor_submit_dag's -maxidle throttling capability.
According to the manual, each individual process counts as a job, so
this matches what I want to do, but it doesn't seem to work: the entire
cluster is submitted regardless of what I set -maxidle to. I've also
tried -maxjobs just in case, but that does what it says and throttles
whole clusters, not the processes within them.
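For reference, the setup described above would look roughly like this (a sketch; the filenames, the process count, and the -maxidle value are illustrative, not taken from the original post):

```
# job.sub -- one cluster, many processes, indexed by $(Process)
executable = myprog
arguments  = $(Process)
queue 300000

# single.dag -- a DAG with a single JOB node wrapping the whole cluster
JOB A job.sub

# submit the DAG with idle-job throttling
condor_submit_dag -maxidle 100 single.dag
```

With this layout DAGMan sees only one node, which is why -maxjobs throttles the whole cluster at once.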
Is there a way to throttle processes in a single-node DAG? I realize
that I could split the cluster into many single-process clusters and use
-maxjobs, but then I wouldn't be able to use $(Process) anymore. Ideally
I'd like to avoid having to generate that many submit files.
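One possible workaround, sketched below under the assumption that DAGMan's VARS feature is available on your version: keep a single shared submit file, give each process its own one-process JOB node, and pass the index through a VARS macro in place of $(Process). Then -maxjobs throttles individual processes, and only one submit file is needed. The script, filenames, and macro name (`procid`) are illustrative.

```python
# Sketch: generate a DAG with one node per process, all sharing a single
# submit file. Each node's VARS line passes the process index, which the
# submit file reads as $(procid) instead of $(Process).
# Filenames and the default count are illustrative assumptions.

N_PROCS = 300_000  # size of the original cluster


def write_dag(path="throttled.dag", submit_file="one_proc.sub", n=N_PROCS):
    """Write a DAG file with n single-process JOB nodes."""
    with open(path, "w") as dag:
        for i in range(n):
            # Every node reuses the same submit file; VARS supplies the index.
            dag.write(f"JOB proc{i} {submit_file}\n")
            dag.write(f'VARS proc{i} procid="{i}"\n')


if __name__ == "__main__":
    write_dag(n=5)  # small n just to demonstrate the output format
```

The shared submit file would then use `arguments = $(procid)` and a bare `queue`, and the DAG would be submitted with `condor_submit_dag -maxjobs <limit> throttled.dag`.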