
Re: [Condor-users] maxidle for a dag with one node?

I do not believe multi-process clusters are supported in DAGs; at least, they are not on the Windows platform.


From: Rob de Graaf <r.degraaf@xxxxxxxxxxxx>
To: Condor-Users Mail List <condor-users@xxxxxxxxxxx>
Date: 09/22/2011 04:20 AM
Subject: [Condor-users] maxidle for a dag with one node?
Sent by: condor-users-bounces@xxxxxxxxxxx

Hello list,

I have a job consisting of one cluster with several hundred thousand
processes. The individual processes use $(Process) as an argument. I
can't submit them all at once, so I made a DAG with one JOB node and
tried to use condor_submit_dag's -maxidle throttling capability.
According to the manual, each individual process counts as a job, so
this matches what I want to do, but it doesn't seem to work: the entire
cluster is submitted regardless of what I set -maxidle to. I've also
tried -maxjobs just in case, but that does what it says and throttles
whole clusters, not the processes within them.
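For reference, the setup described above would look roughly like this (file and executable names are illustrative, not from the original post):

```
# one.dag -- single-node DAG wrapping the whole cluster
JOB bigjob job.sub

# job.sub -- one cluster with many processes
executable = worker
arguments  = $(Process)
queue 300000
```

submitted with something like `condor_submit_dag -maxidle 100 one.dag`. Since DAGMan sees only one node here, node-level throttles have a single unit to work with.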

Is there a way to throttle processes in a single-node DAG? I realize
that I could split the cluster into many single-process clusters and use
-maxjobs, but then I wouldn't be able to use $(Process) anymore. Ideally
I'd like to avoid having to generate that many submit files.
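One possible workaround, sketched under the assumption that DAGMan's VARS mechanism is available in your Condor version: generate a DAG with one node per process, all sharing a single submit file, and pass each node its own index via VARS in place of $(Process). Then -maxjobs throttles individual jobs, and only one submit file is needed. Node names (procN), the macro name (index), and the submit file name (job.sub) are illustrative.

```python
# Sketch: emit a DAG with one single-process node per index so that
# condor_submit_dag -maxjobs throttles individual jobs.  The shared
# submit file would use $(index) wherever it previously used $(Process)
# and end with a plain "queue" (one process per cluster).

def write_dag(path, n_procs, submit_file="job.sub"):
    """Write a DAG file with one node per process index."""
    with open(path, "w") as dag:
        for i in range(n_procs):
            # Each node reuses the same submit file...
            dag.write(f"JOB proc{i} {submit_file}\n")
            # ...and gets its own index through a DAGMan VARS macro.
            dag.write(f'VARS proc{i} index="{i}"\n')

write_dag("throttled.dag", 5)
```

The resulting DAG could then be run with `condor_submit_dag -maxjobs 100 throttled.dag`. The obvious cost is a very large DAG file for hundreds of thousands of processes, but it avoids generating that many submit files.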

