Hello list,

I have a job consisting of one cluster with several hundred thousand processes; each process uses $(Process) as an argument. I can't submit them all at once, so I made a DAG with a single JOB node and tried condor_submit_dag's -maxidle throttling. According to the manual, each individual process counts as a job, which matches what I want to do, but it doesn't seem to work: the entire cluster is submitted regardless of what I set -maxidle to. I've also tried -maxjobs just in case, but that does what it says and throttles whole clusters, not the processes within them.
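For reference, here's a minimal sketch of my setup (filenames and the queue count are illustrative):

```
# big.dag -- a single node that submits the whole cluster
JOB A big.sub

# big.sub -- one cluster with several hundred thousand procs
executable = my_program
arguments  = $(Process)
queue 300000
```

submitted with something like: condor_submit_dag -maxidle 1000 big.dag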
Is there a way to throttle the processes within a single-node DAG? I realize I could split the cluster into many single-process clusters and use -maxjobs, but then I wouldn't be able to use $(Process) anymore, and ideally I'd like to avoid generating that many submit files.
Thanks! Rob