[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [HTCondor-users] concurrency_limits question



On 06/24/2014 10:33 AM, Marc W. Mengel wrote:

Can you use
   condor_chirp set_job_attr
to change your "concurrency_limits" to say you are done with
a shared resource, and/or that you have picked it back up
again?

Marc Mengel <mengel@xxxxxxxx>

So, to a first approximation, answering my own question, it appears the
answer is "No." To wit: if I have a resource named "10" with a limit
of 10 jobs, and I submit a bunch of wimpy jobs with

concurrency_limits=10

and a script like:

--------------------
#!/bin/sh

duration=120

# Hold the concurrency limit for the first half of the job...
sleep $duration

# ...then try to release it by rewriting the job ad in place:
condor_chirp set_job_attr ConcurrencyLimits 'none'

sleep $duration
-------------------

If I look with
  condor_q -long -attributes ClusterId,ProcID,ConcurrencyLimits
I see some jobs with
  ConcurrencyLimits = none
and some with
  ConcurrencyLimits = "10"
(i.e. 5 of each), and yet running
  condor_userprio -l | grep Concurrency
still says:
  ConcurrencyLimit_10 = 10.000000
which is to say, the limit check is not re-counting the ClassAd values;
it appears to track usage with a separate internal counter or something
along those lines.
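
(A side note, and an assumption on my part: I believe condor_chirp
set_job_attr treats its value argument as a ClassAd expression, which
would explain why the updated jobs show the bare token none rather than
a quoted string -- the shell quoting strips the quotes before the ClassAd
parser ever sees them. A minimal sketch of the difference, using printf
to stand in for condor_chirp:)

--------------------
#!/bin/sh
# What each quoting form actually passes as the attribute value
# (printf stands in for condor_chirp set_job_attr here):
printf '%s\n' 'none'      # bare token: none   -> a ClassAd attribute reference
printf '%s\n' '"none"'    # quoted:     "none" -> a ClassAd string literal
--------------------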

Is there any way to configure Condor so it will periodically re-count the
ClassAd values and correct the resource counters? Then jobs could release
resources they only need at job startup, rather than having them tied up
for the duration of the job...

Marc Mengel <mengel@xxxxxxxx>