Are you using the custom classad python functions? I have a suspicion there's a memory leak there...
> On Mar 31, 2016, at 2:02 PM, Todd Tannenbaum <tannenba@xxxxxxxxxxx> wrote:
> On 3/31/2016 7:55 AM, L Kreczko wrote:
>> Dear experts,
>> I am trying to understand the schedd behaviour I witnessed today.
>> After sending 10k (bad) jobs to hold status, the RAM usage of the
>> condor_schedd process exploded (see attached png).
>> The job_queue log is now 9.3GB and contains all ClassAds of the held
>> jobs (I assume this is what is causing the RAM usage).
>> This was not the case when the jobs were idle. Is this behaviour expected?
>> Can I do something to prevent this from happening?
> Hi Luke,
> What HTCondor version / operating system are you using?
> Including version information in any incident report is always a good idea. :)
> Also, did you submit these 10k jobs via 10,000 invocations of condor_submit, or via one invocation with "queue 10000" ?
> Just to be sure we have the correct facts: you submitted the 10k jobs, and memory usage of the schedd was fine (i.e. less than 5 gig according to your graph). Then schedd memory usage exploded to 15GB+ as soon as you did the condor_hold, and most (all?) of the jobs you put on hold were previously in the idle state.
> Also, could you send the output of
> condor_schedd -v
> condor_config_val -dump QUEUE
> As to whether there is something you can do to prevent this: once we have clarification on the above, we can investigate more (i.e. reproduce here) and hopefully give better advice. Until then I cannot say precisely what is going on, so my naive initial advice in the meantime would be to run the latest release in whatever series you are using, and perhaps hold jobs a chunk at a time, i.e. 500 at a time could be done like
> condor_hold -constraint 'ClusterId > 5000 && ClusterId <= 5500'
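A sketch of that chunked hold as a plain POSIX shell loop. The cluster-id range (5000-7000) and chunk size (500) below are placeholders, not values from this thread; the function only prints the condor_hold commands, so you can review them before piping to sh (or removing the echo) to actually run them:

```shell
#!/bin/sh
# Hold jobs a chunk of clusters at a time instead of one big condor_hold.
# hold_chunks FIRST LAST STEP prints one condor_hold command per chunk.
hold_chunks() {
    lo=$1; last=$2; step=$3
    while [ "$lo" -lt "$last" ]; do
        hi=$((lo + step))
        # Clamp the final chunk to the last cluster id.
        [ "$hi" -gt "$last" ] && hi=$last
        # Dry run: echo the command; drop 'echo' to really hold the jobs.
        echo condor_hold -constraint "ClusterId > $lo && ClusterId <= $hi"
        lo=$hi
    done
}

hold_chunks 5000 7000 500
```

Pausing briefly between chunks (e.g. a `sleep` in the loop) would also give the schedd time to commit each transaction before the next batch arrives.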
> Certainly HTCondor should be able to handle putting 10k jobs on hold in one go. As to what I think is going on: when you do condor_hold (or similar) on a large group of jobs all at once, either all the jobs go on hold or none of them do (i.e. database-style transactional processing). The schedd will store 10k changes to a transaction log in RAM... I wouldn't expect this log to take many gigs of RAM, however! But one improvement we've had in mind for a while (mainly for speed) is, instead of having 10k transaction log entries, to have one transaction log action that effectively records a constraint like "all jobs" or whatever you gave to condor_hold... A downside of implementing this is that it would not be forwards compatible - i.e. after upgrading to a new schedd with this feature, you may not be able to downgrade anymore (because the job_queue.log file may contain entries an old schedd would not understand).
> Absolute worst case you could shutdown HTCondor and remove everything in the $(SPOOL) directory, effectively flushing all your jobs to the bitbucket. Then before restarting you could set config knob SCHEDD_CLUSTER_INITIAL_VALUE to a number higher than your previous job id so that you don't repeat job id numbers, if you care about that. Of course it shouldn't have to come down to this extreme option, but I thought I'd mention it just in case everything is on fire and restarting HTCondor doesn't help.
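For reference, that knob would be a one-line addition to the local config (e.g. condor_config.local); the value 20000 here is only an illustrative placeholder - pick anything above the highest ClusterId your schedd has already handed out:

```
# Start new clusters above the last id used before the spool was flushed
SCHEDD_CLUSTER_INITIAL_VALUE = 20000
```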
> HTCondor-users mailing list
> To unsubscribe, send a message to htcondor-users-request@xxxxxxxxxxx with a
> subject: Unsubscribe
> You can also unsubscribe by visiting
> The archives can be found at: