
Re: [HTCondor-users] Schedd RAM usage exploding after condor_hold of 10k jobs



Hi Brian,

No, we do not use the custom classad python functions.

On 31 March 2016 at 20:24, Brian Bockelman <bbockelm@xxxxxxxxxxx> wrote:
Hi Luke,

Are you using the custom classad python functions? I have a suspicion there's a memory leak there...

Brian

> On Mar 31, 2016, at 2:02 PM, Todd Tannenbaum <tannenba@xxxxxxxxxxx> wrote:
>
> On 3/31/2016 7:55 AM, L Kreczko wrote:
>> Dear experts,
>>
>> I am trying to understand the schedd behaviour I witnessed today.
>> After sending 10k (bad) jobs to hold status, the RAM usage of the
>> condor_schedd process exploded (see attached png).
>>
>> The job_queue log is now 9.3GB and contains all ClassAds of the held
>> jobs (I assume this is what is causing the RAM usage).
>> This was not the case when the jobs were idle. Is this behaviour expected?
>> Can I do something to prevent this from happening?
>>
>> Cheers,
>> Luke
>>
>
> Hi Luke,
>
> What HTCondor version / operating system are you using?
>
> Including version information in any incident report is always a good idea. :)
>
> Also, did you submit these 10k jobs via 10,000 invocations of condor_submit, or via one invocation with "queue 10000" ?
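>
> (If it was one invocation: for reference, a minimal submit description along those lines might look like the sketch below - the executable and file names here are placeholders, not from your setup:)
>
>   executable = my_job.sh
>   arguments  = $(Process)
>   output     = out.$(Cluster).$(Process)
>   error      = err.$(Cluster).$(Process)
>   log        = jobs.log
>   queue 10000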
>
> Just to be sure we have the correct facts: you submitted the 10k jobs, and memory usage of the schedd was fine (i.e. less than 5 gig according to your graph). Then schedd memory usage exploded to 15GB+ as soon as you did the condor_hold, and most (all?) of the jobs you put on hold were previously in the idle state.
>
> Also, could you send the output of
>  condor_schedd -v
> and
>  condor_config_val -dump QUEUE
>
> As to whether there is something you can do to prevent this: once we have clarification on the above, we can investigate further (i.e. try to reproduce it here) and hopefully give better advice. Until then I cannot say precisely what is going on, so my naive interim advice would be to run the latest release in whatever series you are using, and perhaps hold jobs a chunk at a time. For instance, holding 500 at a time could be done like
>  condor_hold -constraint 'ClusterId > 5000 && ClusterId <= 5500'
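>
> If you want to script the chunking, a simple shell loop along these lines would work (the 0-10000 cluster id range and the pause are example values, not tuned recommendations):
>
>   for start in $(seq 0 500 9500); do
>     condor_hold -constraint "ClusterId > $start && ClusterId <= $((start + 500))"
>     sleep 5   # brief pause so each hold transaction can commit
>   done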
>
> Certainly HTCondor should be able to handle putting 10k jobs on hold in one go. As to what I think is going on: when you do condor_hold (or similar) on a large group of jobs all at once, either all of the jobs go on hold or none of them do (i.e. database-style transactional processing). The schedd will store the 10k changes in an in-RAM transaction log... though I wouldn't expect that log to take many gigs of RAM!
>
> One improvement we've had in mind for a while (mainly for speed) is, instead of writing 10k transaction log entries, to write a single transaction log action that records the constraint, e.g. "all jobs" or whatever you gave to condor_hold. A downside of implementing this is that it would not be forwards compatible - i.e. after upgrading to a new schedd with this feature, you may not be able to downgrade anymore, because the job_queue.log file may contain entries an old schedd would not understand.
>
> Absolute worst case, you could shut down HTCondor and remove everything in the $(SPOOL) directory, effectively flushing all your jobs to the bit bucket. Then, before restarting, you could set the config knob SCHEDD_CLUSTER_INITIAL_VALUE to a number higher than your previous highest job id so that you don't repeat job ids, if you care about that. Of course it shouldn't have to come down to this extreme option, but I thought I'd mention it just in case everything is on fire and restarting HTCondor doesn't help.
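>
> For the record, that worst-case sequence might look like the sketch below. This is a sketch under assumptions, not a tested procedure; in particular, double-check what condor_config_val SPOOL returns before removing anything:
>
>   condor_off -daemon schedd
>   rm -rf "$(condor_config_val SPOOL)"/*
>   # in a local config file, pick a value above your highest previous cluster id, e.g.:
>   #   SCHEDD_CLUSTER_INITIAL_VALUE = 200000
>   condor_on -daemon schedd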
>
> Thanks
> Todd


_______________________________________________
HTCondor-users mailing list
To unsubscribe, send a message to htcondor-users-request@xxxxxxxxxxx with a
subject: Unsubscribe
You can also unsubscribe by visiting
https://lists.cs.wisc.edu/mailman/listinfo/htcondor-users

The archives can be found at:
https://lists.cs.wisc.edu/archive/htcondor-users/



--
*********************************************************
 Dr Lukasz Kreczko
 Research Associate
 Department of Physics
 Particle Physics Group

 University of Bristol

 HH Wills Physics Lab
 University of Bristol
 Tyndall Avenue
 Bristol
 BS8 1TL


 +44 (0)117 928 8724
 A top 5 UK university with leading employers (2015)
 A top 5 UK university for research (2014 REF)
 A world top 40 university (QS Ranking 2015)
*********************************************************