
Re: [Condor-users] dealing with low memory



Neal,

I would go with the dynamic slots suggestion Lans provided. In
addition, I would hold any job that goes over its request_memory
setting...
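
Something along these lines in the schedd configuration should do it
(a sketch only, untested here; ImageSize is reported in KiB while
RequestMemory is in MB, hence the division):

  # Put running jobs on hold once their footprint exceeds the request.
  # JobStatus == 2 means the job is running.
  SYSTEM_PERIODIC_HOLD = (JobStatus == 2) && ((ImageSize / 1024) > RequestMemory)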
On Mon, Sep 13, 2010 at 1:46 PM, Lans Carstensen
<Lans.Carstensen@xxxxxxxxxxxxxx> wrote:
> Adding that requirement in this particular case would make those 1.2G jobs
> avoid that 8G system altogether given your static slot configuration: with
> eight static slots, each slot advertises only its even share of the memory,
> roughly 1024 MB, which fails (Memory > 1200).  Is that what you were
> looking for, or were you trying to figure out how best to use your 8-core
> 8G system?
>
> Partitionable and dynamic slots provide the most flexible alternative:
>
> http://www.cs.wisc.edu/condor/manual/v7.4/3_13Setting_Up.html#SECTION004139900000000000000
>
> You could reconfigure this host and others like it to a single partitionable
> slot, modify your SUBMIT_EXPRS to supply a default RequestMemory matching
> what each static slot used to provide (e.g. 1024 MB), and then request
> exactly the right amount of memory needed for these jobs (and others going
> forward).  This also lets you differentiate jobs capable of using multiple
> cores from those that can't.
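>
> For example (an untested sketch; the percentages and the 1024 MB
> default are illustrative values):
>
>   # execute host: one partitionable slot owning all resources
>   NUM_SLOTS = 1
>   NUM_SLOTS_TYPE_1 = 1
>   SLOT_TYPE_1 = cpus=100%, memory=100%, disk=100%, swap=100%
>   SLOT_TYPE_1_PARTITIONABLE = TRUE
>
>   # submit host: insert a default RequestMemory (in MB) into every
>   # job ad; verify that an explicit request_memory in a submit file
>   # takes precedence on your version
>   RequestMemory = 1024
>   SUBMIT_EXPRS = $(SUBMIT_EXPRS) RequestMemory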
>
> To manage the jobs within your slots you can create a job wrapper that sets
> rlimits based on memory:
>
> http://www.cs.wisc.edu/condor/manual/v7.4/3_13Setting_Up.html#SECTION0041313000000000000000
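>
> A minimal wrapper might look like the following (the path and the
> 1200 MB cap are placeholders, since nothing hands the wrapper the
> slot size on 7.4):
>
>   # condor_config on the execute host
>   USER_JOB_WRAPPER = /usr/local/libexec/memlimit_wrapper.sh
>
>   #!/bin/sh
>   # memlimit_wrapper.sh: cap the job's address space, then exec the
>   # real job with its original arguments.  ulimit -v takes KiB, so
>   # 1200 MB -> 1200 * 1024 = 1228800 KiB.
>   ulimit -v 1228800
>   exec "$@"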
>
> ...although there are several other policy alternatives if you want a less
> rigid slot boundary.
>
> -- Lans Carstensen
>
> Erik Erlandson wrote:
>>
>> One approach is to add a job requirement to your submission file that
>> takes memory into account, for example:
>>
>> requirements = (Memory > 1200)
>>
>> If you have an expectation of how much memory each job will use, another
>> approach would be to declare partitionable slots, and include expected
>> memory use in the job submissions.
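>>
>> For instance, in the submit description file (the executable name and
>> the 1200 MB figure are placeholders):
>>
>>   universe       = vanilla
>>   executable     = my_job
>>   request_memory = 1200
>>   request_cpus   = 1
>>   queue 8
>>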
>> -Erik
>>
>>
>> On Mon, 2010-09-13 at 11:12 -0400, Neal Becker wrote:
>>>
>>> I have an 8-core machine with 8G memory.  Trying to run 8 jobs, each
>>> using 1.2G, causes heavy swapping.
>>>
>>> How can I prevent this?  Should I reconfigure condor, setting
>>> MAX_NUM_CPUS, or is there some other way to limit the maximum number of
>>> running condor jobs when I submit these?
>>>
>>
>
> _______________________________________________
> Condor-users mailing list
> To unsubscribe, send a message to condor-users-request@xxxxxxxxxxx with a
> subject: Unsubscribe
> You can also unsubscribe by visiting
> https://lists.cs.wisc.edu/mailman/listinfo/condor-users
>
> The archives can be found at:
> https://lists.cs.wisc.edu/archive/condor-users/
>