
Re: [Condor-users] STARTD-based memory limit



On 06/02/2011 10:15 AM, Steven Timm wrote:

In my cluster I have been using a schedd-based method of
killing jobs that are using too much memory.

[root@fcdf1x1 local]# condor_config_val SYSTEM_PERIODIC_REMOVE
(NumJobStarts > 10) || (ImageSize>=2500000) || (JobRunCount>=1 &&
JobStatus==1 && ImageSize>=1000000)
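
In the local config file it is set as something like the following (a
sketch with the same thresholds; ImageSize is reported in KiB, so
2500000 is roughly 2.4 GB):

# Remove jobs that have restarted too often or grown too large.
SYSTEM_PERIODIC_REMOVE = (NumJobStarts > 10) || \
                         (ImageSize >= 2500000) || \
                         (JobRunCount >= 1 && JobStatus == 1 && ImageSize >= 1000000)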

But this has two weaknesses:

One is that it can sometimes take the shadow a long time to report the
high memory value back to the schedd so the schedd can act; in the
meantime the job grows too fast, sucks up all the RAM on the node, and
starts killing other processes.

The second is that I have a diverse pool of nodes, and I would like
jobs running on the nodes with more memory to be able to use it when it
is there.

So is there a way to evict jobs for which (ImageSize*2 > Memory)?
Would you use the KILL or the PREEMPT expression?

Steve Timm

Often policy evaluation is delegated to the Shadow; maybe it's a bug
that SYSTEM_PERIODIC_REMOVE evaluation is not.
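
For what it's worth, a startd-based version of that policy might look
roughly like the sketch below (untested; MEMORY_EXCEEDED is just a name
I made up for a local macro, and note that ImageSize in the job ad is
in KiB while Memory in the machine ad is in MiB):

# Evict the job once its image grows past half of the machine's RAM.
# This replaces, rather than appends to, any existing PREEMPT/KILL policy.
MEMORY_EXCEEDED = ( TARGET.ImageSize / 1024.0 > MY.Memory / 2.0 )

# PREEMPT is what triggers the (graceful) eviction.
PREEMPT = $(MEMORY_EXCEEDED)
# Make sure the startd evicts rather than merely suspends the job.
WANT_SUSPEND = FALSE
# KILL only decides when to give up on the graceful shutdown and
# hard-kill the job, so reusing the same trigger is the simplest choice.
KILL = $(MEMORY_EXCEEDED)

So it is PREEMPT that actually does the evicting; KILL just controls
how long the startd waits for a graceful exit before hard-killing.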

Best,


matt