
Re: [Condor-users] Condor memory issue again



Matt,

Thanks for the explanation.

Hi all,
I want to confirm with you about the memory size exposed to Condor jobs.
My understanding is that the memory size exposed to Condor jobs equals (MEMORY - RESERVED_MEMORY) if
MEMORY is defined in Condor's config file, or (physical memory detected by Condor - RESERVED_MEMORY)
if MEMORY is not set. In both cases, RESERVED_MEMORY is treated as zero if it is not defined
in the config file. For example, if a machine's physical memory is 512MB, MEMORY is set
to 396MB, and RESERVED_MEMORY is unset, the maximum memory the Condor jobs can use is 396MB.
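
In config-file terms, I assume the example above would look something like
this (just a sketch; my reading of the manual is that both values are in
megabytes):

  # condor_config excerpt -- sketch of the 512MB machine example
  MEMORY = 396          # override the detected 512MB of physical memory
  # RESERVED_MEMORY is left unset, so it is treated as 0
  # Expected result: the machine ClassAd advertises Memory = 396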
Thanks.
Xuehai


Condor does NOT enforce any VM related splits...
A process started by Condor can use as much memory/disk etc. as the OS
will give it.
In the case of memory this is almost always "as much as it wants, up to
the OS limit"

These values just alter what Condor *reports* to the outside world.
If your job's requirements settings indicate that it needs more than
the reported value, it will not bother using that machine...

I see. These values affect machine selection when users submit their jobs. Once a job gets submitted, it will use as much memory as it can get, up to the OS limit.
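
So, if I understand correctly, a submit file like the sketch below (my_job
is just a placeholder name) would never be matched to the 396MB machine
above, even though the job could use more memory once it is actually
running:

  # job.submit -- sketch showing how the advertised Memory value is used
  universe     = vanilla
  executable   = my_job          # placeholder executable
  # Match only machines advertising Memory >= 400 (MB); the machine
  # configured with MEMORY = 396 would be excluded at matchmaking time.
  requirements = Memory >= 400
  queue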


There was some talk a while back about monitoring this from the
startd/starter and kicking the job if it violated the rules, but this
is not available yet (and may never be)

Good to know.

Xuehai