
[Condor-users] jobs growing to different sizes after start: from small to "big" sizes unknown beforehand

Hello everyone,

Can someone point me to a good link/doc/etc. describing an effective strategy for dealing with the situation where submitted jobs may grow their memory to different sizes unknown beforehand: some stay small, but some can become very large, taking most of the node's memory.

(it is "too hard" to estimate/predict the memory size a job can grow to )

In this case, using request_memory and/or rank = Memory >= ... seems ineffective, since both rely on a number I cannot know at submit time (see the sketch below).
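
To make it concrete, this is roughly what I mean; a minimal submit-file sketch, where the executable name and the 2048 MB figure are just placeholders:

    # minimal sketch of the static approach that does not work well here:
    # both numbers are guesses fixed at submit time, but the job's real
    # peak memory is unknown beforehand
    universe        = vanilla
    executable      = my_job            # placeholder name
    request_memory  = 2048              # MB; a static guess, often badly wrong
    rank            = Memory >= 2048    # prefer slots advertising at least that much
    queue

If the guess is too small the job can exhaust the node; if it is too large, small jobs tie up memory they never use.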

The environment: vanilla universe, a cluster of Linux servers/blades as execute nodes, files on NFS, Condor version 7.8.1.

Any relevant ideas/information would be appreciated.

--
Thanks,
Val