[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
[Condor-users] Memory requirement versus partitionable slots
- Date: Fri, 14 May 2010 10:37:05 -0500
- From: Greg Langmead <glangmead@xxxxxxxxxxxxxxxxxx>
- Subject: [Condor-users] Memory requirement versus partitionable slots
I have a test cluster set up with partitionable slots. I also have a workflow that was written for static slots, so most of the jobs specify things like
Requirements = (Memory < 3000)
Requirements = (TotalMemory > 8000)
(The first keeps small jobs off slots with large RAM; the second puts big jobs on the big machines.)
These requirements do not work with partitionable slots, because a job's Requirements expression is evaluated only against the slots that exist at that moment. The first example will never match a partitionable slot, since each one advertises the entire machine's RAM. It will match dynamic slots, but that is no help, since dynamic slots are by definition already busy. The second example will match partitionable slots, but jobs cannot run on those directly; they are just blobs off of which dynamic slots are peeled.
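To make the mismatch concrete, here is a sketch of the slot ClassAds on a hypothetical 8 GB machine (attribute values are illustrative, not taken from any real pool):

```
# Partitionable slot: advertises the whole machine's memory.
#   SlotType = "Partitionable"
#   Memory = 8192
#   TotalMemory = 8192
#
# Dynamic slot carved off for a running job:
#   SlotType = "Dynamic"
#   Memory = 2048
#   State = "Claimed"
#
# (Memory < 3000) is false on the partitionable slot (8192 >= 3000),
# so it can only match the dynamic slot, which is already claimed.
# (TotalMemory > 8000) is true on the partitionable slot, but jobs
# cannot run on a partitionable slot directly.
```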
Does anyone know if I can configure Condor to globally ignore Memory or TotalMemory requirements that are set in my submit files? Or do I have to dig into my workflow generation code and strip them out?
Senior Research Scientist
Language Weaver, Inc.