Hi Thomas, all,
I can wholeheartedly recommend *not* scaling the jobs in the CE. This will only lead to wasted cores or memory, as others have pointed out.
Do so only as a last resort, if CPUs are the "currency" users are billed by *and* you do not have enough memory.
As long as jobs average around or below the available memory per core, PartitionableSlots will naturally attract a mix of jobs that balances mem/core requirements. Simply put, there are only so many high-mem jobs to start before only low-mem jobs fit. Be aware that you need *some* extra memory headroom for this, or you end up with fragmentation similar to the Multi-Core/Single-Core problems.
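For reference, the partitionable-slot setup this relies on is just the standard one: a single slot owning the whole machine, from which each job carves off what it requests. A minimal sketch (standard HTCondor startd knobs; adapt to your machines):

```
# One partitionable slot owning all cores and memory; leftover
# resources after each match remain available and tend to attract
# jobs with the complementary mem/core profile.
NUM_SLOTS = 1
NUM_SLOTS_TYPE_1 = 1
SLOT_TYPE_1 = cpus=100%, memory=100%
SLOT_TYPE_1_PARTITIONABLE = True
```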
Groups abusing the mem/core lenience still get penalised by having to wait longer when resources are scarce. So there is still an incentive to send well-behaved jobs to you.
If you want to help things along, use a RANK expression that selects the startds with the best memory/core ratio after a match. Our setup for this is explained briefly in .
In my experience, any policy based on the actual machine features works best. E.g. if you have some machines with 2GB/core and some with 3GB/core (and sooner or later you will), there is no point in enforcing a global ratio.
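One way to express such a ranking on the negotiator is a post-job rank that prefers slots whose remaining mem/core ratio is closest to what the job asked for. This is only an illustrative sketch, not our production expression (the knob name is the standard NEGOTIATOR_POST_JOB_RANK; the formula is an assumption):

```
# Higher rank = better fit. Negative squared distance between the
# slot's remaining memory-per-core and the job's requested ratio,
# so jobs land on machines whose leftover ratio they match best.
NEGOTIATOR_POST_JOB_RANK = 0 - pow((Memory / Cpus) - (RequestMemory / RequestCpus), 2)
```

Because it uses the slot's own Memory and Cpus attributes, this automatically respects per-machine ratios instead of a global one.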
In case you are worried about jagged leftovers or fragmentation, but have on average enough memory, consider quantizing requests into comfortable chunks. For example, we quantize memory to 512MB steps (versus the default 128MB) with a minimum of 2GB, to keep very small memory requests from skewing the usage ratio and to ensure that a "standard" WLCG job always fits when a slot is freed.
 See Section 3.3 RemainderScheduling
 See MODIFY_REQUEST_EXPR_REQUESTMEMORY etc.
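One way to write the quantization above with the standard schedd transform knob (a sketch of the idea, not necessarily our exact expression):

```
# Round memory requests up to multiples of 512MB, with a 2GB floor.
# quantize(x, 512) rounds x up to the next multiple of 512;
# max({...}) then enforces the 2048MB minimum.
MODIFY_REQUEST_EXPR_REQUESTMEMORY = max({2048, quantize(RequestMemory, 512)})
```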