
Re: [HTCondor-users] Limiting number of jobs of specific user to N per node


Related to this, one thing that I've always wished we could have is "default" machine resources, analogous to default concurrency limits: a single setting on the execute nodes that automatically defines a per-user resource pool for every user. Each user would then be able to request, for example:

request_user_myuser = 250

to have only four of their jobs running on each node.
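Concretely, such a setup might look like the following sketch. Note that the MACHINE_RESOURCE_user_DEFAULT knob is hypothetical (it does not exist in HTCondor today; that is the point of the feature request), and the 1000-unit pool size is an assumption chosen so that a request of 250 yields 1000/250 = 4 jobs per node:

```
# Hypothetical execute-node configuration: every user automatically
# gets a 1000-unit resource pool on this machine, without the admin
# having to enumerate users one by one.
MACHINE_RESOURCE_user_DEFAULT = 1000

# Hypothetical submit-file line: each job consumes 250 units of the
# submitting user's pool, so at most 1000/250 = 4 of that user's jobs
# can run on one node at a time.
request_user_myuser = 250
```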

How hard would that be to implement? I'd even think about having a go at it if it isn't too hard...



On 10/23/2017 03:51 PM, Greg Thain wrote:
On 10/23/2017 01:23 AM, Sean Crosby wrote:
Hi all,

We run jobs for the Belle experiment, and at the moment their jobs are very I/O-intensive. If, say, on a 12-core node all 12 cores are taken up by Belle jobs, the node suffers heavily from I/O problems.

We'd like to limit the number of Belle jobs on each node to (say) 4, while keeping the other 8 slots open for other users' jobs.

What's the easiest way to do this?

Machine custom resources are the best way to do this. On the execute side, you can define how many units of an arbitrary resource the machine has, like this:

MACHINE_RESOURCE_belle = 4

and in the job ad, the belle jobs should say

Request_belle = 1

which means "only match to machines which have 1 or more belle resources remaining, and consume 1 for the duration of my job".
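Put together, a Belle submit description using such a resource could look like this minimal sketch (the executable name and job count are placeholders, not from the original thread):

```
# Submit-file sketch: each job claims one "belle" unit in addition to
# its CPU, so at most MACHINE_RESOURCE_belle (here, 4) Belle jobs can
# match on any one execute node; the remaining slots stay available
# for jobs that do not request the resource.
executable    = belle_analysis.sh
request_cpus  = 1
request_belle = 1
queue 10
```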

Dr. Joan Josep Piles-Contreras
ZWE Scientific Computing
Max Planck Institute for Intelligent Systems
(p) +49 7071 601 1750
