
Re: [HTCondor-users] How to configure HTCondor so that one slot corresponds to one physical machine

You can do this by configuring

SLOT_TYPE_1 = 100%

This will result in the startd creating a single static slot that advertises all of the CPUs, memory, and disk it detects.
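[A slightly fuller sketch of this configuration; the extra knobs below are standard startd settings but are my addition, not part of the original reply, so verify them against your HTCondor version's manual:]

```
# Execute-node condor_config sketch: one static slot owning the whole machine.
SLOT_TYPE_1 = 100%                  # slot type 1 claims all detected CPUs, memory, and disk
NUM_SLOTS_TYPE_1 = 1                # create exactly one slot of that type
SLOT_TYPE_1_PARTITIONABLE = FALSE   # keep it static, so one job occupies the whole machine
```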


-----Original Message-----
From: HTCondor-users <htcondor-users-bounces@xxxxxxxxxxx> On Behalf Of Ben Cotton
Sent: Friday, June 21, 2019 9:21 AM
To: HTCondor-Users Mail List <htcondor-users@xxxxxxxxxxx>
Subject: Re: [HTCondor-users] How to configure HTCondor so that one slot corresponds to one physical machine

On Fri, Jun 21, 2019 at 9:12 AM <xuhujun8000@xxxxxxx> wrote:

>     I would like to configure HTCondor so that each slot corresponds to one physical machine. Currently the default behavior is that one slot corresponds to one core: if a physical machine has 10 cores, HTCondor will "assign" 10 slots to it, and as a result 10 jobs can run on the same machine. What I want is for HTCondor to assign only 1 slot to each machine, regardless of how many cores/threads it supports.

If you want this to be done statically, you can set

   NUM_SLOTS = 1

on your execute nodes. The manual describes this as:

An integer value representing the number of slots reported when the
multi-core machine is being evenly divided, and the slot type settings
described above are not being used. The default is one slot for each
CPU. This setting can be used to reserve some CPUs on a multi-core
machine, which would not be reported to the HTCondor pool. This value
cannot be used to make HTCondor advertise more slots than there are
CPUs on the machine. To do that, use NUM_CPUS .
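[As a concrete illustration of the "reserve some CPUs" use mentioned in the manual excerpt above; the numbers are mine, not from the manual:]

```
# On a 16-core execute node: advertise only 12 evenly-divided single-core
# slots, leaving 4 cores unreported to the HTCondor pool.
NUM_SLOTS = 12
```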

If you want to do this flexibly (so that jobs can request the number
of cores they need), see:
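[The link is missing from the archived message; the flexible approach usually meant here is a partitionable slot. A minimal sketch using standard startd knobs (my addition, check it against your version's manual):]

```
# One partitionable parent slot owning the whole machine; the startd
# carves off a dynamic slot sized to each job's request_cpus /
# request_memory, so jobs request only the cores they need.
SLOT_TYPE_1 = 100%
SLOT_TYPE_1_PARTITIONABLE = TRUE
NUM_SLOTS_TYPE_1 = 1
NUM_SLOTS = 1
```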

Ben Cotton
He / Him / His
Fedora Program Manager
Red Hat
