Re: [HTCondor-users] Affinity of interactive jobs for a particular node?
- Date: Wed, 05 Aug 2015 20:42:04 +0000
- From: Nathan Smith <sminatha@xxxxxxxx>
- Subject: Re: [HTCondor-users] Affinity of interactive jobs for a particular node?
Very good, thanks for the tip. I found that in our environment this was set to:
(RemoteOwner =?= UNDEFINED) * (KFlops - SlotID - 1.0e10*(Offline=?=True))
I also noticed using condor_status that we have highly variable KFlops within our environment, and that the distribution of jobs indeed matches the rank of KFlops (assuming the machine ClassAd matches the requirements of the job).
Any tips for smoothing the KFlops value when used to calculate NEGOTIATOR_POST_JOB_RANK ?
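For reference, the kind of expression I'm imagining -- untested, and the bucket size is only a guess -- would bucket KFlops with the ClassAd floor() function so machines whose benchmarks differ by less than the bucket rank equally, and then break ties randomly:

    # Untested sketch: quantize KFlops into ~1 GFlop buckets so small
    # benchmark differences stop dominating placement, keep the existing
    # SlotID/Offline terms, and add a small random term to spread jobs
    # among machines that land in the same bucket.
    NEGOTIATOR_POST_JOB_RANK = (RemoteOwner =?= UNDEFINED) * \
        (1.0e6 * floor(KFlops / 1.0e6) - SlotID \
         - 1.0e10*(Offline=?=True) + random(1000))

Does that seem like a reasonable approach, or is there a more standard idiom for this?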
Research Systems Engineer
Advanced Computing Center
Oregon Health & Science University
From: HTCondor-users [htcondor-users-bounces@xxxxxxxxxxx] on behalf of Todd Tannenbaum [tannenba@xxxxxxxxxxx]
Sent: Tuesday, August 04, 2015 1:44 PM
To: HTCondor-Users Mail List
Subject: Re: [HTCondor-users] Affinity of interactive jobs for a particular node?
On 8/3/2015 10:42 AM, Nathan Smith wrote:
> We have observed that HTCondor appears to select a favored machine for interactive jobs. The result is that we'll have multiple interactive jobs assigned to a single machine, even as HTCondor is matching non-interactive jobs in a more distributed manner. Is there anything which can be done to avoid clumping interactive jobs on a single node?
> We're using the HTCondor 8.0 branch, and have partitionable slots configured.
Perhaps just changing NEGOTIATOR_POST_JOB_RANK
(see http://goo.gl/ZHnPhC) in your condor_config on your central
manager machine (and doing a condor_reconfig of course) to something like:
NEGOTIATOR_POST_JOB_RANK = random(10000)
There are fancier things done by default in more recent versions of
HTCondor, but I think the above may do the trick for you in v8.0.x.
HTCondor-users mailing list