
Re: [HTCondor-users] Dedicated Scheduler Config to enable Parallel Jobs.



This aspect is one of the reasons I opted to use SCHEDD_HOST and SCHEDD_NAME on user machines to establish a single central schedd to which all jobs are submitted. Thanks to the CHTC team's outstanding optimization efforts since then, in partnership with CERN, a single schedd scales much better than it used to: to thousands or even tens of thousands of queued jobs.
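
For reference, a minimal sketch of that setup, using a hypothetical central schedd host (the hostname below is a placeholder, not from this thread); each user machine's local config would say something like:

    # Point condor_submit, condor_q, etc. at the central schedd
    # instead of a locally running one (hostname is hypothetical):
    SCHEDD_HOST = central-schedd.example.com
    SCHEDD_NAME = central-schedd.example.com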

Maybe that would work in this case, to prevent you from having to log in to the DedicatedScheduler. Not sure how it would interact with Windows, but worth a look in any case.

Michael V. Pelletier
Information Technology
Digital Transformation & Innovation
Integrated Defense Systems
Raytheon Company

From: HTCondor-users [mailto:htcondor-users-bounces@xxxxxxxxxxx] On Behalf Of John M Knoeller
Sent: Wednesday, October 3, 2018 3:17 PM
To: HTCondor-Users Mail List <htcondor-users@xxxxxxxxxxx>
Subject: [External] Re: [HTCondor-users] Dedicated Scheduler Config to enable Parallel Jobs.

This is normal for the parallel universe.  The reason is that the execute nodes must be configured to respond to a single dedicated scheduler, so only jobs submitted to that scheduler will ever run on them.

You would split your execute nodes up by configuring half of them to use schedd A as the dedicated scheduler, and the other half to use schedd B.  Then you could submit jobs to either schedd A or schedd B, but those jobs would never be able to use more than half of the execute nodes.
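
As a sketch, assuming two hypothetical schedd hosts schedd-a.example.com and schedd-b.example.com, the first half of the execute nodes would carry config along these lines:

    # These execute nodes accept parallel jobs only from schedd A;
    # advertise the setting in the startd's ClassAd:
    DedicatedScheduler = "DedicatedScheduler@schedd-a.example.com"
    STARTD_ATTRS = $(STARTD_ATTRS), DedicatedScheduler

and the other half would be identical, with schedd-b.example.com in place of schedd-a.example.com.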

This is the same whether your schedd and/or execute nodes are Windows or Linux.
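
For illustration, a minimal parallel-universe submit description (all values are placeholders) would look like the following; submitted to schedd A, it can only ever match the nodes dedicated to schedd A:

    universe      = parallel
    executable    = /bin/sleep
    arguments     = 60
    machine_count = 4
    queue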

-tj