[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [HTCondor-users] BOSCO question



2017-08-31 15:52 GMT-04:00 Greg Thain <gthain@xxxxxxxxxxx>:
> On 08/29/2017 10:44 AM, Zhuo Zhang wrote:
>>
>> Hi Greg,
>>
>> Thanks for the reply. The reason that I am asking the job submission
>> manager is that we have three small HTCondor pools, we want something that
>> can submit jobs to multiple clusters simultaneously in order to fully
>> utilize the resources.
>
>
> HTCondor can do this itself, and there are several different ways to do so.
>
> If you want this to happen for all jobs submitted to any of the three pools,
> perhaps the easiest way is with flocking.
>
> When a condor_schedd flocks to a remote pool, any idle jobs that don't match
> in the local pool are sent to one or more remote pools. For each schedd that
> you want to flock to a remote pool, just set in that schedd's config file:
>
> FLOCK_TO = remotePool1
>
> and in the central manager of remotePool1, add the name of the schedd
> machine to the FLOCK_FROM line:
>
> FLOCK_FROM = remoteSchedd
>
> There are several other ways to join pools together, but for a small number
> of pools under the same administrative control, I'd start with flocking.
>
>
> -greg
>

Hi Greg,

Besides the fact that, as you point out, HTCondor can do this by itself,
from his explanation it sounds to me like he is looking for some sort of
job submission framework, like glideinWMS or AutoPyFactory.
The only problem is that those two may be overkill for this
particular case.
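
For reference, the flocking setup Greg describes would boil down to a
config sketch like this (host names are placeholders, not real machines):

```
# On the submitting schedd's config (e.g. condor_config.local):
# idle jobs that don't match locally may flock to this remote pool
FLOCK_TO = cm.remotepool1.example.org

# On the central manager of remotePool1:
# allow that schedd machine to flock in
FLOCK_FROM = schedd.localpool.example.org
```

Depending on the security configuration of the pools, the remote pool
may also need to authorize the flocking schedd in its ALLOW_* settings.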

But it would be useful to understand the real need. I mean, are all
jobs always identical (in terms of executable and list of arguments),
or may each job be unique?

Just random thoughts.
Jose