Re: [Condor-users] Looking for suggestions for inhomogeneous pool
- Date: Thu, 24 Nov 2011 13:47:07 +0100
- From: Steffen Grunewald <Steffen.Grunewald@xxxxxxxxxx>
- Subject: Re: [Condor-users] Looking for suggestions for inhomogeneous pool
On Thu, Nov 24, 2011 at 01:20:35PM +0100, Lukas Slebodnik wrote:
> On Thu, Nov 24, 2011 at 01:03:32PM +0100, Steffen Grunewald wrote:
> > On Thu, Nov 24, 2011 at 10:48:33AM +0100, Steffen Grunewald wrote:
> > > But I always get confused by the Target/My prefixes, and again this time I
> > > suspect I got it wrong - parallel universe jobs, with a
> > >
> > > NEGOTIATOR_PRE_JOB_RANK=1000000000 + 1000000000 * (TARGET.JobUniverse == 11) * TotalCpus - 1000 * Memory
> > Replacing the "==" with a "=?=" and a series of reconfigs and restarts somehow
> > fixed the issue that "low end" machines were selected.
> > Now, a different problem shows up.
> > 1. Not all output=out.$(NODE) files get written (2 of 8 missing)
> > 2. Each multi-core machine (single dynamic slot) gets only one MPI node
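For reference, the rank expression with "==" replaced by the meta-equality operator "=?=" (as described above) would read as follows; "=?=" never evaluates to UNDEFINED, so machines whose ads lack a matching attribute no longer poison the whole expression:

```
NEGOTIATOR_PRE_JOB_RANK = 1000000000 + 1000000000 * (TARGET.JobUniverse =?= 11) * TotalCpus - 1000 * Memory
```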
> request_cpus = TARGET.Cpus
> Is this what you need?
Wouldn't that
(a) limit me to one machine (if "queue 1"), and
(b) result in a single MPI node which probably doesn't know about multiple cores?
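To illustrate the concern, a minimal parallel-universe submit description with that suggestion applied might look like this (the executable name is made up); with one dynamic slot per machine absorbing all its cores, "queue 1" yields exactly one MPI node:

```
universe     = parallel
executable   = mp1script
machine_count = 1
request_cpus = TARGET.Cpus
output       = out.$(NODE)
error        = err.$(NODE)
log          = job.log
queue
```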
What about a heterogeneous pool (say, one big machine with 16 cores, some medium-
sized ones with 8 cores, and some small ones with 2 cores - and a job consisting
of about 100 nodes/threads)? Of course there's OpenMP, but I cannot rely on it.
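One possible sketch for such a pool (an assumption on my part, not something tested here): request single-core slots and let machine_count spread the nodes across whatever dynamic slots the heterogeneous machines can carve out:

```
universe      = parallel
executable    = mp1script
machine_count = 100
request_cpus  = 1
output        = out.$(NODE)
queue
```

A 16-core machine could then host up to 16 of the 100 nodes, a 2-core machine up to 2, without the job depending on OpenMP inside any single node.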
> I'll leave the answer to the first question to others :)