
Re: [Condor-users] Re: split processors among multiple jobs



On Mon, 21 Mar 2005, Matt Hope wrote:

> On Fri, 18 Mar 2005 18:32:27 -0800, John Wheez <john@xxxxxxxxxx> wrote:
> > OK, I see what is happening: by using priority = -$(Process), all the
> > frames in each job in a cluster are being given an order in which to
> > be computed...
> >
> > What is really needed is the ability to assign a priority to the
> > cluster, so that if two clusters have the same priority, each cluster
> > will get some CPUs.
> >
> > The method below does split CPUs between all clusters that a user
> > submits, but it does it in an unintelligent fashion. For example, if I
> > submit clusters A & B at the same time, then the CPUs will be split;
> > but if I enter a new cluster C five minutes after A & B, then the CPUs
> > will all go to cluster C until its jobs have reached the same process
> > number as A or B.
> >
> > What would be nice is if we could assign priorities to clusters and
> > have Condor use that priority to decide what percentage of resources
> > should go to each cluster. That way, even if a cluster is submitted
> > five minutes later, it will not suck up all the resources.
>
> You are heading into more complex territory here.
>
> If you want to be able to do what you describe, you need some common
> behaviour across clusters, and that depends on a local convention:
> namely, an upper bound on the number of processes in any one cluster.
>
> Assumption: you will never have more than 1000 processes per cluster:
>
> MyClusterPriority = X
> priority = ($(MyClusterPriority) * 1000) - $(Process)

Clever, but I believe currently priority must be an integer literal, not
an expression.
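
One way to work around that restriction, since $(Process) is expanded to
a plain integer at submit time anyway: restate priority as a precomputed
literal before each queue statement. A minimal sketch of a submit
description, assuming a hypothetical executable and a made-up base
cluster priority of 2:

  # Cluster priority 2, folded in as (2 * 1000) - <process number>.
  universe   = vanilla
  executable = render_frame

  # First job: 2000 - 0
  priority = 2000
  queue

  # Second job: 2000 - 1
  priority = 1999
  queue

Such a file is easy to generate from a script. Alternatively, job
priorities can be adjusted after submission with condor_prio (e.g.
condor_prio -p 1999 <cluster>.<proc>), so a wrapper could submit normally
and then apply the computed values.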

+----------------------------------+---------------------------------+
|            Jaime Frey            |  Public Split on Whether        |
|        jfrey@xxxxxxxxxxx         |  Bush Is a Divider              |
|  http://www.cs.wisc.edu/~jfrey/  |         -- CNN Scrolling Banner |
+----------------------------------+---------------------------------+