
Re: [Condor-users] Re: split processors among multiple jobs



On Fri, 18 Mar 2005 18:32:27 -0800, John Wheez <john@xxxxxxxxxx> wrote:
> Ok, I see what is happening: by using priority = -$(Process), all the
> frames in each job in a cluster are being given an order in which to
> be computed.
> 
> What is really needed is the ability to assign a priority to the
> cluster, and if two clusters have the same priority then each cluster
> will get some CPUs.
> 
> The method below does split CPUs between all clusters that a user
> submits, but it does it in a non-intelligent fashion. For example, if
> I submit clusters A & B at the same time then the CPUs will be split;
> but if I enter a new cluster C five minutes after A & B, then the
> CPUs will all go to cluster C until its jobs have reached the same
> process number as A or B.
> 
> What would be nice is if we could have the option to assign
> priorities to clusters and have Condor use that priority to decide
> what percentage of resources should go to each cluster. That way,
> even if a cluster is submitted 5 minutes later, it will not suck up
> all the resources.

You are heading into more complex territory here.

If you want to be able to do what you describe, you need a common
convention across submissions, and that relies on a local assumption:
an upper bound on the number of processes in any one cluster.

Assumption: you will never have more than 1000 processes per cluster.
Then, in each submit file:

MyClusterPriority = X
priority = ($(MyClusterPriority) * 1000) - $(Process)

So long as X is never greater than (2^31 - 1) / 1000 nor less than
-(2^31) / 1000 (job priorities being signed 32-bit integers), this will
work fine and gives you easy relative ranking of clusters while
interleaving the processes of equally ranked clusters.
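
For example (hypothetical frame counts): if clusters A and B are both
submitted with MyClusterPriority = 2 and a later cluster C with
MyClusterPriority = 1, then A.0 and B.0 get priority 2000, A.1 and B.1
get 1999, and so on, so A and B interleave frame for frame, while every
C job (priority 1000 and below) waits behind them. If your version of
condor_submit does not evaluate arithmetic in the priority line, the
same values can be precomputed by whatever generates the submit files:

# submit files for clusters A and B (both rank 2)
priority = 2000 - $(Process)   # ($(MyClusterPriority) * 1000) - $(Process), X = 2
queue 500

# submit file for cluster C (rank 1)
priority = 1000 - $(Process)
queue 500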

Note that if you wish to keep clusters A and B equal, then submit C and
have it receive a small share of the system's resources without taking
them all, you would need to manually boost the priority of a few of C's
jobs as necessary. You could try to get clever by making the priority
decay non-linear, and thus automatic, but I doubt any effort in that
area is worth it compared to a simple script to 'accelerate' a few jobs
in a cluster (a sketch follows below).

Note that pushing a new job to the top of the queue may trigger
preemption of an already running job, which might not be optimal. I
would suggest that any accelerator script find the lowest priority
among the currently running jobs and never exceed it...
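
A minimal sketch of such an accelerator script, assuming condor_q and
condor_prio are on the PATH and that the job priority lives in the
JobPrio ClassAd attribute; the script name and its arguments are made
up for illustration:

#!/usr/bin/env python
# accelerate.py CLUSTER COUNT
# Boost the first COUNT procs of CLUSTER up to (but never above) the
# lowest priority currently held by any running job, so the boost
# cannot trigger preemption of an executing job.
import subprocess
import sys

def running_priorities():
    # JobStatus == 2 means the job is currently running;
    # print one JobPrio value per line.
    out = subprocess.check_output(
        ["condor_q", "-constraint", "JobStatus == 2",
         "-format", "%d\n", "JobPrio"]).decode()
    return [int(p) for p in out.split()]

def main():
    cluster = sys.argv[1]
    count = int(sys.argv[2])
    prios = running_priorities()
    # Never exceed the lowest running priority; fall back to 0 if
    # nothing is running at the moment.
    target = min(prios) if prios else 0
    for proc in range(count):
        subprocess.call(
            ["condor_prio", "-p", str(target),
             "%s.%d" % (cluster, proc)])

if __name__ == "__main__":
    main()

Run as, e.g., "accelerate.py 123 5" to pull five of cluster 123's jobs
level with the lowest-priority running job.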

Matt