
Re: [Condor-users] Jobs License Management



(inline)

Jason Stowe wrote:
Stuart,
This limiting happens at matchmaking time and lasts for the life of
a job. This is why there is no preemption based upon decreasing limits
below current usage.

Actually, that's not true. There are few technical reasons why preemption doesn't happen here; the Negotiator already plays a central role in deciding preemptions in other cases. The real reason is more philosophical, rooted in how Condor has worked in the past and continues to work.


One thing that would be an important part of this feature would be to
allow Condor to account for limits in more than just multiples of 1.
For I/O connection or other I/O-related limits, it would be useful and
probably trivial to add the ability to specify a format like:
concurrency_limits = <LIMIT_NAME>:<LIMIT_USED>, ....
The ":<LIMIT_USED>" would be optional and default to 1, but it would
let you specify fractional numbers for jobs that use limits.
E.g. rather than:
concurrency_limits = APPNAME, APPNAME, PLUGIN_B
you would have
concurrency_limits = APPNAME:2, PLUGIN_B

I toyed with this possibility, but left it out for now. The current implementation allows for anything you would want to do with an X=#, at the expense of brevity.


This would enable more load-based limits like:
concurrency_limits = LOAD_LIMIT:1.25, APPNAME
and on other jobs:
concurrency_limits = LOAD_LIMIT:0.5

This kind of fractional specification would increase the number of use
cases this feature applies to.
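
To make that concrete: the fractional requests above would presumably
be charged against the pool-wide cap that the feature already declares
in the negotiator configuration via the <LimitName>_LIMIT setting (the
value 16 below is a made-up example):

LOAD_LIMIT_LIMIT = 16

With that in place, the 1.25-unit and 0.5-unit jobs above would
together consume 1.75 of the 16 available units while they run.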

You can already do this, just not briefly if you want significance out to the hundredths place.
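
Concretely, the workaround implied here is to scale everything by 100
(a sketch, again assuming the existing <LimitName>_LIMIT negotiator
setting): a pool-wide load capacity of 4.00 becomes

LOAD_LIMIT_LIMIT = 400

and a job that needs 1.25 units of it has to name the limit 125 times:

concurrency_limits = LOAD_LIMIT, LOAD_LIMIT, LOAD_LIMIT, ... (125 entries in total)

which is exactly the brevity problem the :<LIMIT_USED> syntax would
remove.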


Best,


matt

Thanks,
Jason


On Tue, Oct 14, 2008 at 11:22 PM, Stuart Anderson
<anderson@xxxxxxxxxxxxxxxx> wrote:
In the context of the new Concurrency Limits feature, will it be possible
for a running job to drop a resource constraint once it is done with it,
or is it implicitly assumed that all jobs require their specified
resources for their entire lifetime?

The motivation for this is managing I/O resources, where a typical
workflow is to launch a large number of jobs that each read in a large
amount of data from a shared filesystem (or set of filesystems), and then
crunch on the data for a long time before outputting a relatively small
amount of results. It would be interesting to be able to hand out
tokens for filer access but then be able to return them after the
I/O-intensive phase of each individual job is done.
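
As a sketch of that setup with the limits as currently proposed
(FILER_NFS1 and the numbers are made-up examples; <LimitName>_LIMIT is
the negotiator-side knob), the tokens would be handed out like this,
but held for each job's entire lifetime rather than just its read
phase:

# negotiator/central manager configuration: at most 20 jobs may hold
# an NFS1 filer token at any one time
FILER_NFS1_LIMIT = 20

# submit description: each job claims one token for its whole lifetime
executable = crunch_data
concurrency_limits = FILER_NFS1
queue 1000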

Thanks.

--
Stuart Anderson  anderson@xxxxxxxxxxxxxxxx
http://www.ligo.caltech.edu/~anderson