
Re: [Condor-users] Ways to limit total job output?




On Thu, 28 Dec 2006, Erik Paulson wrote:

On Thu, Dec 28, 2006 at 02:07:39PM -0600, Steven Timm wrote:

I have a large cluster of execution machines on which I run
five VMs apiece, all of which share the same 250GB staging area.
Is there any mechanism within Condor to limit the total amount
of disk output a job generates before the job is killed?  I don't see
anywhere that the ClassAd keeps track of this quantity, just an
initial check between the disk the job claims it needs and the disk
actually available to the VM.  Any ideas?  Obviously it would be nice
to have such a feature so that one rogue job doesn't kill the other
four.


There is a DiskUsage attribute in the job ad, and I believe that it is
updated through the lifetime of the job, and that the startd always has
the most recent number available to it. You could use it as part of a
PREEMPT/KILL expression.
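Something along these lines, maybe (an untested sketch - the 20 GB
threshold is made up, and it only works if the startd really does see
updated DiskUsage values; DiskUsage is in KB):

  # Appended to the startd config, assuming PREEMPT/KILL are already
  # defined there.  20000000 KB is roughly 20 GB.
  PREEMPT = ($(PREEMPT)) || (TARGET.DiskUsage > 20000000)
  KILL    = ($(KILL)) || (TARGET.DiskUsage > 20000000)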

Nope.  All the jobs in my queue are showing the same initial DiskUsage
of 10000 that they started with, whether running or not, and these
jobs are writing within the job's execute directory.  This is true for
several different schedds, on both Condor 6.8.1 and Condor 6.8.2.
What's the next idea?
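For anyone who wants to compare on their own pool, something like this
should print the live DiskUsage for each running job (assuming I have
the condor_q options right):

  condor_q -constraint 'JobStatus == 2' \
           -format "%d." ClusterId -format "%d " ProcId \
           -format "%d\n" DiskUsage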

Steve Timm



I don't think DiskUsage tracks all disk usage, though - only what's in
the job's execute directory - so if you're writing to some sort of
scratch space that Condor isn't watching, it doesn't help you.
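One possible workaround, sketched under the assumption that the shared
staging area is what actually fills up: have a startd cron (Hawkeye)
script publish the staging area's usage as a machine attribute and
preempt on that instead of on the job's DiskUsage.  Roughly (the exact
startd cron knob names differ between Condor versions, so check the
manual for yours; the script path and the StagingDiskFreeKB attribute
are made up):

  # A cron job run by the startd every 5 minutes; its stdout is a set
  # of "Attribute = value" lines that get folded into the machine ad.
  STARTD_CRON_JOBLIST = STAGING
  STARTD_CRON_STAGING_EXECUTABLE = /usr/local/bin/staging_usage.sh
  STARTD_CRON_STAGING_PERIOD = 5m

  # If the script prints e.g. "StagingDiskFreeKB = 12345678", you can
  # then preempt everything on the node when the area gets too full:
  PREEMPT = ($(PREEMPT)) || (MY.StagingDiskFreeKB < 10000000)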

-Erik