Re: [Condor-users] Configuration
- Date: Fri, 05 Nov 2004 09:22:56 -0600
- From: Dan Bradley <dan@xxxxxxxxxxxx>
- Subject: Re: [Condor-users] Configuration
Joseph L. Kaiser wrote:
We are interested in doing it this way because this is the setup we
have. There is a schedd running on a separate machine dedicated to
these users. What are the gotchas I need to think about? For example,
if one user submits 100 jobs, are other users' jobs starved of
processor time? Is there a smart way to set priorities so that user
jobs are essentially load-balanced?
There certainly is a possibility of unbalanced resource claims when you
run things this way with multiple users submitting. In the development
series (6.7.x) you can force Condor to break resource claims between
jobs, which might help, but I haven't tested that in the context of a
schedd bumping up against MAX_JOBS_RUNNING.
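A minimal sketch of the claim-breaking setting, assuming the 6.7.x knob Dan means is CLAIM_WORKLIFE (check your version's manual before relying on it):

```
# Startd config sketch (assumption: CLAIM_WORKLIFE is the 6.7.x knob in question).
# With a worklife of 0 seconds, a claim may run only one job before it is
# released, so machines are renegotiated between jobs instead of being
# held by whichever user claimed them first.
CLAIM_WORKLIFE = 0
```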
Also, from your previous missive, I am assuming that MAX_JOBS_RUNNING
is set in the global config file on the schedd host. I'd put
MAX_JOBS_RUNNING in the local config file for the machine where the
jobs are submitted. If your main config file (condor_config) isn't on
a shared filesystem, then it doesn't matter whether you put it in the
"global" or local config file.
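A sketch of what the local config file on the dedicated submit machine might contain; the value 100 follows the example later in this thread, and the file name is the conventional default, which may differ at your site:

```
# condor_config.local on the submit/schedd machine (path is site-dependent).
# Cap the number of jobs this schedd will have running at once.
MAX_JOBS_RUNNING = 100
```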
On Tue, 2004-11-02 at 13:45, Zachary Miller wrote:
On Tue, Nov 02, 2004 at 01:39:18PM -0600, Dan Bradley wrote:
Support for this sort of group sharing policy is in active development.
In lieu of the full solution, there are various ways to get almost what
you want, with some undesired consequences.
One way is to pre-assign 100 machines to the group so that their jobs
only run there.
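A sketch of how the pre-assignment could be expressed on the startd side, assuming the group is identified by Unix usernames (the names here are hypothetical):

```
# Startd config on each of the 100 pre-assigned machines.
# Only accept jobs owned by the two group members
# ("usera" and "userb" are placeholder usernames).
START = ( Owner == "usera" || Owner == "userb" )
```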
Another way (with a different set of gotchas) would be to run a separate
schedd on a separate machine for these two users, and for that schedd set
MAX_JOBS_RUNNING to 100.