
Re: [Condor-users] Configuration




Joseph L. Kaiser wrote:


Hi,

We are interested in doing it this way because this is the setup we
already have: a schedd running on a separate machine dedicated to
these users. What are the gotchas I need to think about? For example,
if one user submits 100 jobs, are other users' jobs starved of
processor time? Is there a smart way to set priorities so that user
jobs are essentially load-balanced?



There certainly is a possibility of unbalanced resource claims when you run things this way with multiple users submitting. In the development series (6.7.x) you can force Condor to break resource claims between jobs, which might help, but I haven't tested that in the context of a schedd bumping up against MAX_JOBS_RUNNING.
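For reference, the knob I have in mind in the 6.7.x series is CLAIM_WORKLIFE (please double-check the manual for your exact version; I haven't tested this in your setup). Setting it to zero in the config read by the startds tells a startd not to reuse a claim for another job, so the slot goes back to the negotiator between jobs and other users get a chance at it:

    # In the config read by the startds (6.7.x development series;
    # untested here -- verify against the manual for your version).
    # A claim older than this many seconds will not be reused for
    # another job; zero means break the claim after every job.
    CLAIM_WORKLIFE = 0

The trade-off is extra matchmaking and claim-setup overhead per job, which matters most for short jobs.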


Also, from your previous missive, I am assuming that
MAX_JOBS_RUNNING is set in the global config file on the schedd host,
correct?

I'd put MAX_JOBS_RUNNING in the local config file for the machine where the jobs are submitted. If your main config file (condor_config) isn't on a shared filesystem, then it doesn't matter whether you put it in the "global" or local config file.
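Concretely, that would look something like this in the local config file on the dedicated submit host (file name and the value 100 are just examples; adjust for your install):

    # condor_config.local on the submit machine for these users.
    # Cap the number of jobs this schedd will have running at once.
    MAX_JOBS_RUNNING = 100

After changing it, a condor_reconfig on that machine should pick up the new value.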

--Dan

On Tue, 2004-11-02 at 13:45, Zachary Miller wrote:


On Tue, Nov 02, 2004 at 01:39:18PM -0600, Dan Bradley wrote:


Joe,

Support for this sort of group sharing policy is in active development. In lieu of the full solution, there are various ways to get almost what you want, with some undesired consequences.

One way is to pre-assign 100 machines to the group so that their jobs only run there.
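One sketch of that pre-assignment, assuming the two users' login names are user1 and user2 (placeholder names, not from the original message), is a START expression in the local config of the 100 reserved machines:

    # Local config on the machines reserved for the group.
    # Only accept jobs owned by these two users
    # (user1/user2 stand in for the real login names).
    START = ( (Owner == "user1") || (Owner == "user2") )

The obvious downside is the flip side of the reservation: when these users are idle, those machines sit unused by everyone else.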


another way (with a different set of gotchas) would be to run a separate
schedd on a separate machine for these two users, and for that schedd set
MAX_JOBS_RUNNING to 100.


cheers, -zach




_______________________________________________
Condor-users mailing list
Condor-users@xxxxxxxxxxx
http://lists.cs.wisc.edu/mailman/listinfo/condor-users