Re: [Condor-users] Condor server requirements
- Date: Fri, 18 Feb 2005 00:49:18 -0600
- From: Erik Paulson <epaulson@xxxxxxxxxxx>
- Subject: Re: [Condor-users] Condor server requirements
On Thu, Feb 17, 2005 at 03:26:05PM -0500, Ian Chesal wrote:
> > Each running job has an associated shadow process running on
> > the submit machine for the time the job is running...each one
> > consumes a small but non-trivial amount of resources.
> Has anyone on the condor team stress tested this? What's the observed
> maximum number of running and queued jobs a single 6.7.x schedd can
> handle on, say, a Linux machine? If the machine was really beefy (say dual
> 3GHz Xeons, 3GB+ of RAM, 100Mbit fiber NIC) -- what kind of concurrency
> could I expect?
There are no hard-and-fast rules for what a schedd can manage. My rule
of thumb is usually based on how many jobs are being submitted at any
one time, and how many jobs are completing at any one time. If only
one job completes every few minutes, and the schedd isn't getting
hammered with condor_q or condor_submit requests, there's no reason a
schedd can't manage hundreds or even more than a thousand running jobs,
so long as you have enough free file descriptors.
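A quick sanity check along those lines, assuming a Linux submit host: count the condor_shadow processes currently running (Condor forks one per running job) and look at the per-process file descriptor limit that bounds them. This is just a sketch; the exact fd cost per shadow varies by version and job type.

```shell
#!/bin/sh
# Gauge shadow load on a submit machine. condor_shadow is the
# per-running-job process the schedd forks; each one holds open
# file descriptors, so the fd limit caps how far one schedd scales.
PROC=condor_shadow

# Number of shadows running right now (0 if none).
count=$(pgrep -c "$PROC" || true)
echo "running shadows: ${count:-0}"

# Soft file-descriptor limit for new processes on this host.
echo "fd limit: $(ulimit -n)"
```

If the shadow count is creeping toward the fd limit, raising the limit (or spreading jobs over more schedds) is the usual fix.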
> The idea of moving my entire site to using a single schedd (or very few
> schedds) has been running through my head as of late.
We've found that it's much easier to manage a bunch of schedds with 50-100
running jobs each than it is to manage one schedd with 1000 running jobs.
Condor is very much like Napster: there's one central server for a bit of
the process, but the rest of it is peer-to-peer.
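One way to spread load across several schedds is to direct each submission at a specific schedd with condor_submit's -name option. A minimal round-robin sketch, with placeholder schedd hostnames and job files (the echo makes it a dry run; drop it to actually submit):

```shell
#!/bin/sh
# Round-robin job submissions across several schedds rather than
# piling everything on one. Schedd names and .sub files below are
# placeholders. "condor_submit -name <schedd>" sends the job to
# that schedd's queue. Echoed as a dry run.
SCHEDDS="schedd1.example.com schedd2.example.com schedd3.example.com"

i=0
for jobfile in job_a.sub job_b.sub job_c.sub job_d.sub; do
    set -- $SCHEDDS              # load names into $1, $2, ...
    shift $(( i % $# ))          # rotate to the next schedd
    echo condor_submit -name "$1" "$jobfile"
    i=$(( i + 1 ))
done
```

Each iteration prints one condor_submit command, cycling through the schedd names, so the fourth job wraps back to the first schedd.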