Re: [Condor-users] Submission of large numbers of jobs
- Date: Wed, 11 Jul 2012 09:30:40 +0200
- From: Martin Kudlej <mkudlej@xxxxxxxxxx>
- Subject: Re: [Condor-users] Submission of large numbers of jobs
Dear Mr. Candler,
On 07/10/2012 04:19 PM, Brian Candler wrote:
> If one has a large number of jobs to submit - say 100,000 jobs - what is the
> recommended way of doing this? Can simply submitting that number of jobs
> cause problems? These jobs will take their input from a shared filesystem.
> From what I've read, each job will take ~10KB of RAM in the schedd, so 100K
> jobs would be about 1GB of RAM just for the job queue. If I can afford that,
> is there anything else to worry about?
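For that many jobs, the usual approach is one submit description file with a single queue statement, so they land in one cluster rather than 100,000 separate submissions. A minimal sketch (the executable and input paths here are hypothetical placeholders; $(Process) is the standard per-job macro):

```
# Sketch of a submit file for 100,000 jobs in one cluster.
universe    = vanilla
executable  = my_job                          # hypothetical program name
arguments   = $(Process)                      # 0 .. 99999
# Input comes from a shared filesystem, so no file transfer is needed:
should_transfer_files = NO
log         = my_job.log
queue 100000
```

Submitting as one cluster also keeps condor_submit itself fast, since the schedd records one cluster ad plus per-job differences instead of 100,000 independent clusters.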
You can configure more than one scheduler (schedd) and spread the jobs across them.
You should also consider running on a 64-bit architecture, because the whole job queue is held in the memory of a single daemon, and a 32-bit schedd is limited in how much it can allocate.
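Running a second schedd on the same machine is typically done with the daemon "local-name" mechanism. A sketch of the configuration (the name schedd2 and the spool path are assumptions; exact knob names can vary between Condor versions, so check the manual for yours):

```
# Sketch: start a second schedd alongside the default one.
SCHEDD2             = $(SCHEDD)
SCHEDD2_ARGS        = -local-name schedd2
# Per-instance settings for the second schedd (paths are hypothetical):
SCHEDD.SCHEDD2.SCHEDD_NAME = schedd2
SCHEDD.SCHEDD2.SPOOL       = $(SPOOL)/schedd2
DAEMON_LIST         = $(DAEMON_LIST) SCHEDD2
```

Users would then target a particular schedd with something like `condor_submit -name schedd2@<hostname>`, splitting the 100K-job queue between daemons.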
MRG/Grid Senior Quality Assurance Engineer
Red Hat Czech s.r.o.