
Re: [HTCondor-users] Max Jobs



Suchandra,

The default value of MAX_JOBS_SUBMITTED is the largest integer
supported on your platform. In practice, though, other constraints
(mostly schedd memory and responsiveness) may stop you well before
that limit. I have seen ~50k jobs in a queue before, but condor_q
calls get pretty sluggish at that point.
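
If you'd rather cap the queue explicitly than rely on the default,
it's a single schedd config knob; the value below is just a
placeholder, not a recommendation:

  # local config on the schedd host; 50000 is only an example value
  MAX_JOBS_SUBMITTED = 50000

Then run condor_reconfig on that host and confirm the setting with
condor_config_val MAX_JOBS_SUBMITTED.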

The HTCondor wiki[1] says "Schedd requires a minimum of ~10k RAM per
job in the job queue. For jobs with huge environment values or other
big ClassAd attributes, the requirements are larger." In other words,
budget roughly 10 KB of schedd memory per queued job. Technically,
you'll also need more disk space for a larger job queue, but it's such
a small fraction of even the smallest disks these days that it's not
worth worrying about.
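
As a rough back-of-the-envelope check (assuming the wiki's ~10 KB per
job figure and no oversized ClassAds):

  50,000 queued jobs x ~10 KB/job ~= 500 MB of RAM for the schedd

so memory, not disk, is the thing to watch as the queue grows.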

For our customers who use CycleServer to send jobs to schedulers, we
suggest setting the maximum queue size to roughly 3 times the value
of MAX_JOBS_RUNNING. If you have something similar buffering jobs in
front of the schedd, that ratio is a reasonable starting point (see
the sketch below). If you're submitting directly to the scheduler,
you'll need to try different values to see what works best for your
use case.
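
As a concrete sketch of that rule of thumb in condor_config terms
(both numbers are made up for illustration, not taken from your pool):

  # example values only
  MAX_JOBS_RUNNING   = 5000
  # allow roughly 3x that many jobs to sit in the queue
  MAX_JOBS_SUBMITTED = 15000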


Thanks,
BC

-- 
Ben Cotton
main: 888.292.5320

Cycle Computing
Leader in Utility HPC Software

http://www.cyclecomputing.com
twitter: @cyclecomputing