
Re: [HTCondor-users] Why all jobs are sent to slots in the same machine?



David Hentchel wrote:
> To summarize, I am testing high-performance distributed DBMS servers in
> the Parallel universe with up to 100 concurrent, closely-integrated
> processes running on as many hosts.  I only ever want to allow 1 job

I suggest that HTCondor isn't the right tool for this. Database servers
aren't long-running batch jobs; they're continuously running services. A
configuration management tool like Puppet or Chef will probably serve
you much better than trying to make HTCondor do what you want.

In the meantime, you can limit the number of usable CPUs with
"NUM_CPUS" in each node's condor_config.local. Set "NUM_CPUS = 1" to
limit the number of slots to 1, then restart the condor_master on each node.
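For example (a minimal sketch; the local config file is typically
/etc/condor/condor_config.local on Linux packages, but check
LOCAL_CONFIG_FILE on your installation):

    # condor_config.local on each execute node
    # Advertise only one CPU so only one slot is created
    NUM_CPUS = 1

Then, on each node, restart the daemons so the new slot count takes
effect (a reconfig is not enough for NUM_CPUS):

    condor_restart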

-- 
Rich Pieri <ratinox@xxxxxxx>
MIT Laboratory for Nuclear Science