
Re: [HTCondor-users] Is there a method to nice docker jobs?



On 04/28/2016 02:22 AM, Matthias Schnepf wrote:
Hello,

I run HTCondor with the docker universe on our desktop cluster. Users should be able to work normally on their PCs, and when there are free resources HTCondor starts a docker container with the job. At the moment I suspend jobs when the LoadAvg reaches a set value, so that users have all of their PC's resources when they need them. But this solution has a "large" response time of about one minute.

Hello Matthias:

Currently, when HTCondor runs a docker universe job, it gives that job a number of cpu "shares", which is the mechanism the cpu cgroup controller uses to apportion cpu time among runnable processes. The number of shares given to the job is 100 * Cpus. If your desktop users are also running in a cpu cgroup, you could set their cpu shares proportionally higher, and when their processes are runnable, they would get proportionally more of the cpu on the system.
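To make the proportional behavior concrete, here is a small sketch of the shares arithmetic. The 100-shares-per-cpu rule is from the description above; the desktop user's share value (4096) is a hypothetical example, and under the cpu cgroup controller, contended cpu time divides in proportion to shares:

```shell
# HTCondor gives a docker-universe job 100 cpu shares per provisioned cpu.
JOB_CPUS=4
JOB_SHARES=$((100 * JOB_CPUS))            # 4 cpus -> 400 shares

# Hypothetical: the desktop user's session cgroup is weighted much higher.
USER_SHARES=4096

# Under full contention, each cgroup gets cpu time proportional to its shares.
TOTAL=$((JOB_SHARES + USER_SHARES))
USER_PCT=$((100 * USER_SHARES / TOTAL))   # integer percent for the user

echo "job shares:   $JOB_SHARES"
echo "user shares:  $USER_SHARES"
echo "user's share of cpu under contention: ${USER_PCT}%"
```

With these example numbers the interactive user would get roughly 91% of the cpu whenever their processes are runnable, while the batch job still consumes the whole machine when the user is idle, since shares only matter under contention.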

In general, though, if a batch job is cpu-bound, the default Linux scheduler does a very good job of deprioritizing it below any interactive work. It is surprisingly easy to have several cpu-bound jobs running on a Linux desktop without ever noticing any latency in interactive response.

-Greg