Re: [HTCondor-users] how to limit number of cores for condor to use per machine
- Date: Mon, 01 Feb 2016 09:12:48 -0500
- From: Michael V Pelletier <Michael.V.Pelletier@xxxxxxxxxxxx>
From: Alex <wasteoff@xxxxxxxxx>
Date: 01/31/2016 03:05 PM
> Hello! Is there a way to tell condor to not use more than N cores
> per machine (with more than N cores available) to execute particular
> jobs? The actual problem is that each of my jobs requires a lot of
> memory, so to get better throughput it probably makes sense to prevent
> condor from executing more than so and so many jobs on different cores
> of the same machine, because then all of the machine's memory will be
> exhausted, causing the jobs to use the hard drive, which is much,
> much slower.
Since memory is the constraining factor, you should use request_memory
in the submit description to specify the amount of physical memory,
in megabytes, that each job should use.
If your jobs each need 10 gigabytes of physical memory,
for example, you'd say "request_memory = 10000" and then if your
machine has 64GB of physical memory installed, only six of those jobs will
run on that machine regardless of how many CPU cores it has. It works just
like the request_cpus directive, only in megabytes instead of cores.
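A minimal sketch of such a submit description follows; the executable
name and job count are placeholders, not taken from the original post:

```
# Each job claims 10 GB of physical memory. On a 64 GB machine,
# at most six of these jobs will run concurrently, no matter how
# many CPU cores the machine has.
executable     = my_job        # placeholder name
arguments      = $(Process)
request_cpus   = 1
request_memory = 10000         # megabytes
queue 10
```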
This applies to disk space as well - if your job uses
the exec-node-local scratch space, then specifying request_disk in kilobytes
will reserve sufficient scratch space for the job and limit the number
of jobs based on that parameter regardless of CPU or memory utilization.
And of course, you can combine the "request_"
directives to accurately characterize each aspect of your job so that the
negotiator can match it to the best available machine.
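A combined example might look like the following; all names and values
here are illustrative assumptions, not figures from the original question:

```
# Characterize every resource dimension so the negotiator can
# pick the best-fitting machine.
executable     = analyze       # placeholder name
request_cpus   = 2             # cores
request_memory = 10000         # megabytes of physical memory
request_disk   = 20000000      # kilobytes of scratch space (~20 GB)
queue
```

Whichever request is exhausted first on a given machine caps the number
of jobs it will accept.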
The key to good resource allocation is accurate resource requests.