From: Alex <wasteoff@xxxxxxxxx>
Date: 01/31/2016 03:05 PM
> Hello! Is there a way to tell condor to not use more than N cores per
> machine (with more than N cores available) to execute a particular set of
> jobs? The actual problem is that each of my jobs requires a lot of memory,
> so to get better throughput it probably makes sense to prevent condor
> from executing more than so and so many jobs on different cores of the
> same machine because then all of the machine's memory will be exhausted,
> causing the jobs to swap to the hard drive, which is much slower.
Since the memory is the constraining factor, you should use request_memory in the submit description to specify the amount of physical memory that each job should use, in megabytes.
If your jobs each need 10 gigabytes of physical memory, for example, you'd say "request_memory = 10000" and then if your machine has 64GB of physical memory installed, only six of those jobs will run on that machine regardless of how many CPU cores it has. It works just like the request_cpus directive, only in megabytes instead of cores.
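For that example, the relevant line in the submit description would look something like this (the executable name and queue count are just placeholders):

```
# Hypothetical submit description: each job requests 10 GB of physical memory
executable     = my_job        # placeholder
request_memory = 10000         # megabytes; at most six such jobs fit on a 64 GB machine
queue 100
```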
This applies to disk space as well - if your job uses the exec-node-local scratch space, then specifying request_disk in kilobytes will reserve sufficient scratch space for the job and limit the number of jobs based on that parameter regardless of CPU or memory utilization.
And of course, you can combine the "request_" directives to accurately characterize each aspect of your job so that the negotiator can match it to the best available machine.
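A combined submit description might look like the sketch below; the values are purely illustrative, and note that request_memory is in megabytes while request_disk is in kilobytes:

```
# Illustrative values only - size these to your actual job
executable     = my_job        # placeholder
request_cpus   = 2
request_memory = 10000         # MB of physical memory per job
request_disk   = 20000000      # KB of exec-node scratch space (~20 GB)
queue
```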
The key to good resource allocation is accurate resource requests.
    -Michael Pelletier.
HTCondor-users mailing list