One approach is to add a requirements expression to your submit file that
takes memory into account, for example:
requirements = (Memory > 1200)
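In a machine's ClassAd, Memory is reported in megabytes, so the expression above only matches machines (or slots) advertising more than 1200 MB. A minimal submit file using it might look like the following sketch; the executable name is hypothetical:

```
# Sketch of a submit file; "myjob" is a placeholder executable
universe     = vanilla
executable   = myjob
requirements = (Memory > 1200)
queue
```

Note this only controls where a job will match; it does not by itself stop several such jobs from landing on the same machine if each slot still advertises enough memory.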
If you have an expectation of how much memory each job will use, another
approach would be to declare partitionable slots, and include expected
memory use in the job submissions.
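A rough sketch of that setup, assuming a Condor version with partitionable-slot support (7.4 or later); the specific values are illustrative. On the execute machine, configure one partitionable slot owning all resources:

```
# condor_config on the execute machine (illustrative values)
NUM_SLOTS                 = 1
NUM_SLOTS_TYPE_1          = 1
SLOT_TYPE_1               = cpus=100%, memory=100%
SLOT_TYPE_1_PARTITIONABLE = TRUE
```

Then each submit file states its expected memory use, and the slot is carved up accordingly:

```
# In the submit file: request 1200 MB per job
request_memory = 1200
```

With 8 GB total, only six 1200 MB jobs would fit, so the remaining jobs stay idle instead of swapping.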
-Erik
On Mon, 2010-09-13 at 11:12 -0400, Neal Becker wrote:
I have an 8-core machine with 8G of memory. Running 8 jobs, each using
1.2G, causes heavy swapping.
How can I prevent this? Should I reconfigure Condor, setting MAX_NUM_CPUS,
or is there some other way to limit the maximum number of running Condor
jobs when I submit these?
_______________________________________________
Condor-users mailing list
To unsubscribe, send a message to condor-users-request@xxxxxxxxxxx with a
subject: Unsubscribe
You can also unsubscribe by visiting
https://lists.cs.wisc.edu/mailman/listinfo/condor-users
The archives can be found at:
https://lists.cs.wisc.edu/archive/condor-users/