
[HTCondor-users] Are evicted jobs' memory requirements automatically adjusted?



When a job gets suspended and evicted, are its memory requirements adjusted to the actual values measured during its execution?

For example:
1. A job with "request_memory = 500" is submitted.
2. The job is matched to a node and starts running.
3. While running, it allocates 2000 MB of memory.
4. The job gets suspended and evicted (the reason does not really matter here).
5a. The job is rescheduled on another node with a memory requirement of 500 MB (as originally requested by the user).
5b. The job is rescheduled on another node with a memory requirement of 2000 MB (the maximum measured during the previous execution).

So which one is it, 5a or 5b?

If it is 5a, how can I achieve 5b (raising the memory requirement to the maximum memory usage measured during the previous, failed execution)?
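
For context, what I have in mind is roughly the following submit file snippet (an untested sketch on my part; MemoryUsage should be the job ClassAd attribute that HTCondor updates with the peak memory observed so far, in MB, and the 500 fallback is just my original request):

    # Untested sketch: ask for the original 500 MB on the first match;
    # after an eviction, ask for the peak usage measured so far instead.
    request_memory = ifThenElse(MemoryUsage =!= undefined, MemoryUsage, 500)

(Possibly with some headroom factor applied to MemoryUsage.) But I don't know whether request_memory is actually re-evaluated this way on a rematch, or whether there is a more idiomatic mechanism for this.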

This is especially important when using the policy described here: https://htcondor-wiki.cs.wisc.edu/index.cgi/wiki?p=HowToLimitMemoryUsage
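
As I read that page (paraphrasing from memory, the exact macros are on the page), the gist of the policy is a startd configuration along these lines:

    # Evict (rather than suspend) jobs whose measured memory usage
    # exceeds what they requested.
    MEMORY_EXCEEDED = MemoryUsage =!= undefined && MemoryUsage > RequestMemory
    PREEMPT = ($(PREEMPT)) || ($(MEMORY_EXCEEDED))
    WANT_SUSPEND = ($(WANT_SUSPEND)) && ($(MEMORY_EXCEEDED)) =!= TRUE

With that policy in place, a job that outgrows its request gets evicted, so if the request is not bumped on the rematch (case 5a), the job will just be evicted again on the next node.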

Thanks in advance!

Best,
Chris