I used advice from a colleague (Andrea Sartirana <sartiran@xxxxxxxxxxxx>).
The idea is to use the USER_JOB_WRAPPER variable.
In your Condor config file on the execute nodes (for example
/etc/condor/config.d/<your config file>.conf), add:
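The config snippet itself seems to have been dropped from the message; given the variable named above and the script path used below, it would presumably be:

```
# Assumed config fragment: point HTCondor at the wrapper script
USER_JOB_WRAPPER = /usr/local/user_job_wrapper.sh
```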
In my case, /usr/local/user_job_wrapper.sh contains:
ulimit -Ss 16000000
ulimit -Hs 16000000
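For completeness, a minimal sketch of what the whole wrapper can look like (path as above; the exec line is my addition, per the HTCondor manual a USER_JOB_WRAPPER receives the job and its arguments in "$@" and must exec the job so it inherits the new limits):

```shell
#!/bin/sh
# Lower the soft limit first, then the hard limit (see the N.B. below)
ulimit -Ss 16000000
ulimit -Hs 16000000
# Replace the wrapper with the actual job so it inherits the limits
exec "$@"
```

The wrapper must be executable by the user the job runs as.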
N.B.: When I used only 'ulimit -Hs 16000000', I got the error
message "limit: stack size: cannot modify limit: Invalid
argument", because by default on the exec nodes the soft and hard
limits were both set to 'unlimited'. Because the hard limit can't
be lowered while the soft limit remains unlimited, I had to
lower the soft limit before the hard one.
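The ordering constraint is easy to reproduce in any shell session (the 4096 KB value here is an arbitrary small limit for illustration, run in a subshell so the current shell's limits are untouched):

```shell
# Lowering the hard stack limit typically fails while the soft limit
# is still higher (same "Invalid argument" error as above):
( ulimit -Hs 4096 )
# Soft first, then hard, succeeds:
( ulimit -Ss 4096 && ulimit -Hs 4096 && ulimit -Hs )   # prints 4096
```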
There may be better solutions, but the above seems to work.
Hope this will help,
On 04/10/2018 13:54, Sean Crosby wrote:
As I'm sure most of you are aware, there is a security
bug with the RHEL kernels (CVE-2018-14634) which
needs to be patched.
As there is no new kernel for RHEL 6 yet, the mitigation
is to reduce the stack size ulimit (ulimit -Hs 16000000)
I have tried adding the stack size ulimit to profile.d on
the worker node, but jobs run via HTCondor are not picking
this value up.
Does anyone have an easy way to ensure jobs (and their
child processes) pick up the new stack size hard limit?
Jobs are being submitted via ARC-CE, if that helps.
System Administrator | HPC | Research Computing | CoEPP | School of
HTCondor-users mailing list