
[HTCondor-users] CGroup in HTCondor



Dear ALL,

I'm adding cgroup-based configuration to our HTCondor cluster.

If I understand correctly, "CGROUP_MEMORY_LIMIT_POLICY = soft" allows a job to use more of the available memory left on the machine (this is from the condor manual).

For my test, I just added the line "CGROUP_MEMORY_LIMIT_POLICY = soft" to the STARTD's configuration and submitted a single job through an otherwise empty SCHEDD.
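For reference, this is roughly what the relevant part of the STARTD configuration looks like (a minimal sketch; the only line I actually added is the policy one, and BASE_CGROUP is shown with what I believe is the packaged default):

    # cgroup tracking for jobs happens under this base cgroup
    # (htcondor is, I believe, the packaged default; shown only for context)
    BASE_CGROUP = htcondor

    # soft policy: the slot's provisioned memory becomes a soft cgroup limit,
    # so the job may use more when free memory is available on the machine
    CGROUP_MEMORY_LIMIT_POLICY = soft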

The job was put on hold with this reason:
" Job has gone over memory limit of 3566 megabytes. Peak usage: 4446 megabytes. " 

What's weird is that even a single job running alone still hits the cgroup memory limit.

Did I get the configuration wrong, or did I miss something?

Cheers,
Xiaowei

P.S.:
Each slot on my machine has 3566 MB of physical memory and 4446 MB of virtual memory;
The condor version is 8.8.4;
The installed rpms are:
condor-8.8.4-1.el7.x86_64
condor-classads-8.8.4-1.el7.x86_64
condor-external-libs-8.8.4-1.el7.x86_64
python2-condor-8.8.4-1.el7.x86_64
condor-kbdd-8.8.4-1.el7.x86_64
condor-procd-8.8.4-1.el7.x86_64


NAME: Jiang Xiaowei
MAIL: jiangxw@xxxxxxxxxxxxxxx
TEL: 010 8823 6024
DEPARTMENT: Computing Center of IHEP