Thanks Todd. After your comment I found the following link with details:
https://stackoverflow.com/questions/56650579/why-should-i-close-all-file-descriptors-after-calling-fork-and-prior-to-callin

I was trying to understand further how HTCondor uses FDs by submitting a batch of 3k jobs. I reduced the limit of open files to 100 (soft) and 2k (hard). I thought I would not be able to run more than 2k jobs, but I did see all 3k jobs running. The system-wide number of file handles increased by approx. 60k and dropped back to ~21k after removing the jobs:

# cat /proc/sys/fs/file-nr
21568 0 6573632
# cat /proc/sys/fs/file-nr
81472 0 6573632

Enabling FD logging doesn't show me too many FDs used by condor:

SCHEDD_DEBUG = D_FDS
SHADOW_DEBUG = D_FDS
SHARED_PORT_DEBUG = D_FDS

Basically I am trying to understand: where does condor use FDs? That would help me answer which limits condor can hit if we don't bump the descriptor values.

Thanks & Regards,
Vikrant Aggarwal

On Mon, Mar 8, 2021 at 10:22 PM Todd L Miller <tlmiller@xxxxxxxxxxx> wrote:
> Finally able to get the parameters due to which it was happening but didn't
> understand why it's happening.
        IIRC, HTCondor closes (almost?) all FDs after fork()ing* but
before exec()ing the shadow. There was not, until relatively recently, a
way to close all the FDs associated with a process; you had to make a
system call for each FD. When you have to close 102,400 FDs, that's a lot
of system calls, and it takes a while.
*: On Linux, HTCondor actually calls clone().
HTCondor-users mailing list