
[HTCondor-users] MaxVacateTime and KILLING_TIMEOUT seemingly not honored


I've been having a long-standing issue with our Condor cluster that I have not been able to crack, primarily that jobs are not issued SIGKILL after having been allocated the time specified in MaxVacateTime.

Some background info: certain jobs need to run in response to specific events. To satisfy this, we have rank preemption set up for these jobs, which are submitted under a specific user so that they start ASAP. I'm not 100% knowledgeable on the code being run, but the general idea is that these jobs run until removed by other means (i.e. they will never exit of their own accord). Normally this has been done by issuing condor_rm once the work they are doing is deemed complete.

More recently, whether due to changes in the host machines, the Condor configuration, or the code itself, the jobs never get removed via condor_rm, and have to be killed locally on the execute host by sending SIGKILL to both the starter and the child process.

The child process does not properly handle SIGTERM, and for reasons beyond my control, I cannot do much to change this on the code side. However, it seems strange to me that a SIGKILL is never sent after MaxVacateTime expires, which is set to MaxVacateTime = 10 * $(MINUTE). Not only that, but the startd's KILLING_TIMEOUT, which is at the default of 30 seconds, does not seem to be honored either. Watching with strace seems to confirm that the SIGKILLs are never issued in these cases.
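For reference, the relevant knobs on our side look roughly like this (a sketch of the configuration; the KILLING_TIMEOUT line is shown commented out because we are relying on its default):

```
# Give vacating jobs up to 10 minutes between the initial SIGTERM
# and the escalation to SIGKILL
MaxVacateTime = 10 * $(MINUTE)

# startd escalation timeout; left at the 30-second default
# KILLING_TIMEOUT = 30
```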

I've tested it with scripts like:

trap "echo 'do nothing'" SIGTERM
while :; do :; done

That script, however, does get killed correctly, so I'm not sure what the difference is. I've wondered whether rank expressions could prevent this from happening, but running the above script as the user with rank preemption still ultimately does the correct thing.
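For anyone reproducing this, here is a quick sanity check of the signal semantics themselves (nothing Condor-specific): a trap can swallow SIGTERM, the way the test script above does, but SIGKILL cannot be caught or ignored, which is exactly why the starter's escalation after MaxVacateTime should work regardless of what the job does.

```shell
# SIGTERM can be trapped and ignored; the process survives:
bash -c 'trap "" TERM; kill -TERM $$; echo survived TERM'
# → survived TERM

# SIGKILL cannot be ignored; the process dies before reaching echo,
# and the parent sees exit status 128+9=137:
bash -c 'kill -KILL $$; echo survived KILL' || echo "killed ($?)"
# → killed (137)
```

So whatever is going wrong, it should not be the job "blocking" SIGKILL; the kernel does not allow that. That is why I suspect the starter is simply never sending it.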

Any thoughts or ideas to test would be greatly appreciated!