
[HTCondor-users] Are flocking jobs considered remote jobs?


We ran into an issue today where our submit node ran out of storage space, which killed the schedd. It turns out that the culprits were a couple of job subdirectories under '/var/lib/condor/spool/', and it seems that these jobs were submitted as flocking jobs to another pool. The manual states in various places that the SPOOL directory holds both input and output files for remote jobs, and we wonder if that could be the case here. If so, a second question is whether flocking jobs can be configured so that their outputs are written directly to the user's directory from which they were submitted, or whether they always have to go through SPOOL.

Of course, one solution is to move SPOOL to a different location on our end that will not run out of space, and we may just end up doing that, but it would be good to know how it is supposed to work regardless.
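For reference, if relocating SPOOL turns out to be the answer, a minimal condor_config fragment might look like the following (the path is hypothetical; SPOOL is the standard knob, defaulting to $(LOCAL_DIR)/spool):

```
# Point SPOOL at a filesystem with ample space (hypothetical mount point)
SPOOL = /data/condor/spool
```

Note that a restart of the affected daemons (e.g. via condor_restart) is likely needed for a SPOOL change to take effect, and any existing spool contents would need to be moved to the new location while the schedd is down.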

Thanks in advance,



Jacek Kominek, PhD
University of Wisconsin-Madison
1552 University Avenue, Wisconsin Energy Institute 4154
Madison, WI 53726-4084, USA