
Re: [Condor-users] confusion around new spool in 7.5.5



Peter,

Prior to 7.5.5, Condor created a job's spool directory at launch if it didn't already exist, whether or not the job needed one.  Now it only creates job spool directories when needed.  This means that jobs that spool neither input files nor output files will not have a spool directory at all.
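Put another way, the rule is roughly this (just a sketch with made-up names, not actual Condor code):

# Hypothetical illustration of the 7.5.5 behavior described above:
# a job only gets a spool directory when it actually spools files.
def needs_spool_dir(spools_input_files, spools_output_files):
    return spools_input_files or spools_output_files

# A job that spools nothing gets no directory under SPOOL at all.
print(needs_spool_dir(False, False))   # False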

Upon upgrade, I would expect any jobs that already have spool directories (i.e. any running job) to still have spool directories (but moved into the new location).  Do you think that is not the case in your situation?

--Dan

On 2/18/11 10:26 AM, Peter Doherty wrote:
I upgraded to v7.5.5 and there's one thing I'm scratching my head over.

I used to have a SPOOL directory filled with directories with names like:
cluster15093481.proc0.subproc0.tmp/

According to the changelog, I should now have directories of the form:
$(SPOOL)/<#>/<#>/cluster<#>.proc<#>.subproc<#>
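
For example, I went looking for one specific cluster with roughly the following (just a sketch; it doesn't assume anything about how the <#> levels are chosen, it only globs two directories deep under SPOOL):

import glob
import os
import subprocess

# Ask Condor where SPOOL actually points.
spool = subprocess.check_output(["condor_config_val", "SPOOL"]).decode().strip()

# Glob two levels deep rather than guessing how the <#> components are hashed.
cluster, proc, subproc = 15093481, 0, 0
pattern = os.path.join(spool, "*", "*",
                       "cluster%d.proc%d.subproc%d" % (cluster, proc, subproc))
print(glob.glob(pattern) or "no spool directory found")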


But the thing is, I don't have anything.
My SPOOL just has:
job_queue.log
local_univ_execute
spool_version

I've got a few thousand jobs in the queue right now.
Where are the spool files?  I'm sure I'm looking in the correct directory.  I've tried to find them, but I can't.  I see a lot of lock files in $(TMP_DIR).

I believe the constant I/O on all the spool files was one of the bottlenecks for our Schedd, so if that's really been improved, I'm eager to see the effect.  From reading the changelog, though, the only difference should have been subdirectories in the spool to keep from hitting ext3 limits.
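
For what it's worth, with the old flat layout you could gauge how close a queue gets to that ext3 limit (roughly 32,000 subdirectories per directory) with something like this rough sketch:

import os
import subprocess

spool = subprocess.check_output(["condor_config_val", "SPOOL"]).decode().strip()
# Count the old-style per-job directories sitting directly under SPOOL.
job_dirs = [e for e in os.listdir(spool) if e.startswith("cluster")]
print("%d job spool dirs (ext3 allows roughly 32000 subdirs per directory)" % len(job_dirs))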

Thanks,
Peter