
[Condor-users] managing large numbers of a job's output and error files



Hi,

I have a DAG with 10,000 jobs associated with it.  It is submitted to a local Condor pool where all disks are on NFS.

The jobs all send output and error to their own files, using a variable I set.

output=logs/worker-$(var2).out
error=logs/worker-$(var2).err
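
(For reference, $(var2) is set per node with VARS lines in the DAG file; the sketch below is simplified, with made-up node names, submit file name, and values:

JOB worker0001 worker.sub
VARS worker0001 var2="0001"
JOB worker0002 worker.sub
VARS worker0002 var2="0002"
...
)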

I can run the DAG and it works fine, except that I end up with 20,000 files in one directory.  I'd like to have something like this:

output=logs/$(dir1)/$(dir2)/worker-$(var2).out
error=logs/$(dir1)/$(dir2)/worker-$(var2).err
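
where $(dir1) and $(dir2) would also be per-node DAG variables, for example (values are just placeholders):

VARS worker0001 var2="0001" dir1="00" dir2="01"

so the 20,000 files get spread over many smaller directories.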

but when I tried a simple version of it, 

output=logs/pre/worker-$(var2).out

I got an error that says:

statfs(/home/srp/orca_scratch/srp12may30_46/logs/pre/worker-pre.log) failed: 2/No such file or directory
05/31/12 12:09:35 WARNING: can't determine whether log file home/srp/orca_scratch/srp12may30_46/logs/pre/worker-pre.log is on NFS.
05/31/12 12:09:35 DAGMan::Job:8001:ERROR: Unable to monitor log file for node A|ReadMultipleUserLogs:9004:Error getting file ID in monitorLogFile()|ReadMultipleUserLogs:9004:Error initializing log file /home/srp/orca_scratch/srp12may30_46/logs/pre/worker-pre.log|MultiLogFiles:9001:Error (2, No such file or directory) opening file  /home/srp/orca_scratch/srp12may30_46/logs/pre/worker-pre.log for creation or truncation
05/31/12 12:09:35 ERROR "Fatal log file monitoring error!
" at line 858 in file /slots/05/dir_34706/userdir/src/condor_dagman/job.cpp

I already have to create the "logs" directory before running the DAG.  Is there a way around having to create "logs" and all the subdirectories below it before submitting?  In other words, is there a way to make Condor create the directory structure the .out and .err files are written to?
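
(For now I could pre-create the whole tree with a small script before calling condor_submit_dag; a minimal sketch, assuming a two-level bucket naming scheme for dir1/dir2, which is only a placeholder:

#!/usr/bin/env python
# Sketch: pre-create logs/<dir1>/<dir2> before condor_submit_dag.
# The two-digit bucket scheme is only a placeholder; it would mirror
# whatever dir1/dir2 values the DAG's VARS lines actually assign.
import os

for d1 in range(10):          # 10 top-level buckets
    for d2 in range(10):      # 10 sub-buckets each -> 100 directories,
                              # roughly 200 files per directory
        path = os.path.join("logs", "%02d" % d1, "%02d" % d2)
        if not os.path.isdir(path):
            os.makedirs(path)

but I'd rather not have to do that if Condor can create the directories itself.)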

Thanks for any info you can provide,

Steve