Re: [Condor-users] condor not queueing enough jobs
- Date: Wed, 8 Aug 2007 16:03:55 +0100
- From: "Kewley, J \(John\)" <j.kewley@xxxxxxxx>
- Subject: Re: [Condor-users] condor not queueing enough jobs
It is common practice to use one log file per job cluster, but separate
output and error files per process (for instance, see the examples in http://www.cs.wisc.edu/condor/manual/v6.9/condor_submit.html ).
If memory serves, this practice is enforced; I can't remember the exact reason, but it is probably along these lines:
* Some events must be written to the log file before the Process number is known.
* Some log entries apply to the whole job cluster rather than to an
individual Process.
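As a rough sketch of that convention (file names here are illustrative, adapted from your submit file below), the submit description would name a single cluster-wide log but per-process output, error, and input files:

```
# One log for the whole cluster; out/err/in vary per process
Executable = /lusr/bin/python
Arguments  = aiharness.py
Universe   = vanilla
Log    = .condor2/log_w.txt
Output = .condor2/out_w_$(Process).txt
Error  = .condor2/error_w_$(Process).txt
Input  = .condor2/in_w_$(Process).txt
Queue 8
```

Note also that the Queue command takes a count with no equals sign (`Queue 8`); `Queue = 8` is not the documented syntax, and that may be why only one job was submitted.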
> -----Original Message-----
> From: condor-users-bounces@xxxxxxxxxxx
> [mailto:condor-users-bounces@xxxxxxxxxxx]On Behalf Of Thomas Nelson
> Sent: Wednesday, August 08, 2007 3:44 PM
> To: condor-users@xxxxxxxxxxx
> Subject: [Condor-users] condor not queueing enough jobs
> I have the following file:
> +Group = "UNDER"
> +Project = "AI_ROBOTICS"
> +ProjectDescription = "gnugo on condor"
> Error = .condor2/error_w_$(Process).txt
> Input = .condor2/in_w_$(Process).txt
> Output = .condor2/out_w_$(Process).txt
> Log = .condor2/log_w_$(Process).txt
> Executable = /lusr/bin/python
> Universe = vanilla
> Arguments = aiharness.py
> notification = Error
> Queue = 8
> My problem is that only one job is submitted. Here's the output:
> Submitting job(s)
> WARNING: Log file
> is on NFS.
> This could cause log file corruption and is _not_ recommended.
> Logging submit event(s).
> 1 job(s) submitted to cluster 959.
> It creates out_w_0.txt, but none of the others. Can anyone help me
> understand what's going wrong?