
Re: [HTCondor-users] Determine when all jobs in a cluster have finished?

DAGMan is certainly an awesome tool, but if you have a Java master process that is creating these submit files, I think it's overkill. I could be missing something, but it seems like your master process should work like this:

In Java code:
1) create initial submit file and submit it (already doing this)
2) make a system call to condor_wait to wait for the initial job to finish
3) Have Java check which data files came back, create submit files for post-processing, and submit them
4) Use condor_wait to wait for those jobs (if necessary)
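A minimal sketch of steps 2 and 4 in Java, assuming condor_wait is on the PATH and that the submit file names a user log (the log file path and timeout below are hypothetical, not from the original post):

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class CondorWaitExample {
    // Build the condor_wait command line for a given user log.
    // condor_wait watches the log named in the submit file's "log" line
    // and exits once every job logged there has terminated.
    static List<String> buildWaitCommand(String userLog, int timeoutSeconds) {
        List<String> cmd = new ArrayList<>();
        cmd.add("condor_wait");
        cmd.add("-wait");                            // give up after this many seconds
        cmd.add(Integer.toString(timeoutSeconds));
        cmd.add(userLog);
        return cmd;
    }

    // Block until the jobs writing to userLog finish; returns condor_wait's
    // exit code (0 = all jobs done, non-zero = timeout or error).
    static int waitForCluster(String userLog, int timeoutSeconds)
            throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(buildWaitCommand(userLog, timeoutSeconds));
        pb.inheritIO();                              // show condor_wait's output
        return pb.start().waitFor();
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical log path for the initial cluster; after this returns,
        // the master would inspect the output files and submit post-processing.
        int rc = waitForCluster("/workspace/jobs/initial.log", 3600);
        System.out.println("condor_wait exit code: " + rc);
    }
}
```

After the call returns with exit code 0, the master process can scan the work directory, write the post-processing submit files, run condor_submit the same way, and repeat the wait.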

I just think that if you are using Java to create these scripts automatically and fire all this stuff off, you might as well use it for the "dynamic" aspect as well. I do this sort of thing in C++ quite a bit, and while I know DAGMan and have used it, when I don't have crazy job dependencies it's much easier to just do a condor_wait as a system call and then move on to setting up a post-processing script. Please let me know if I'm missing something, or if you are familiar with condor_wait and it's really just not what you want, but I think this approach should be considered.

On Wed, Jan 30, 2013 at 11:34 AM, Dimitri Maziuk <dmaziuk@xxxxxxxxxxxxx> wrote:
On 01/30/2013 12:27 PM, Dimitri Maziuk wrote:


Vars Workers$NUM WORKDIR="/workspace/jobs/$NUM" \

Dimitri Maziuk
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu
