
Re: [HTCondor-users] On-the-fly DAGs?



Sounds like a good use for condor_wait.

When you give condor_wait a job's log file (e.g. log = htcondor-$(Cluster).log), it watches the file and exits only when all the jobs in that log have completed.

So what you'll want to do is write a little script which runs condor_wait on the pending job cluster's log file and then submits your next jobs after condor_wait exits.

You could submit that script as a "local" universe job so that the condor_wait sitting around doing nothing wouldn't tie up a CPU slot.
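A submit file for the wrapper itself might look like this (file names are placeholders, not from the original message):

```shell
# Hypothetical submit file: run the wait-and-submit wrapper in the
# local universe, on the submit machine, so it doesn't occupy an
# execute slot while it idles in condor_wait.
universe   = local
executable = wait_and_submit.sh
log        = wrapper.log
output     = wrapper.out
error      = wrapper.err
queue
```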

	-Michael Pelletier.

-----Original Message-----
From: HTCondor-users [mailto:htcondor-users-bounces@xxxxxxxxxxx] On Behalf Of Vaurynovich, Siarhei
Sent: Tuesday, May 8, 2018 9:35 AM
To: HTCondor-Users Mail List <htcondor-users@xxxxxxxxxxx>
Subject: [External] [HTCondor-users] On-the-fly DAGs?


Hello,

Could you please let me know if it is possible to create on-the-fly DAGs in HTCondor? 

Here is an example: I work on some code, and when it is ready I submit a number of jobs to job cluster 1000. After that I work on the next processing step and finish the needed code before the jobs in cluster 1000 have completed. I want to be able to say: start this next set of jobs when and if all the jobs in cluster 1000 complete successfully, i.e. I want to create an "on-the-fly" DAG. The goal is to have some of the computing done on early steps of the workflow even before the whole workflow code is ready, and to keep adding to the workflow on the fly.

Thank you,
Siarhei.
