
[Condor-users] DAG destructor job


Does DAGMan maintain the DAG's state in memory, or does it re-read the 
.dag file after each subjob's execution?

The scenario I would like to implement is as follows:
in the normal case, subjobs A, B, C, D are executed in sequence.
However, if either A or B fails, the DAG should skip ahead to D (which is a
cleanup job - it should run regardless of the DAG's success or failure).
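Concretely, the .dag I have in mind looks roughly like this (the submit file and script names are just illustrative):

```
# my.dag - sketch of the intended structure
JOB A a.sub
JOB B b.sub
JOB C c.sub
JOB D d.sub        # cleanup node

PARENT A CHILD B
PARENT B CHILD C
PARENT C CHILD D

# POST scripts on A and B would decide whether the rest of the
# chain should be skipped; $RETURN expands to the job's exit code.
SCRIPT POST A post_check.sh $RETURN
SCRIPT POST B post_check.sh $RETURN
```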

One possibility would be to update the .dag file and mark the remaining jobs 
as done in the POST scripts of A and B. Do you think that would work? If not, 
do you have any hints on how best to implement a "DAG exception handler (or 
destructor) job"?
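If rewriting the .dag on the fly does not work, an alternative I could imagine is a POST script that records the failure in a marker file and always returns success, so that DAGMan keeps walking the DAG; the wrappers for the downstream jobs (B, C) would then check the marker and exit immediately, so that effectively only cleanup job D does real work. A minimal sketch of the marker logic (file name and function are made up):

```shell
#!/bin/sh
# Hypothetical POST-script helper: record a node failure without
# failing the node itself, so DAGMan continues down the DAG.

rm -f dag_failed.marker

# mark_failure EXITCODE: create the marker file when the node's
# job exited non-zero; always return 0 so DAGMan treats the node
# as successful and proceeds to its children.
mark_failure() {
    if [ "$1" -ne 0 ]; then
        touch dag_failed.marker
    fi
    return 0
}

# Simulate a failed job (DAGMan would pass $RETURN as the argument).
mark_failure 1
```

In the .dag file this would be wired up as `SCRIPT POST A post_check.sh $RETURN`, with the wrappers for B and C starting with a check like `[ -f dag_failed.marker ] && exit 0`.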

Jan Ploski

Dipl.-Inform. (FH) Jan Ploski
FuE Bereich Energie | R&D Division Energy
Escherweg 2  - 26121 Oldenburg - Germany
Phone/Fax: +49 441 9722 - 184 / 202
E-Mail: Jan.Ploski@xxxxxxxx
URL: http://www.offis.de