Re: [Condor-users] set the state of a job to completed
- Date: Tue, 16 Aug 2005 00:08:19 +0300
- From: Mark Silberstein <marks@xxxxxxxxxxxxxxxxxxxxxxx>
- Subject: Re: [Condor-users] set the state of a job to completed
What about using NOOP jobs? I don't really know whether it will work
dynamically, i.e. after the job is already in the queue, but it is worth
a try. From the manual:
Jobs can now be submitted as ``noop'' jobs. Jobs submitted with noop_job
= true will not be executed by Condor, and instead will immediately have
a terminate event written to the job log file and removed from the
queue. This is useful for DAGs where the pre-script determines the job
should not run.
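For example, here is a minimal submit description file sketch (the
executable and log names are just placeholders for illustration):

```
# Hypothetical noop submit file: Condor writes a terminate event to the
# log and removes the job from the queue without ever running it.
universe   = vanilla
executable = /bin/true
log        = taskA.log
noop_job   = true
queue
```

Since noop_job is read at submit time, I'm not sure whether it can be
set with condor_qedit on a job that is already queued -- you'd have to
test that.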
Let me know if it helps,
On Fri, 2005-08-12 at 18:09 +0200, Horvatth Szabolcs wrote:
> >It sounds like what you really want is to modify the DAG as it
> >executes (to remove these queued nodes). DAGMan doesn't support this.
> Sort of, since I'd also like to have dagman submit all child tasks of the completed job.
> My problem in general is the following:
> - Task A generates a lot of data and some of it is used by task B, which is a DAG child of task A.
> - Task A sometimes does not exit properly, although the computation is done.
> - If I restart task A by holding and releasing it, it does the same long computation again
> (and there is a chance that it again does not terminate properly).
> - I can't submit the child jobs of task A without having it completed.
> >If you want to get *really* kludgey, you could use condor_qedit to
> >change the job universe to scheduler and the executable to /bin/true.
> >I can't think of anything better.
> Hmm, this sounds pretty nasty. But I'll give it a try.
> Thanks Jaime.