Re: [HTCondor-users] DAGman jobs failing custom requirements
- Date: Thu, 24 Jan 2013 16:30:24 -0600 (CST)
- From: "R. Kent Wenger" <wenger@xxxxxxxxxxx>
- Subject: Re: [HTCondor-users] DAGman jobs failing custom requirements
On Fri, 25 Jan 2013, Smithies, Russell wrote:
> Yes, it's in APPEND_REQUIREMENTS in condor_config.
> I'm going to try moving it to APPEND_REQ_VANILLA and APPEND_REQ_STANDARD to see if that helps; hopefully it won't affect jobs running under the scheduler universe then.
> I saw a post from two years ago that said, "In the scheduler universe there is no way for any daemon to evaluate those requirements anyway, as far as I know, because there is no matchmaking that goes on."
> But according to the SchedLog, Requirements are failing, so they must be getting checked.
DAGMan itself runs in the scheduler universe. But the individual node
jobs run in whatever universe is specified in their submit files -- in your
case, the vanilla universe. So matchmaking *is* happening for the individual
node jobs.
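For reference, here is a minimal sketch of that split: the DAG file names the node jobs, and each node's submit file picks its own universe. The filenames and executable are illustrative, not taken from the original report.

```
# demo.dag -- DAGMan itself is submitted as a scheduler-universe job
JOB A node.sub

# node.sub -- the node job runs in whatever universe this file specifies
universe   = vanilla
executable = /bin/hostname
output     = node.out
error      = node.err
log        = node.log
queue
```

Because node A is a vanilla-universe job, it goes through normal matchmaking against the machine ClassAds, so any appended Requirements expression is evaluated for it even though the enclosing DAGMan job is not matched against any machine.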
> demo.dag.lib.out and demo.dag.lib.err are zero size, as the job never gets run.
Ah, I think I was misunderstanding your problem. It sounds like DAGMan
*itself* doesn't run, right? (I thought you were saying that DAGMan was
running, but the node jobs were not.)
If that's correct, I think that using APPEND_REQ_VANILLA, etc., instead
of APPEND_REQUIREMENTS should fix things. And I guess we should think
about whether APPEND_REQUIREMENTS shouldn't apply to scheduler universe
jobs at all.
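A minimal condor_config sketch of the suggested change, assuming the custom clause looks something like the hypothetical `TARGET.HasCustomAttr` expression shown here (the actual requirement from the original report is not given):

```
# Before: appended to every job's Requirements, including the
# scheduler-universe DAGMan job itself, which never gets matched
# and so can never satisfy the clause.
#APPEND_REQUIREMENTS = (TARGET.HasCustomAttr =?= True)

# After: appended only to vanilla- and standard-universe jobs,
# so the DAGMan job's Requirements are left alone.
APPEND_REQ_VANILLA  = (TARGET.HasCustomAttr =?= True)
APPEND_REQ_STANDARD = (TARGET.HasCustomAttr =?= True)
```

With this split, the DAGMan scheduler-universe job starts normally, while the vanilla-universe node jobs still pick up the custom requirement during matchmaking.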