
Re: [HTCondor-users] Fwd: Re: DAG error: "BAD EVENT: job (...) executing, total end count != 0 (1)"




Hi Mark,

Indeed, I reran DAGs for Giuseppe -- always the same one (same tasks, same input parameters), only with different names -- until one logged a 'BAD EVENT'. Please find attached the dag.nodes.log and dag.dagman.out files for that job (285314.0), which should be covered by the files Giuseppe has already sent you. Let me know otherwise...

Cheers,

Nicolas

On 15/02/2019 at 17:03, Mark Coatsworth wrote:
Hi Giuseppe, thanks for sending all this information.

Unfortunately the ShadowLog and SchedLog files you sent do not have any information about Nicolas' job (281392.0.0). This job ran on Feb 7, whereas the logs you sent only have information from Feb 14, so the information we need has already gone stale.

It's possible you'll find some information in ShadowLog.old and SchedLog.old (look for the string "281392"); if so, please send those to me.
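A minimal sketch of that check on the submit host; the log directory is taken from the LOG configuration variable rather than a hard-coded path:

    cd "$(condor_config_val LOG)"
    grep -n 281392 SchedLog.old ShadowLog.old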

If there's no information in there, I'll need you two to coordinate: Nicolas will need to run the jobs again until they produce the same error, and then Giuseppe will need to send me the log files right away, while they still contain the information we need.

Sorry for the inconvenience. There's just no way for us to diagnose this problem without the log output.

Mark


On Thu, Feb 14, 2019 at 9:17 AM Giuseppe Di Biase <giuseppe.dibiase@xxxxxxxxx> wrote:

    Hi Mark,

    I prefer to send you logs rather than creating a Linux account for now.
    Hopefully the logs will reveal some configuration error.

    Please find attached the 4 files you requested.


    Thanks

    Giuseppe
     >
     >
     >>
     >>
     >> -------- Forwarded Message --------
     >> Subject: Re: [HTCondor-users] DAG error: "BAD EVENT: job (...)
     >> executing, total end count != 0 (1)"
     >> Date: Wed, 13 Feb 2019 20:15:24 +0000
     >> From: Mark Coatsworth <coatsworth@xxxxxxxxxxx>
     >> To: Nicolas Arnaud <narnaud@xxxxxxxxxxxx>
     >> Cc: HTCondor-Users Mail List <htcondor-users@xxxxxxxxxxx>
     >>
     >> Hi Nicolas, thanks for all this information!
     >>
     >> I looked through your log files, and it turns out this was not the
     >> problem I expected. The ULOG_EXECUTE event actually does appear twice
     >> in the log -- so it's not an issue with our log-reading code (which
     >> was the case with those bugs I mentioned). For some reason it looks
     >> like your schedd is actually executing the same job twice.
     >>
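As a quick check, the duplicate event can be counted directly in the DAG's job event log. A minimal sketch, assuming the default <dag file>.nodes.log naming (the exact file name below is a guess):

    # each "001" record is an "executing" event; a job that ran once has exactly one
    grep -c "^001 (281605.000.000)" test_20190208_narnaud.dag.nodes.log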
     >> We're going to need a few more things to help debug this. Could you
     >> please send me the following:
     >> * SchedLog
     >> * ShadowLog
     >> * Your job classad (you can retrieve this by running "condor_history
     >> -l 281392")
     >> * The output on your submit server from running "ps auxww | grep
     >> condor"
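A minimal sketch of gathering these four items on the submit host; the destination directory and output file names are assumptions, and the daemon log location is taken from the LOG configuration variable:

    mkdir -p /tmp/condor-debug
    # copy the schedd and shadow daemon logs
    cp "$(condor_config_val LOG)"/SchedLog "$(condor_config_val LOG)"/ShadowLog /tmp/condor-debug/
    # dump the full classad of the affected job and the condor process list
    condor_history -l 281392 > /tmp/condor-debug/job_281392.classad
    ps auxww | grep condor > /tmp/condor-debug/condor_processes.txt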
     >>
     >> Also, is there any way I can get a user account to log in to your
     >> submit server? We discussed this at our team meeting this morning and
     >> everybody thinks the problem is related to your environment. So it
     >> might be easier for us to debug if we can get access, rather than
     >> keep asking you to send us things over email.
     >>
     >> Mark
     >>
     >>
     >>
     >> On Tue, Feb 12, 2019 at 3:08 PM Nicolas Arnaud <narnaud@xxxxxxxxxxxx> wrote:
     >>
     >>
     >>     Hi Mark,
     >>      > I've been looking into this.
     >>
     >>     Thanks!
     >>
     >>      > (...)
     >>      > Are you running on Windows or Linux? It seems that all previous
     >>      > occurrences of this problem happened on Windows.
     >>
     >>     I'm running on Linux. Some information:
     >>
     >>      > condor_version
     >>      > $CondorVersion: 8.6.13 Oct 30 2018 BuildID: 453497 $
     >>      > $CondorPlatform: x86_64_RedHat7 $
     >>      > echo $UNAME
     >>      > Linux-x86_64-CL7
     >>
     >>      > These bugs were never resolved, although it seems like Kent spent
     >>      > some time on them and determined the problem was most likely in the
     >>      > log-reading code (so at the user level, not the farm). However it's
     >>      > hard to tell without seeing what events are actually showing up in
     >>      > the log. I'd like to try and reproduce this locally -- could you
     >>      > send your a) .nodes.log file, b) .dagman.out file, c) full .dag
     >>      > file? These should help me figure out where the bug is happening.
     >>
     >>     Please find attached two sets of these three files:
     >>
     >>      * those tagged "20190207_narnaud_2" correspond to a "BAD EVENT" case
     >>        followed by a dag abort (DAGMAN_ALLOW_EVENTS = 114, the default value)
     >>
     >>      * those tagged "20190212_narnaud_7" correspond to a "BAD EVENT" case,
     >>        mitigated by DAGMAN_ALLOW_EVENTS = 5: the dag goes on until completion.
     >>
     >>     As the dag file relies on independent sub files, I am also sending you
     >>     the template sub file we're using to generate all the individual task
     >>     sub files.
     >>
     >>      > For a short-term workaround, you could try adjusting the value of
     >>      > DAGMAN_ALLOW_EVENTS to 5 like you suggested. It's true this could
     >>      > affect the semantics, but I think the worst case is that DAGMan could
     >>      > get stuck in a logical loop. If you're able to keep an eye on its
     >>      > progress and manually abort if necessary, I think this should work.
     >>
     >>     See above: indeed setting DAGMAN_ALLOW_EVENTS = 5 allows the dag to go on.
     >>
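For reference, the workaround need not be applied pool-wide: DAGMan also reads a per-DAG configuration file. A minimal sketch, where the file name dagman_allow_events.config is an assumption:

    # dagman_allow_events.config -- per-DAG configuration
    DAGMAN_ALLOW_EVENTS = 5

    # at the top of the .dag file, point DAGMan at it
    CONFIG dagman_allow_events.config

The same file can also be passed at submit time with "condor_submit_dag -config dagman_allow_events.config".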
     >>     The point is that since I've noticed this issue I am always running
     >>     the "same" dag: the only thing that changes is its tag -- basically
     >>     driving the output directory and used for many filenames. In about
     >>     40% of the cases I get a "BAD EVENT" error, but each time it affects
     >>     a different task and so happens at a different point of the dag
     >>     processing, as the tasks have very different durations. In the other
     >>     ~60% of the cases the dag completes fine without any "BAD EVENT".
     >>
     >>     Let me know if you need more information or if anything is unclear.
     >>
     >>     Cheers,
     >>
     >>     Nicolas
     >>
     >>      > Mark
     >>      >
     >>      >
     >>      > On Tue, Feb 12, 2019 at 2:42 AM Nicolas Arnaud <narnaud@xxxxxxxxxxxx> wrote:
     >>      >
     >>      >
     >>      >     Hello,
     >>      >
     >>      >     I'm using a Condor farm to run dags containing a dozen independent
     >>      >     tasks, each task being made of a few processes running sequentially
     >>      >     following the parent/child logic. Lately I have encountered errors
     >>      >     like the one below:
     >>      >
     >>      >      > (...)
     >>      >      > 02/08/19 00:30:10 Event: ULOG_IMAGE_SIZE for HTCondor Node
     >>      >        test_20190208_narnaud_virgo_status (281605.0.0) {02/08/19 00:30:06}
     >>      >      > 02/08/19 00:30:10 Event: ULOG_JOB_TERMINATED for HTCondor Node
     >>      >        test_20190208_narnaud_virgo_status (281605.0.0) {02/08/19 00:30:06}
     >>      >      > 02/08/19 00:30:10 Number of idle job procs: 0
     >>      >      > 02/08/19 00:30:10 Node test_20190208_narnaud_virgo_status job
     >>      >        proc (281605.0.0) completed successfully.
     >>      >      > 02/08/19 00:30:10 Node test_20190208_narnaud_virgo_status job
     >>      >        completed
     >>      >      > 02/08/19 00:30:10 Event: ULOG_EXECUTE for HTCondor Node
     >>      >        test_20190208_narnaud_virgo_status (281605.0.0) {02/08/19 00:30:07}
     >>      >      > 02/08/19 00:30:10 BAD EVENT: job (281605.0.0) executing, total
     >>      >        end count != 0 (1)
     >>      >      > 02/08/19 00:30:10 ERROR: aborting DAG because of bad event (BAD
     >>      >        EVENT: job (281605.0.0) executing, total end count != 0 (1))
     >>      >      > (...)
     >>      >      > 02/08/19 00:30:10 ProcessLogEvents() returned false
     >>      >      > 02/08/19 00:30:10 Aborting DAG...
     >>      >      > (...)
     >>      >
     >>      >     Condor correctly assesses one job as being successfully completed,
     >>      >     but it seems that it starts executing it again immediately. Then
     >>      >     there is a "BAD EVENT" error and the DAG aborts, killing all the
     >>      >     jobs that were running.
     >>      >
     >>      >     So far this problem seems to occur randomly: some dags complete
     >>      >     fine while, when the problem occurs, the job that suffers from it
     >>      >     is different each time. So are the machine and the slot on which
     >>      >     that particular job is running.
     >>      >
     >>      >     In the above example, the dag snippet is fairly simple
     >>      >
     >>      >      > (...)
     >>      >      > JOB test_20190208_narnaud_virgo_status virgo_status.sub
     >>      >      > VARS test_20190208_narnaud_virgo_status
     >>      >        initialdir="/data/procdata/web/dqr/test_20190208_narnaud/dag"
     >>      >      > RETRY test_20190208_narnaud_virgo_status 1
     >>      >      > (...)
     >>      >
     >>      >     and the sub file reads
     >>      >
     >>      >      > universe = vanilla
     >>      >      > executable =
     >>      >        /users/narnaud/Software/RRT/Virgo/VirgoDQR/trunk/scripts/virgo_status.py
     >>      >      > arguments = "--event_gps 1233176418.54321 --event_id
     >>      >        test_20190208_narnaud --data_stream /virgoData/ffl/raw.ffl
     >>      >        --output_dir /data/procdata/web/dqr/test_20190208_narnaud
     >>      >        --n_seconds_backward 10 --n_seconds_forward 10"
     >>      >      > priority = 10
     >>      >      > getenv = True
     >>      >      > error =
     >>      >        /data/procdata/web/dqr/test_20190208_narnaud/virgo_status/logs/$(cluster)-$(process)-$$(Name).err
     >>      >      > output =
     >>      >        /data/procdata/web/dqr/test_20190208_narnaud/virgo_status/logs/$(cluster)-$(process)-$$(Name).out
     >>      >      > notification = never
     >>      >      > +Experiment = "DetChar"
     >>      >      > +AccountingGroup = "virgo.prod.o3.detchar.transient.dqr"
     >>      >      > queue 1
     >>      >
     >>      >     => Would you know what could cause this error? And whether this is
     >>      >     at my level (user) or at the level of the farm?
     >>      >
     >>      >     => And, until the problem is fixed, would there be a way to convince
     >>      >     the dag to continue instead of aborting? Possibly by modifying the
     >>      >     default value of the macro
     >>      >
     >>      >      > DAGMAN_ALLOW_EVENTS = 114
     >>      >
     >>      >     ? But changing this value to 5 [!?] is said to "break the semantics
     >>      >     of the DAG" => I'm not sure this is the right way to proceed.
     >>      >
     >>      >     Thanks in advance for your help,
     >>      >
     >>      >     Nicolas
     >>      >



--
Mark Coatsworth
Systems Programmer
Center for High Throughput Computing
Department of Computer Sciences
University of Wisconsin-Madison

--

==========================================
= Nicolas ARNAUD                         =
= Laboratoire de l'Accelerateur Lineaire =
= CNRS/IN2P3 & Université Paris-Sud      =
= Virgo Experiment                       =
=                                        =
= European Gravitational Observatory     =
= Via E. Amaldi, 5                       =
= 56021 Santo Stefano a Macerata         =
= Cascina (PI) -- Italia                 =
= Tel: + 39 050 752 314                  =
==========================================
000 (285332.000.000) 02/14 14:56:34 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_gps_numerology
...
000 (285333.000.000) 02/14 14:56:34 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_virgo_noise
...
000 (285334.000.000) 02/14 14:56:34 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_virgo_status
...
000 (285335.000.000) 02/14 14:56:34 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_dqprint_brmsmon
...
000 (285336.000.000) 02/14 14:56:34 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_dqprint_dqflags
...
001 (285333.000.000) 02/14 14:56:35 Job executing on host: <90.147.139.65:9618?addrs=90.147.139.65-9618+[--1]-9618&noUDP&sock=3362_a17e_3>
...
001 (285332.000.000) 02/14 14:56:36 Job executing on host: <90.147.139.75:9618?addrs=90.147.139.75-9618+[--1]-9618&noUDP&sock=3224_f96c_3>
...
001 (285334.000.000) 02/14 14:56:39 Job executing on host: <90.147.139.42:9618?addrs=90.147.139.42-9618+[--1]-9618&noUDP&sock=3236_348f_3>
...
000 (285347.000.000) 02/14 14:56:39 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_omicronscanhoftV1
...
000 (285348.000.000) 02/14 14:56:39 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_omicronscanhoftH1
...
000 (285349.000.000) 02/14 14:56:39 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_omicronscanhoftL1
...
000 (285350.000.000) 02/14 14:56:39 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_omicronscanfull2048
...
000 (285351.000.000) 02/14 14:56:39 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_omicronscanfull512
...
006 (285332.000.000) 02/14 14:56:40 Image size of job updated: 85296
	0  -  MemoryUsage of job (MB)
	0  -  ResidentSetSize of job (KB)
...
005 (285332.000.000) 02/14 14:56:40 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:00:01, Sys 0 00:00:00  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:00:01, Sys 0 00:00:00  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :       15       15     90530
	   Memory (MB)          :        0        1         1
...
001 (285335.000.000) 02/14 14:56:40 Job executing on host: <90.147.139.75:9618?addrs=90.147.139.75-9618+[--1]-9618&noUDP&sock=3224_f96c_3>
...
006 (285333.000.000) 02/14 14:56:41 Image size of job updated: 300528
	0  -  MemoryUsage of job (MB)
	0  -  ResidentSetSize of job (KB)
...
005 (285333.000.000) 02/14 14:56:41 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:00:03, Sys 0 00:00:00  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:00:03, Sys 0 00:00:00  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :        1        1     90663
	   Memory (MB)          :        0        1         1
...
001 (285336.000.000) 02/14 14:56:42 Job executing on host: <90.147.139.65:9618?addrs=90.147.139.65-9618+[--1]-9618&noUDP&sock=3362_a17e_3>
...
000 (285358.000.000) 02/14 14:56:45 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_omicronplot
...
000 (285359.000.000) 02/14 14:56:45 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_query_ingv_public_data
...
006 (285336.000.000) 02/14 14:56:45 Image size of job updated: 48208
	0  -  MemoryUsage of job (MB)
	0  -  ResidentSetSize of job (KB)
...
005 (285336.000.000) 02/14 14:56:45 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:00:01, Sys 0 00:00:00  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:00:01, Sys 0 00:00:00  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :       75       75     90663
	   Memory (MB)          :        0        1         1
...
000 (285360.000.000) 02/14 14:56:45 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_scan_logfiles
...
000 (285361.000.000) 02/14 14:56:45 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_decode_DMS_snapshots
...
000 (285362.000.000) 02/14 14:56:45 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_upv
...
006 (285335.000.000) 02/14 14:56:46 Image size of job updated: 29016
	0  -  MemoryUsage of job (MB)
	0  -  ResidentSetSize of job (KB)
...
005 (285335.000.000) 02/14 14:56:46 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:00:01, Sys 0 00:00:00  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:00:01, Sys 0 00:00:00  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :       75       75     90530
	   Memory (MB)          :        0        1         1
...
006 (285334.000.000) 02/14 14:56:48 Image size of job updated: 1540808
	1505  -  MemoryUsage of job (MB)
	1540804  -  ResidentSetSize of job (KB)
...
001 (285359.000.000) 02/14 14:56:49 Job executing on host: <90.147.139.58:9618?addrs=90.147.139.58-9618+[--1]-9618&noUDP&sock=3374_dca1_3>
...
001 (285350.000.000) 02/14 14:56:49 Job executing on host: <90.147.139.60:9618?addrs=90.147.139.60-9618+[--1]-9618&noUDP&sock=3379_4a9a_3>
...
001 (285351.000.000) 02/14 14:56:49 Job executing on host: <90.147.139.44:9618?addrs=90.147.139.44-9618+[--1]-9618&noUDP&sock=3351_15f3_3>
...
001 (285348.000.000) 02/14 14:56:49 Job executing on host: <90.147.139.61:9618?addrs=90.147.139.61-9618+[--1]-9618&noUDP&sock=3387_c75c_3>
...
001 (285349.000.000) 02/14 14:56:49 Job executing on host: <90.147.139.48:9618?addrs=90.147.139.48-9618+[--1]-9618&noUDP&sock=4056_a838_3>
...
001 (285347.000.000) 02/14 14:56:50 Job executing on host: <90.147.139.43:9618?addrs=90.147.139.43-9618+[--1]-9618&noUDP&sock=3358_e31d_3>
...
001 (285361.000.000) 02/14 14:56:50 Job executing on host: <90.147.139.78:9618?addrs=90.147.139.78-9618+[--1]-9618&noUDP&sock=3368_bf10_3>
...
001 (285362.000.000) 02/14 14:56:50 Job executing on host: <90.147.139.41:9618?addrs=90.147.139.41-9618+[--1]-9618&noUDP&sock=3366_5fdf_3>
...
001 (285358.000.000) 02/14 14:56:50 Job executing on host: <90.147.139.82:9618?addrs=90.147.139.82-9618+[--1]-9618&noUDP&sock=3373_2d09_3>
...
001 (285360.000.000) 02/14 14:56:50 Job executing on host: <90.147.139.51:9618?addrs=90.147.139.51-9618+[--1]-9618&noUDP&sock=3361_f1e6_3>
...
000 (285363.000.000) 02/14 14:56:50 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_bruco
...
000 (285364.000.000) 02/14 14:56:50 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_data_ref_comparison_INJ
...
000 (285365.000.000) 02/14 14:56:50 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_data_ref_comparison_ISC
...
000 (285366.000.000) 02/14 14:56:50 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_generate_dqr_json
...
000 (285367.000.000) 02/14 14:56:50 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_virgo_noise_json
...
001 (285363.000.000) 02/14 14:56:50 Job executing on host: <90.147.139.65:9618?addrs=90.147.139.65-9618+[--1]-9618&noUDP&sock=3362_a17e_3>
...
006 (285363.000.000) 02/14 14:56:50 Image size of job updated: 35
	0  -  MemoryUsage of job (MB)
	0  -  ResidentSetSize of job (KB)
...
005 (285363.000.000) 02/14 14:56:50 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :       35       35     90663
	   Memory (MB)          :        0        1         1
...
001 (285366.000.000) 02/14 14:56:50 Job executing on host: <90.147.139.65:9618?addrs=90.147.139.65-9618+[--1]-9618&noUDP&sock=3362_a17e_3>
...
001 (285364.000.000) 02/14 14:56:51 Job executing on host: <90.147.139.53:9618?addrs=90.147.139.53-9618+[--1]-9618&noUDP&sock=3369_6ea8_3>
...
001 (285365.000.000) 02/14 14:56:51 Job executing on host: <90.147.139.45:9618?addrs=90.147.139.45-9618+[--1]-9618&noUDP&sock=3356_83ed_3>
...
006 (285365.000.000) 02/14 14:56:51 Image size of job updated: 7
	0  -  MemoryUsage of job (MB)
	0  -  ResidentSetSize of job (KB)
...
005 (285365.000.000) 02/14 14:56:51 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :        7        7     90477
	   Memory (MB)          :        0        1         1
...
001 (285367.000.000) 02/14 14:56:53 Job executing on host: <90.147.139.45:9618?addrs=90.147.139.45-9618+[--1]-9618&noUDP&sock=3356_83ed_3>
...
006 (285367.000.000) 02/14 14:56:53 Image size of job updated: 2
	0  -  MemoryUsage of job (MB)
	0  -  ResidentSetSize of job (KB)
...
005 (285367.000.000) 02/14 14:56:53 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :        2        2     90477
	   Memory (MB)          :        0        1         1
...
006 (285366.000.000) 02/14 14:56:53 Image size of job updated: 10
	0  -  MemoryUsage of job (MB)
	0  -  ResidentSetSize of job (KB)
...
005 (285366.000.000) 02/14 14:56:53 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:00:01, Sys 0 00:00:00  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:00:01, Sys 0 00:00:00  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :       10       10     90663
	   Memory (MB)          :        0        1         1
...
006 (285364.000.000) 02/14 14:56:53 Image size of job updated: 7
	0  -  MemoryUsage of job (MB)
	0  -  ResidentSetSize of job (KB)
...
005 (285364.000.000) 02/14 14:56:54 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :        7        7     90410
	   Memory (MB)          :        0        1         1
...
000 (285369.000.000) 02/14 14:56:55 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_dqprint_dqflags_json
...
000 (285371.000.000) 02/14 14:56:55 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_dqprint_brmsmon_json
...
001 (285369.000.000) 02/14 14:56:56 Job executing on host: <90.147.139.64:9618?addrs=90.147.139.64-9618+[--1]-9618&noUDP&sock=3230_16fe_3>
...
006 (285358.000.000) 02/14 14:56:56 Image size of job updated: 317764
	0  -  MemoryUsage of job (MB)
	0  -  ResidentSetSize of job (KB)
...
005 (285358.000.000) 02/14 14:56:56 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :        2        2     90534
	   Memory (MB)          :        0        1         1
...
001 (285371.000.000) 02/14 14:56:56 Job executing on host: <90.147.139.100:9618?addrs=90.147.139.100-9618+[--1]-9618&noUDP&sock=3253_ddab_3>
...
006 (285362.000.000) 02/14 14:56:56 Image size of job updated: 17740
	0  -  MemoryUsage of job (MB)
	0  -  ResidentSetSize of job (KB)
...
005 (285362.000.000) 02/14 14:56:56 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :        2        2     90575
	   Memory (MB)          :        0        1         1
...
006 (285369.000.000) 02/14 14:56:56 Image size of job updated: 3
	0  -  MemoryUsage of job (MB)
	0  -  ResidentSetSize of job (KB)
...
005 (285369.000.000) 02/14 14:56:56 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :        3        3     90641
	   Memory (MB)          :        0        1         1
...
006 (285371.000.000) 02/14 14:56:56 Image size of job updated: 3
	0  -  MemoryUsage of job (MB)
	0  -  ResidentSetSize of job (KB)
...
005 (285371.000.000) 02/14 14:56:56 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :        3        3     90552
	   Memory (MB)          :        0        1         1
...
006 (285359.000.000) 02/14 14:56:57 Image size of job updated: 69900
	0  -  MemoryUsage of job (MB)
	0  -  ResidentSetSize of job (KB)
...
005 (285359.000.000) 02/14 14:56:57 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:00:01, Sys 0 00:00:00  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:00:01, Sys 0 00:00:00  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :        7        7     90585
	   Memory (MB)          :        0        1         1
...
006 (285347.000.000) 02/14 14:56:58 Image size of job updated: 5085100
	4966  -  MemoryUsage of job (MB)
	5085100  -  ResidentSetSize of job (KB)
...
006 (285361.000.000) 02/14 14:56:58 Image size of job updated: 75320
	74  -  MemoryUsage of job (MB)
	75316  -  ResidentSetSize of job (KB)
...
006 (285360.000.000) 02/14 14:56:58 Image size of job updated: 74116
	73  -  MemoryUsage of job (MB)
	74112  -  ResidentSetSize of job (KB)
...
006 (285348.000.000) 02/14 14:56:58 Image size of job updated: 5088884
	4970  -  MemoryUsage of job (MB)
	5088884  -  ResidentSetSize of job (KB)
...
006 (285351.000.000) 02/14 14:56:58 Image size of job updated: 1530580
	1495  -  MemoryUsage of job (MB)
	1530580  -  ResidentSetSize of job (KB)
...
006 (285350.000.000) 02/14 14:56:58 Image size of job updated: 2022152
	1975  -  MemoryUsage of job (MB)
	2022152  -  ResidentSetSize of job (KB)
...
006 (285349.000.000) 02/14 14:56:58 Image size of job updated: 5087252
	4969  -  MemoryUsage of job (MB)
	5087252  -  ResidentSetSize of job (KB)
...
006 (285361.000.000) 02/14 14:57:00 Image size of job updated: 82348
	74  -  MemoryUsage of job (MB)
	75332  -  ResidentSetSize of job (KB)
...
005 (285361.000.000) 02/14 14:57:00 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:00:01, Sys 0 00:00:00  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:00:01, Sys 0 00:00:00  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :       17       17     90541
	   Memory (MB)          :       74        1         1
...
000 (285381.000.000) 02/14 14:57:01 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_bruco_std
...
000 (285382.000.000) 02/14 14:57:01 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_bruco_std-prev
...
000 (285383.000.000) 02/14 14:57:01 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_bruco_env
...
000 (285385.000.000) 02/14 14:57:02 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_bruco_env-prev
...
000 (285387.000.000) 02/14 14:57:02 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_data_ref_comparison_ISC_comparison
...
000 (285393.000.000) 02/14 14:57:08 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_data_ref_comparison_INJ_comparison
...
000 (285394.000.000) 02/14 14:57:08 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_omicronplot_exe
...
000 (285395.000.000) 02/14 14:57:08 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_upv_exe
...
001 (285387.000.000) 02/14 14:57:10 Job executing on host: <90.147.139.45:9618?addrs=90.147.139.45-9618+[--1]-9618&noUDP&sock=3356_83ed_3>
...
001 (285395.000.000) 02/14 14:57:10 Job executing on host: <90.147.139.88:9618?addrs=90.147.139.88-9618+[--1]-9618&noUDP&sock=3364_00af_3>
...
001 (285394.000.000) 02/14 14:57:10 Job executing on host: <90.147.139.69:9618?addrs=90.147.139.69-9618+[--1]-9618&noUDP&sock=3360_424e_3>
...
001 (285383.000.000) 02/14 14:57:10 Job executing on host: <90.147.139.65:9618?addrs=90.147.139.65-9618+[--1]-9618&noUDP&sock=3362_a17e_3>
...
001 (285385.000.000) 02/14 14:57:10 Job executing on host: <90.147.139.43:9618?addrs=90.147.139.43-9618+[--1]-9618&noUDP&sock=3358_e31d_3>
...
001 (285393.000.000) 02/14 14:57:10 Job executing on host: <90.147.139.52:9618?addrs=90.147.139.52-9618+[--1]-9618&noUDP&sock=3351_15f3_3>
...
001 (285381.000.000) 02/14 14:57:10 Job executing on host: <90.147.139.42:9618?addrs=90.147.139.42-9618+[--1]-9618&noUDP&sock=3236_348f_3>
...
001 (285382.000.000) 02/14 14:57:10 Job executing on host: <90.147.139.53:9618?addrs=90.147.139.53-9618+[--1]-9618&noUDP&sock=3369_6ea8_3>
...
006 (285394.000.000) 02/14 14:57:15 Image size of job updated: 34860
	0  -  MemoryUsage of job (MB)
	0  -  ResidentSetSize of job (KB)
...
005 (285394.000.000) 02/14 14:57:15 Job terminated.
	(1) Normal termination (return value 2)
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :      100      100     90498
	   Memory (MB)          :        0        1         1
...
006 (285395.000.000) 02/14 14:57:18 Image size of job updated: 133080
	130  -  MemoryUsage of job (MB)
	133080  -  ResidentSetSize of job (KB)
...
006 (285387.000.000) 02/14 14:57:18 Image size of job updated: 159996
	157  -  MemoryUsage of job (MB)
	159992  -  ResidentSetSize of job (KB)
...
006 (285383.000.000) 02/14 14:57:18 Image size of job updated: 1793080
	1752  -  MemoryUsage of job (MB)
	1793076  -  ResidentSetSize of job (KB)
...
006 (285393.000.000) 02/14 14:57:18 Image size of job updated: 219072
	214  -  MemoryUsage of job (MB)
	219068  -  ResidentSetSize of job (KB)
...
006 (285385.000.000) 02/14 14:57:18 Image size of job updated: 82552
	81  -  MemoryUsage of job (MB)
	82500  -  ResidentSetSize of job (KB)
...
006 (285382.000.000) 02/14 14:57:18 Image size of job updated: 1315992
	1286  -  MemoryUsage of job (MB)
	1315988  -  ResidentSetSize of job (KB)
...
006 (285381.000.000) 02/14 14:57:18 Image size of job updated: 1793324
	1752  -  MemoryUsage of job (MB)
	1793320  -  ResidentSetSize of job (KB)
...
006 (285334.000.000) 02/14 14:57:37 Image size of job updated: 8390456
	1505  -  MemoryUsage of job (MB)
	1540804  -  ResidentSetSize of job (KB)
...
005 (285334.000.000) 02/14 14:57:37 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:00:22, Sys 0 00:00:05  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:00:22, Sys 0 00:00:05  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :       30       30     90501
	   Memory (MB)          :     1505        1         1
...
006 (285347.000.000) 02/14 14:58:05 Image size of job updated: 5102868
	4966  -  MemoryUsage of job (MB)
	5085100  -  ResidentSetSize of job (KB)
...
005 (285347.000.000) 02/14 14:58:05 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:01:01, Sys 0 00:00:03  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:01:01, Sys 0 00:00:03  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :       47       47     90454
	   Memory (MB)          :     4966        1         1
...
000 (285419.000.000) 02/14 14:58:13 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_omicronscanhoftV1_json
...
001 (285419.000.000) 02/14 14:58:13 Job executing on host: <90.147.139.43:9618?addrs=90.147.139.43-9618+[--1]-9618&noUDP&sock=3358_e31d_3>
...
006 (285419.000.000) 02/14 14:58:15 Image size of job updated: 2924
	0  -  MemoryUsage of job (MB)
	0  -  ResidentSetSize of job (KB)
...
005 (285419.000.000) 02/14 14:58:15 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :        3        3     90454
	   Memory (MB)          :        0        1         1
...
006 (285393.000.000) 02/14 14:58:18 Image size of job updated: 526708
	214  -  MemoryUsage of job (MB)
	219068  -  ResidentSetSize of job (KB)
...
005 (285393.000.000) 02/14 14:58:18 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:01:02, Sys 0 00:00:03  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:01:02, Sys 0 00:00:03  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :       27       27     90577
	   Memory (MB)          :      214        1         1
...
006 (285387.000.000) 02/14 14:59:42 Image size of job updated: 249176
	157  -  MemoryUsage of job (MB)
	159992  -  ResidentSetSize of job (KB)
...
005 (285387.000.000) 02/14 14:59:42 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:02:26, Sys 0 00:00:00  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:02:26, Sys 0 00:00:00  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :       27       27     90477
	   Memory (MB)          :      157        1         1
...
006 (285348.000.000) 02/14 15:01:20 Image size of job updated: 5107048
	4970  -  MemoryUsage of job (MB)
	5088884  -  ResidentSetSize of job (KB)
...
005 (285348.000.000) 02/14 15:01:20 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:04:07, Sys 0 00:00:03  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:04:07, Sys 0 00:00:03  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :       47       47     90432
	   Memory (MB)          :     4970        1         1
...
000 (285426.000.000) 02/14 15:01:29 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_omicronscanhoftH1_json
...
001 (285426.000.000) 02/14 15:01:29 Job executing on host: <90.147.139.61:9618?addrs=90.147.139.61-9618+[--1]-9618&noUDP&sock=3387_c75c_3>
...
006 (285426.000.000) 02/14 15:01:30 Image size of job updated: 3
	0  -  MemoryUsage of job (MB)
	0  -  ResidentSetSize of job (KB)
...
005 (285426.000.000) 02/14 15:01:30 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :        3        3     90432
	   Memory (MB)          :        0        1         1
...
006 (285360.000.000) 02/14 15:01:58 Image size of job updated: 89004
	87  -  MemoryUsage of job (MB)
	89000  -  ResidentSetSize of job (KB)
...
006 (285350.000.000) 02/14 15:01:59 Image size of job updated: 2263508
	2211  -  MemoryUsage of job (MB)
	2263508  -  ResidentSetSize of job (KB)
...
006 (285349.000.000) 02/14 15:01:59 Image size of job updated: 5106976
	4988  -  MemoryUsage of job (MB)
	5106976  -  ResidentSetSize of job (KB)
...
006 (285351.000.000) 02/14 15:01:59 Image size of job updated: 2036496
	1989  -  MemoryUsage of job (MB)
	2036496  -  ResidentSetSize of job (KB)
...
006 (285395.000.000) 02/14 15:02:18 Image size of job updated: 200292
	196  -  MemoryUsage of job (MB)
	200292  -  ResidentSetSize of job (KB)
...
006 (285382.000.000) 02/14 15:02:18 Image size of job updated: 1324600
	1294  -  MemoryUsage of job (MB)
	1324596  -  ResidentSetSize of job (KB)
...
006 (285385.000.000) 02/14 15:02:18 Image size of job updated: 1325464
	1295  -  MemoryUsage of job (MB)
	1325200  -  ResidentSetSize of job (KB)
...
006 (285381.000.000) 02/14 15:02:18 Image size of job updated: 16629772
	15637  -  MemoryUsage of job (MB)
	16011580  -  ResidentSetSize of job (KB)
...
006 (285383.000.000) 02/14 15:02:18 Image size of job updated: 1801600
	1752  -  MemoryUsage of job (MB)
	1793076  -  ResidentSetSize of job (KB)
...
005 (285349.000.000) 02/14 15:02:18 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:04:53, Sys 0 00:00:03  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:04:53, Sys 0 00:00:03  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :       47       47     90478
	   Memory (MB)          :     4988        1         1
...
000 (285431.000.000) 02/14 15:02:25 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_omicronscanhoftL1_json
...
001 (285431.000.000) 02/14 15:02:25 Job executing on host: <90.147.139.48:9618?addrs=90.147.139.48-9618+[--1]-9618&noUDP&sock=4056_a838_3>
...
006 (285431.000.000) 02/14 15:02:26 Image size of job updated: 3
	0  -  MemoryUsage of job (MB)
	0  -  ResidentSetSize of job (KB)
...
005 (285431.000.000) 02/14 15:02:26 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :        3        3     90478
	   Memory (MB)          :        0        1         1
...
006 (285385.000.000) 02/14 15:04:02 Image size of job updated: 1342684
	1295  -  MemoryUsage of job (MB)
	1325200  -  ResidentSetSize of job (KB)
...
005 (285385.000.000) 02/14 15:04:02 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:05:53, Sys 0 00:00:07  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:05:53, Sys 0 00:00:07  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :       35       35     90454
	   Memory (MB)          :     1295        1         1
...
005 (285395.000.000) 02/14 15:04:33 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:04:29, Sys 0 00:00:16  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:04:29, Sys 0 00:00:16  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :       75       75     90538
	   Memory (MB)          :      196        1         1
...
000 (285433.000.000) 02/14 15:04:41 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_upv_json
...
001 (285433.000.000) 02/14 15:04:41 Job executing on host: <90.147.139.69:9618?addrs=90.147.139.69-9618+[--1]-9618&noUDP&sock=3360_424e_3>
...
006 (285433.000.000) 02/14 15:04:43 Image size of job updated: 15
	0  -  MemoryUsage of job (MB)
	0  -  ResidentSetSize of job (KB)
...
005 (285433.000.000) 02/14 15:04:43 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :       15       15     90498
	   Memory (MB)          :        0        1         1
...
006 (285350.000.000) 02/14 15:06:59 Image size of job updated: 2352724
	2298  -  MemoryUsage of job (MB)
	2352724  -  ResidentSetSize of job (KB)
...
006 (285351.000.000) 02/14 15:06:59 Image size of job updated: 2308136
	2255  -  MemoryUsage of job (MB)
	2308136  -  ResidentSetSize of job (KB)
...
006 (285360.000.000) 02/14 15:06:59 Image size of job updated: 89028
	87  -  MemoryUsage of job (MB)
	89024  -  ResidentSetSize of job (KB)
...
006 (285381.000.000) 02/14 15:07:19 Image size of job updated: 16629772
	15771  -  MemoryUsage of job (MB)
	16148796  -  ResidentSetSize of job (KB)
...
006 (285383.000.000) 02/14 15:07:19 Image size of job updated: 1802128
	1752  -  MemoryUsage of job (MB)
	1793076  -  ResidentSetSize of job (KB)
...
006 (285350.000.000) 02/14 15:11:59 Image size of job updated: 2537728
	2479  -  MemoryUsage of job (MB)
	2537728  -  ResidentSetSize of job (KB)
...
006 (285351.000.000) 02/14 15:12:00 Image size of job updated: 2475736
	2418  -  MemoryUsage of job (MB)
	2475736  -  ResidentSetSize of job (KB)
...
006 (285383.000.000) 02/14 15:12:19 Image size of job updated: 1802396
	1752  -  MemoryUsage of job (MB)
	1793076  -  ResidentSetSize of job (KB)
...
006 (285351.000.000) 02/14 15:16:25 Image size of job updated: 2999684
	2418  -  MemoryUsage of job (MB)
	2475736  -  ResidentSetSize of job (KB)
...
005 (285351.000.000) 02/14 15:16:25 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:10:20, Sys 0 00:00:39  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:10:20, Sys 0 00:00:39  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :       47       47     90386
	   Memory (MB)          :     2418        1         1
...
000 (285442.000.000) 02/14 15:16:34 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_omicronscanfull512_json
...
001 (285442.000.000) 02/14 15:16:34 Job executing on host: <90.147.139.52:9618?addrs=90.147.139.52-9618+[--1]-9618&noUDP&sock=3351_15f3_3>
...
006 (285442.000.000) 02/14 15:16:34 Image size of job updated: 3
	0  -  MemoryUsage of job (MB)
	0  -  ResidentSetSize of job (KB)
...
005 (285442.000.000) 02/14 15:16:34 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :        3        3     90577
	   Memory (MB)          :        0        1         1
...
006 (285350.000.000) 02/14 15:17:01 Image size of job updated: 2807404
	2742  -  MemoryUsage of job (MB)
	2807404  -  ResidentSetSize of job (KB)
...
006 (285360.000.000) 02/14 15:22:00 Image size of job updated: 89044
	87  -  MemoryUsage of job (MB)
	89040  -  ResidentSetSize of job (KB)
...
006 (285350.000.000) 02/14 15:22:01 Image size of job updated: 3096144
	3024  -  MemoryUsage of job (MB)
	3096144  -  ResidentSetSize of job (KB)
...
006 (285383.000.000) 02/14 15:22:20 Image size of job updated: 1803764
	1752  -  MemoryUsage of job (MB)
	1793076  -  ResidentSetSize of job (KB)
...
006 (285350.000.000) 02/14 15:27:01 Image size of job updated: 3349984
	3272  -  MemoryUsage of job (MB)
	3349984  -  ResidentSetSize of job (KB)
...
006 (285360.000.000) 02/14 15:27:01 Image size of job updated: 89048
	87  -  MemoryUsage of job (MB)
	89044  -  ResidentSetSize of job (KB)
...
006 (285383.000.000) 02/14 15:27:21 Image size of job updated: 1803768
	1752  -  MemoryUsage of job (MB)
	1793076  -  ResidentSetSize of job (KB)
...
005 (285383.000.000) 02/14 15:30:19 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:30:58, Sys 0 00:00:11  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:30:58, Sys 0 00:00:11  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :       35       35     90663
	   Memory (MB)          :     1752        1         1
...
001 (285383.000.000) 02/14 15:30:21 Job executing on host: <90.147.139.65:9618?addrs=90.147.139.65-9618+[--1]-9618&noUDP&sock=3362_a17e_3>
...
006 (285383.000.000) 02/14 15:30:29 Image size of job updated: 68644
	67  -  MemoryUsage of job (MB)
	68500  -  ResidentSetSize of job (KB)
...
006 (285383.000.000) 02/14 15:31:05 Image size of job updated: 8393792
	67  -  MemoryUsage of job (MB)
	68500  -  ResidentSetSize of job (KB)
...
005 (285383.000.000) 02/14 15:31:05 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:00:21, Sys 0 00:00:05  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:00:21, Sys 0 00:00:05  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :       30       30     90663
	   Memory (MB)          :       67        1         1
...
006 (285350.000.000) 02/14 15:32:01 Image size of job updated: 3520660
	3439  -  MemoryUsage of job (MB)
	3520660  -  ResidentSetSize of job (KB)
...
006 (285360.000.000) 02/14 15:32:02 Image size of job updated: 89052
	87  -  MemoryUsage of job (MB)
	89048  -  ResidentSetSize of job (KB)
...
005 (285382.000.000) 02/14 15:34:02 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:30:02, Sys 0 00:00:44  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:30:02, Sys 0 00:00:44  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :       35       35     90410
	   Memory (MB)          :     1294        1         1
...
006 (285350.000.000) 02/14 15:37:02 Image size of job updated: 3677812
	3592  -  MemoryUsage of job (MB)
	3677812  -  ResidentSetSize of job (KB)
...
006 (285350.000.000) 02/14 15:42:02 Image size of job updated: 4054476
	3960  -  MemoryUsage of job (MB)
	4054476  -  ResidentSetSize of job (KB)
...
006 (285350.000.000) 02/14 15:47:02 Image size of job updated: 4389684
	4287  -  MemoryUsage of job (MB)
	4389684  -  ResidentSetSize of job (KB)
...
006 (285350.000.000) 02/14 15:48:55 Image size of job updated: 4450700
	4287  -  MemoryUsage of job (MB)
	4389684  -  ResidentSetSize of job (KB)
...
005 (285350.000.000) 02/14 15:48:55 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:33:56, Sys 0 00:02:30  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:33:56, Sys 0 00:02:30  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :       47       47     90599
	   Memory (MB)          :     4287        1         1
...
000 (285452.000.000) 02/14 15:49:02 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_omicronscanfull2048_json
...
001 (285452.000.000) 02/14 15:49:03 Job executing on host: <90.147.139.60:9618?addrs=90.147.139.60-9618+[--1]-9618&noUDP&sock=3379_4a9a_3>
...
006 (285452.000.000) 02/14 15:49:07 Image size of job updated: 3
	0  -  MemoryUsage of job (MB)
	0  -  ResidentSetSize of job (KB)
...
005 (285452.000.000) 02/14 15:49:07 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :        3        3     90599
	   Memory (MB)          :        0        1         1
...
005 (285360.000.000) 02/14 15:49:12 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:01:25, Sys 0 00:00:25  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:01:25, Sys 0 00:00:25  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :       20       20     90503
	   Memory (MB)          :       87        1         1
...
005 (285381.000.000) 02/14 16:49:28 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 01:40:37, Sys 0 00:07:39  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 01:40:37, Sys 0 00:07:39  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :       35       35     90501
	   Memory (MB)          :    15771        1         1
...
000 (285468.000.000) 02/14 16:49:34 Job submitted from host: <90.147.139.39:9618?addrs=90.147.139.39-9618+[--1]-9618&noUDP&sock=4773_75a0_3>
    DAG Node: test_20190214_narnaud_9_bruco_json
...
001 (285468.000.000) 02/14 16:49:34 Job executing on host: <90.147.139.42:9618?addrs=90.147.139.42-9618+[--1]-9618&noUDP&sock=3236_348f_3>
...
006 (285468.000.000) 02/14 16:49:35 Image size of job updated: 3
	0  -  MemoryUsage of job (MB)
	0  -  ResidentSetSize of job (KB)
...
005 (285468.000.000) 02/14 16:49:35 Job terminated.
	(1) Normal termination (return value 0)
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Remote Usage
		Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
	0  -  Run Bytes Sent By Job
	0  -  Run Bytes Received By Job
	0  -  Total Bytes Sent By Job
	0  -  Total Bytes Received By Job
	Partitionable Resources :    Usage  Request Allocated
	   Cpus                 :                 1         1
	   Disk (KB)            :        3        3     90501
	   Memory (MB)          :        0        1         1
...
02/14/19 14:56:30 ******************************************************
02/14/19 14:56:30 ** condor_scheduniv_exec.285314.0 (CONDOR_DAGMAN) STARTING UP
02/14/19 14:56:30 ** /usr/bin/condor_dagman
02/14/19 14:56:30 ** SubsystemInfo: name=DAGMAN type=DAGMAN(10) class=DAEMON(1)
02/14/19 14:56:30 ** Configuration: subsystem:DAGMAN local:<NONE> class:DAEMON
02/14/19 14:56:30 ** $CondorVersion: 8.6.13 Oct 30 2018 BuildID: 453497 $
02/14/19 14:56:30 ** $CondorPlatform: x86_64_RedHat7 $
02/14/19 14:56:30 ** PID = 2220820
02/14/19 14:56:30 ** Log last touched time unavailable (No such file or directory)
02/14/19 14:56:30 ******************************************************
02/14/19 14:56:30 Using config source: /etc/condor/condor_config
02/14/19 14:56:30 Using local config sources: 
02/14/19 14:56:30    /etc/condor/condor_config.local
02/14/19 14:56:30 config Macros = 205, Sorted = 205, StringBytes = 7386, TablesBytes = 7428
02/14/19 14:56:30 CLASSAD_CACHING is ENABLED
02/14/19 14:56:30 Daemon Log is logging: D_ALWAYS D_ERROR
02/14/19 14:56:30 DaemonCore: No command port requested.
02/14/19 14:56:30 Using DAGMan config file: /virgoData/VirgoDQR/Parameters/dag.config
02/14/19 14:56:30 DAGMAN_USE_STRICT setting: 1
02/14/19 14:56:30 DAGMAN_VERBOSITY setting: 3
02/14/19 14:56:30 DAGMAN_DEBUG_CACHE_SIZE setting: 5242880
02/14/19 14:56:30 DAGMAN_DEBUG_CACHE_ENABLE setting: False
02/14/19 14:56:30 DAGMAN_SUBMIT_DELAY setting: 0
02/14/19 14:56:30 DAGMAN_MAX_SUBMIT_ATTEMPTS setting: 6
02/14/19 14:56:30 DAGMAN_STARTUP_CYCLE_DETECT setting: False
02/14/19 14:56:30 DAGMAN_MAX_SUBMITS_PER_INTERVAL setting: 5
02/14/19 14:56:30 DAGMAN_USER_LOG_SCAN_INTERVAL setting: 5
02/14/19 14:56:30 DAGMAN_DEFAULT_PRIORITY setting: 0
02/14/19 14:56:30 DAGMAN_SUPPRESS_NOTIFICATION setting: True
02/14/19 14:56:30 allow_events (DAGMAN_ALLOW_EVENTS) setting: 5
02/14/19 14:56:30 DAGMAN_RETRY_SUBMIT_FIRST setting: True
02/14/19 14:56:30 DAGMAN_RETRY_NODE_FIRST setting: False
02/14/19 14:56:30 DAGMAN_MAX_JOBS_IDLE setting: 1000
02/14/19 14:56:30 DAGMAN_MAX_JOBS_SUBMITTED setting: 0
02/14/19 14:56:30 DAGMAN_MAX_PRE_SCRIPTS setting: 20
02/14/19 14:56:30 DAGMAN_MAX_POST_SCRIPTS setting: 20
02/14/19 14:56:30 DAGMAN_MUNGE_NODE_NAMES setting: True
02/14/19 14:56:30 DAGMAN_PROHIBIT_MULTI_JOBS setting: False
02/14/19 14:56:30 DAGMAN_SUBMIT_DEPTH_FIRST setting: False
02/14/19 14:56:30 DAGMAN_ALWAYS_RUN_POST setting: False
02/14/19 14:56:30 DAGMAN_ABORT_DUPLICATES setting: True
02/14/19 14:56:30 DAGMAN_ABORT_ON_SCARY_SUBMIT setting: True
02/14/19 14:56:30 DAGMAN_PENDING_REPORT_INTERVAL setting: 600
02/14/19 14:56:30 DAGMAN_AUTO_RESCUE setting: True
02/14/19 14:56:30 DAGMAN_MAX_RESCUE_NUM setting: 100
02/14/19 14:56:30 DAGMAN_WRITE_PARTIAL_RESCUE setting: True
02/14/19 14:56:30 DAGMAN_DEFAULT_NODE_LOG setting: @(DAG_DIR)/@(DAG_FILE).nodes.log
02/14/19 14:56:30 DAGMAN_GENERATE_SUBDAG_SUBMITS setting: True
02/14/19 14:56:30 DAGMAN_MAX_JOB_HOLDS setting: 100
02/14/19 14:56:30 DAGMAN_HOLD_CLAIM_TIME setting: 20
02/14/19 14:56:30 ALL_DEBUG setting: 
02/14/19 14:56:30 DAGMAN_DEBUG setting: 
02/14/19 14:56:30 DAGMAN_SUPPRESS_JOB_LOGS setting: False
02/14/19 14:56:30 DAGMAN_REMOVE_NODE_JOBS setting: True
02/14/19 14:56:30 argv[0] == "condor_scheduniv_exec.285314.0"
02/14/19 14:56:30 argv[1] == "-Lockfile"
02/14/19 14:56:30 argv[2] == "dqr_test_20190214_narnaud_9.dag.lock"
02/14/19 14:56:30 argv[3] == "-AutoRescue"
02/14/19 14:56:30 argv[4] == "1"
02/14/19 14:56:30 argv[5] == "-DoRescueFrom"
02/14/19 14:56:30 argv[6] == "0"
02/14/19 14:56:30 argv[7] == "-Dag"
02/14/19 14:56:30 argv[8] == "dqr_test_20190214_narnaud_9.dag"
02/14/19 14:56:30 argv[9] == "-Suppress_notification"
02/14/19 14:56:30 argv[10] == "-CsdVersion"
02/14/19 14:56:30 argv[11] == "$CondorVersion: 8.6.13 Oct 30 2018 BuildID: 453497 $"
02/14/19 14:56:30 argv[12] == "-Dagman"
02/14/19 14:56:30 argv[13] == "/usr/bin/condor_dagman"
02/14/19 14:56:30 Workflow batch-name: <dqr_test_20190214_narnaud_9.dag+285314>
02/14/19 14:56:30 Workflow accounting_group: <>
02/14/19 14:56:30 Workflow accounting_group_user: <>
02/14/19 14:56:30 Warning: failed to get attribute DAGNodeName
02/14/19 14:56:30 DAGMAN_LOG_ON_NFS_IS_ERROR setting: False
02/14/19 14:56:30 Default node log file is: </data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log>
02/14/19 14:56:30 DAG Lockfile will be written to dqr_test_20190214_narnaud_9.dag.lock
02/14/19 14:56:30 DAG Input file is dqr_test_20190214_narnaud_9.dag
02/14/19 14:56:30 Parsing 1 dagfiles
02/14/19 14:56:30 Parsing dqr_test_20190214_narnaud_9.dag ...
02/14/19 14:56:30 Dag contains 38 total jobs
02/14/19 14:56:30 Sleeping for 3 seconds to ensure ProcessId uniqueness
02/14/19 14:56:33 Bootstrapping...
02/14/19 14:56:33 Number of pre-completed nodes: 0
02/14/19 14:56:33 MultiLogFiles: truncating log file /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:56:33 DAG status: 0 (DAG_STATUS_OK)
02/14/19 14:56:33 Of 38 nodes total:
02/14/19 14:56:33  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 14:56:33   ===     ===      ===     ===     ===        ===      ===
02/14/19 14:56:33     0       0        0       0      19         19        0
02/14/19 14:56:33 0 job proc(s) currently held
02/14/19 14:56:33 Registering condor_event_timer...
02/14/19 14:56:34 Submitting HTCondor Node test_20190214_narnaud_9_gps_numerology job(s)...
02/14/19 14:56:34 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:56:34 Masking the events recorded in the DAGMAN workflow log
02/14/19 14:56:34 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 14:56:34 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_gps_numerology -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_gps_numerology -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" gps_numerology.sub
02/14/19 14:56:34 From submit: Submitting job(s).
02/14/19 14:56:34 From submit: 1 job(s) submitted to cluster 285332.
02/14/19 14:56:34 	assigned HTCondor ID (285332.0.0)
02/14/19 14:56:34 Submitting HTCondor Node test_20190214_narnaud_9_virgo_noise job(s)...
02/14/19 14:56:34 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:56:34 Masking the events recorded in the DAGMAN workflow log
02/14/19 14:56:34 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 14:56:34 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_virgo_noise -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_virgo_noise -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" virgo_noise.sub
02/14/19 14:56:34 From submit: Submitting job(s).
02/14/19 14:56:34 From submit: 1 job(s) submitted to cluster 285333.
02/14/19 14:56:34 	assigned HTCondor ID (285333.0.0)
02/14/19 14:56:34 Submitting HTCondor Node test_20190214_narnaud_9_virgo_status job(s)...
02/14/19 14:56:34 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:56:34 Masking the events recorded in the DAGMAN workflow log
02/14/19 14:56:34 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 14:56:34 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_virgo_status -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_virgo_status -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" virgo_status.sub
02/14/19 14:56:34 From submit: Submitting job(s).
02/14/19 14:56:34 From submit: 1 job(s) submitted to cluster 285334.
02/14/19 14:56:34 	assigned HTCondor ID (285334.0.0)
02/14/19 14:56:34 Submitting HTCondor Node test_20190214_narnaud_9_dqprint_brmsmon job(s)...
02/14/19 14:56:34 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:56:34 Masking the events recorded in the DAGMAN workflow log
02/14/19 14:56:34 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 14:56:34 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_dqprint_brmsmon -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_dqprint_brmsmon -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" dqprint_brmsmon.sub
02/14/19 14:56:34 From submit: Submitting job(s).
02/14/19 14:56:34 From submit: 1 job(s) submitted to cluster 285335.
02/14/19 14:56:34 	assigned HTCondor ID (285335.0.0)
02/14/19 14:56:34 Submitting HTCondor Node test_20190214_narnaud_9_dqprint_dqflags job(s)...
02/14/19 14:56:34 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:56:34 Masking the events recorded in the DAGMAN workflow log
02/14/19 14:56:34 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 14:56:34 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_dqprint_dqflags -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_dqprint_dqflags -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" dqprint_dqflags.sub
02/14/19 14:56:34 From submit: Submitting job(s).
02/14/19 14:56:34 From submit: 1 job(s) submitted to cluster 285336.
02/14/19 14:56:34 	assigned HTCondor ID (285336.0.0)
02/14/19 14:56:34 Just submitted 5 jobs this cycle...
02/14/19 14:56:34 DAG status: 0 (DAG_STATUS_OK)
02/14/19 14:56:34 Of 38 nodes total:
02/14/19 14:56:34  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 14:56:34   ===     ===      ===     ===     ===        ===      ===
02/14/19 14:56:34     0       0        5       0      14         19        0
02/14/19 14:56:34 0 job proc(s) currently held
02/14/19 14:56:39 Submitting HTCondor Node test_20190214_narnaud_9_omicronscanhoftV1 job(s)...
02/14/19 14:56:39 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:56:39 Masking the events recorded in the DAGMAN workflow log
02/14/19 14:56:39 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 14:56:39 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_omicronscanhoftV1 -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_omicronscanhoftV1 -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" omicronscanhoftV1.sub
02/14/19 14:56:39 From submit: Submitting job(s).
02/14/19 14:56:39 From submit: 1 job(s) submitted to cluster 285347.
02/14/19 14:56:39 	assigned HTCondor ID (285347.0.0)
02/14/19 14:56:39 Submitting HTCondor Node test_20190214_narnaud_9_omicronscanhoftH1 job(s)...
02/14/19 14:56:39 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:56:39 Masking the events recorded in the DAGMAN workflow log
02/14/19 14:56:39 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 14:56:39 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_omicronscanhoftH1 -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_omicronscanhoftH1 -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" omicronscanhoftH1.sub
02/14/19 14:56:39 From submit: Submitting job(s).
02/14/19 14:56:39 From submit: 1 job(s) submitted to cluster 285348.
02/14/19 14:56:39 	assigned HTCondor ID (285348.0.0)
02/14/19 14:56:39 Submitting HTCondor Node test_20190214_narnaud_9_omicronscanhoftL1 job(s)...
02/14/19 14:56:39 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:56:39 Masking the events recorded in the DAGMAN workflow log
02/14/19 14:56:39 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 14:56:39 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_omicronscanhoftL1 -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_omicronscanhoftL1 -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" omicronscanhoftL1.sub
02/14/19 14:56:39 From submit: Submitting job(s).
02/14/19 14:56:39 From submit: 1 job(s) submitted to cluster 285349.
02/14/19 14:56:39 	assigned HTCondor ID (285349.0.0)
02/14/19 14:56:39 Submitting HTCondor Node test_20190214_narnaud_9_omicronscanfull2048 job(s)...
02/14/19 14:56:39 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:56:39 Masking the events recorded in the DAGMAN workflow log
02/14/19 14:56:39 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 14:56:39 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_omicronscanfull2048 -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_omicronscanfull2048 -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" omicronscanfull2048.sub
02/14/19 14:56:39 From submit: Submitting job(s).
02/14/19 14:56:39 From submit: 1 job(s) submitted to cluster 285350.
02/14/19 14:56:39 	assigned HTCondor ID (285350.0.0)
02/14/19 14:56:39 Submitting HTCondor Node test_20190214_narnaud_9_omicronscanfull512 job(s)...
02/14/19 14:56:39 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:56:39 Masking the events recorded in the DAGMAN workflow log
02/14/19 14:56:39 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 14:56:39 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_omicronscanfull512 -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_omicronscanfull512 -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" omicronscanfull512.sub
02/14/19 14:56:39 From submit: Submitting job(s).
02/14/19 14:56:39 From submit: 1 job(s) submitted to cluster 285351.
02/14/19 14:56:39 	assigned HTCondor ID (285351.0.0)
02/14/19 14:56:39 Just submitted 5 jobs this cycle...
02/14/19 14:56:39 Currently monitoring 1 HTCondor log file(s)
02/14/19 14:56:39 Reassigning the id of job test_20190214_narnaud_9_gps_numerology from (285332.0.0) to (285332.0.0)
02/14/19 14:56:39 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_gps_numerology (285332.0.0) {02/14/19 14:56:34}
02/14/19 14:56:39 Number of idle job procs: 1
02/14/19 14:56:39 Reassigning the id of job test_20190214_narnaud_9_virgo_noise from (285333.0.0) to (285333.0.0)
02/14/19 14:56:39 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_virgo_noise (285333.0.0) {02/14/19 14:56:34}
02/14/19 14:56:39 Number of idle job procs: 2
02/14/19 14:56:39 Reassigning the id of job test_20190214_narnaud_9_virgo_status from (285334.0.0) to (285334.0.0)
02/14/19 14:56:39 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_virgo_status (285334.0.0) {02/14/19 14:56:34}
02/14/19 14:56:39 Number of idle job procs: 3
02/14/19 14:56:39 Reassigning the id of job test_20190214_narnaud_9_dqprint_brmsmon from (285335.0.0) to (285335.0.0)
02/14/19 14:56:39 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_dqprint_brmsmon (285335.0.0) {02/14/19 14:56:34}
02/14/19 14:56:39 Number of idle job procs: 4
02/14/19 14:56:39 Reassigning the id of job test_20190214_narnaud_9_dqprint_dqflags from (285336.0.0) to (285336.0.0)
02/14/19 14:56:39 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_dqprint_dqflags (285336.0.0) {02/14/19 14:56:34}
02/14/19 14:56:39 Number of idle job procs: 5
02/14/19 14:56:39 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_virgo_noise (285333.0.0) {02/14/19 14:56:35}
02/14/19 14:56:39 Number of idle job procs: 4
02/14/19 14:56:39 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_gps_numerology (285332.0.0) {02/14/19 14:56:36}
02/14/19 14:56:39 Number of idle job procs: 3
02/14/19 14:56:39 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_virgo_status (285334.0.0) {02/14/19 14:56:39}
02/14/19 14:56:39 Number of idle job procs: 2
02/14/19 14:56:39 Reassigning the id of job test_20190214_narnaud_9_omicronscanhoftV1 from (285347.0.0) to (285347.0.0)
02/14/19 14:56:39 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_omicronscanhoftV1 (285347.0.0) {02/14/19 14:56:39}
02/14/19 14:56:39 Number of idle job procs: 3
02/14/19 14:56:39 Reassigning the id of job test_20190214_narnaud_9_omicronscanhoftH1 from (285348.0.0) to (285348.0.0)
02/14/19 14:56:39 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_omicronscanhoftH1 (285348.0.0) {02/14/19 14:56:39}
02/14/19 14:56:39 Number of idle job procs: 4
02/14/19 14:56:39 Reassigning the id of job test_20190214_narnaud_9_omicronscanhoftL1 from (285349.0.0) to (285349.0.0)
02/14/19 14:56:39 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_omicronscanhoftL1 (285349.0.0) {02/14/19 14:56:39}
02/14/19 14:56:39 Number of idle job procs: 5
02/14/19 14:56:39 Reassigning the id of job test_20190214_narnaud_9_omicronscanfull2048 from (285350.0.0) to (285350.0.0)
02/14/19 14:56:39 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_omicronscanfull2048 (285350.0.0) {02/14/19 14:56:39}
02/14/19 14:56:39 Number of idle job procs: 6
02/14/19 14:56:39 Reassigning the id of job test_20190214_narnaud_9_omicronscanfull512 from (285351.0.0) to (285351.0.0)
02/14/19 14:56:39 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_omicronscanfull512 (285351.0.0) {02/14/19 14:56:39}
02/14/19 14:56:39 Number of idle job procs: 7
02/14/19 14:56:39 DAG status: 0 (DAG_STATUS_OK)
02/14/19 14:56:39 Of 38 nodes total:
02/14/19 14:56:39  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 14:56:39   ===     ===      ===     ===     ===        ===      ===
02/14/19 14:56:39     0       0       10       0       9         19        0
02/14/19 14:56:39 0 job proc(s) currently held
02/14/19 14:56:45 Submitting HTCondor Node test_20190214_narnaud_9_omicronplot job(s)...
02/14/19 14:56:45 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:56:45 Masking the events recorded in the DAGMAN workflow log
02/14/19 14:56:45 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 14:56:45 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_omicronplot -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_omicronplot -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" omicronplot.sub
02/14/19 14:56:45 From submit: Submitting job(s).
02/14/19 14:56:45 From submit: 1 job(s) submitted to cluster 285358.
02/14/19 14:56:45 	assigned HTCondor ID (285358.0.0)
02/14/19 14:56:45 Submitting HTCondor Node test_20190214_narnaud_9_query_ingv_public_data job(s)...
02/14/19 14:56:45 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:56:45 Masking the events recorded in the DAGMAN workflow log
02/14/19 14:56:45 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 14:56:45 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_query_ingv_public_data -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_query_ingv_public_data -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" query_ingv_public_data.sub
02/14/19 14:56:45 From submit: Submitting job(s).
02/14/19 14:56:45 From submit: 1 job(s) submitted to cluster 285359.
02/14/19 14:56:45 	assigned HTCondor ID (285359.0.0)
02/14/19 14:56:45 Submitting HTCondor Node test_20190214_narnaud_9_scan_logfiles job(s)...
02/14/19 14:56:45 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:56:45 Masking the events recorded in the DAGMAN workflow log
02/14/19 14:56:45 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 14:56:45 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_scan_logfiles -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_scan_logfiles -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" scan_logfiles.sub
02/14/19 14:56:45 From submit: Submitting job(s).
02/14/19 14:56:45 From submit: 1 job(s) submitted to cluster 285360.
02/14/19 14:56:45 	assigned HTCondor ID (285360.0.0)
02/14/19 14:56:45 Submitting HTCondor Node test_20190214_narnaud_9_decode_DMS_snapshots job(s)...
02/14/19 14:56:45 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:56:45 Masking the events recorded in the DAGMAN workflow log
02/14/19 14:56:45 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 14:56:45 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_decode_DMS_snapshots -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_decode_DMS_snapshots -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" decode_DMS_snapshots.sub
02/14/19 14:56:45 From submit: Submitting job(s).
02/14/19 14:56:45 From submit: 1 job(s) submitted to cluster 285361.
02/14/19 14:56:45 	assigned HTCondor ID (285361.0.0)
02/14/19 14:56:45 Submitting HTCondor Node test_20190214_narnaud_9_upv job(s)...
02/14/19 14:56:45 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:56:45 Masking the events recorded in the DAGMAN workflow log
02/14/19 14:56:45 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 14:56:45 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_upv -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_upv -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" upv.sub
02/14/19 14:56:45 From submit: Submitting job(s).
02/14/19 14:56:45 From submit: 1 job(s) submitted to cluster 285362.
02/14/19 14:56:45 	assigned HTCondor ID (285362.0.0)
02/14/19 14:56:45 Just submitted 5 jobs this cycle...
02/14/19 14:56:45 Currently monitoring 1 HTCondor log file(s)
02/14/19 14:56:45 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_gps_numerology (285332.0.0) {02/14/19 14:56:40}
02/14/19 14:56:45 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_gps_numerology (285332.0.0) {02/14/19 14:56:40}
02/14/19 14:56:45 Number of idle job procs: 7
02/14/19 14:56:45 Node test_20190214_narnaud_9_gps_numerology job proc (285332.0.0) completed successfully.
02/14/19 14:56:45 Node test_20190214_narnaud_9_gps_numerology job completed
02/14/19 14:56:45 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_dqprint_brmsmon (285335.0.0) {02/14/19 14:56:40}
02/14/19 14:56:45 Number of idle job procs: 6
02/14/19 14:56:45 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_virgo_noise (285333.0.0) {02/14/19 14:56:41}
02/14/19 14:56:45 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_virgo_noise (285333.0.0) {02/14/19 14:56:41}
02/14/19 14:56:45 Number of idle job procs: 6
02/14/19 14:56:45 Node test_20190214_narnaud_9_virgo_noise job proc (285333.0.0) completed successfully.
02/14/19 14:56:45 Node test_20190214_narnaud_9_virgo_noise job completed
02/14/19 14:56:45 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_dqprint_dqflags (285336.0.0) {02/14/19 14:56:42}
02/14/19 14:56:45 Number of idle job procs: 5
02/14/19 14:56:45 Reassigning the id of job test_20190214_narnaud_9_omicronplot from (285358.0.0) to (285358.0.0)
02/14/19 14:56:45 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_omicronplot (285358.0.0) {02/14/19 14:56:45}
02/14/19 14:56:45 Number of idle job procs: 6
02/14/19 14:56:45 Reassigning the id of job test_20190214_narnaud_9_query_ingv_public_data from (285359.0.0) to (285359.0.0)
02/14/19 14:56:45 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_query_ingv_public_data (285359.0.0) {02/14/19 14:56:45}
02/14/19 14:56:45 Number of idle job procs: 7
02/14/19 14:56:45 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_dqprint_dqflags (285336.0.0) {02/14/19 14:56:45}
02/14/19 14:56:45 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_dqprint_dqflags (285336.0.0) {02/14/19 14:56:45}
02/14/19 14:56:45 Number of idle job procs: 7
02/14/19 14:56:45 Node test_20190214_narnaud_9_dqprint_dqflags job proc (285336.0.0) completed successfully.
02/14/19 14:56:45 Node test_20190214_narnaud_9_dqprint_dqflags job completed
02/14/19 14:56:45 Reassigning the id of job test_20190214_narnaud_9_scan_logfiles from (285360.0.0) to (285360.0.0)
02/14/19 14:56:45 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_scan_logfiles (285360.0.0) {02/14/19 14:56:45}
02/14/19 14:56:45 Number of idle job procs: 8
02/14/19 14:56:45 Reassigning the id of job test_20190214_narnaud_9_decode_DMS_snapshots from (285361.0.0) to (285361.0.0)
02/14/19 14:56:45 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_decode_DMS_snapshots (285361.0.0) {02/14/19 14:56:45}
02/14/19 14:56:45 Number of idle job procs: 9
02/14/19 14:56:45 Reassigning the id of job test_20190214_narnaud_9_upv from (285362.0.0) to (285362.0.0)
02/14/19 14:56:45 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_upv (285362.0.0) {02/14/19 14:56:45}
02/14/19 14:56:45 Number of idle job procs: 10
02/14/19 14:56:45 DAG status: 0 (DAG_STATUS_OK)
02/14/19 14:56:45 Of 38 nodes total:
02/14/19 14:56:45  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 14:56:45   ===     ===      ===     ===     ===        ===      ===
02/14/19 14:56:45     3       0       12       0       6         17        0
02/14/19 14:56:45 0 job proc(s) currently held
02/14/19 14:56:50 Submitting HTCondor Node test_20190214_narnaud_9_bruco job(s)...
02/14/19 14:56:50 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:56:50 Masking the events recorded in the DAGMAN workflow log
02/14/19 14:56:50 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 14:56:50 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_bruco -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_bruco -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" bruco.sub
02/14/19 14:56:50 From submit: Submitting job(s).
02/14/19 14:56:50 From submit: 1 job(s) submitted to cluster 285363.
02/14/19 14:56:50 	assigned HTCondor ID (285363.0.0)
02/14/19 14:56:50 Submitting HTCondor Node test_20190214_narnaud_9_data_ref_comparison_INJ job(s)...
02/14/19 14:56:50 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:56:50 Masking the events recorded in the DAGMAN workflow log
02/14/19 14:56:50 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 14:56:50 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_data_ref_comparison_INJ -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_data_ref_comparison_INJ -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" data_ref_comparison_INJ.sub
02/14/19 14:56:50 From submit: Submitting job(s).
02/14/19 14:56:50 From submit: 1 job(s) submitted to cluster 285364.
02/14/19 14:56:50 	assigned HTCondor ID (285364.0.0)
02/14/19 14:56:50 Submitting HTCondor Node test_20190214_narnaud_9_data_ref_comparison_ISC job(s)...
02/14/19 14:56:50 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:56:50 Masking the events recorded in the DAGMAN workflow log
02/14/19 14:56:50 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 14:56:50 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_data_ref_comparison_ISC -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_data_ref_comparison_ISC -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" data_ref_comparison_ISC.sub
02/14/19 14:56:50 From submit: Submitting job(s).
02/14/19 14:56:50 From submit: 1 job(s) submitted to cluster 285365.
02/14/19 14:56:50 	assigned HTCondor ID (285365.0.0)
02/14/19 14:56:50 Submitting HTCondor Node test_20190214_narnaud_9_generate_dqr_json job(s)...
02/14/19 14:56:50 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:56:50 Masking the events recorded in the DAGMAN workflow log
02/14/19 14:56:50 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 14:56:50 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_generate_dqr_json -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_generate_dqr_json -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"" generate_dqr_json.sub
02/14/19 14:56:50 From submit: Submitting job(s).
02/14/19 14:56:50 From submit: 1 job(s) submitted to cluster 285366.
02/14/19 14:56:50 	assigned HTCondor ID (285366.0.0)
02/14/19 14:56:50 Submitting HTCondor Node test_20190214_narnaud_9_virgo_noise_json job(s)...
02/14/19 14:56:50 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:56:50 Masking the events recorded in the DAGMAN workflow log
02/14/19 14:56:50 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 14:56:50 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_virgo_noise_json -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_virgo_noise_json -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190214_narnaud_9_virgo_noise" virgo_noise_json.sub
02/14/19 14:56:50 From submit: Submitting job(s).
02/14/19 14:56:50 From submit: 1 job(s) submitted to cluster 285367.
02/14/19 14:56:50 	assigned HTCondor ID (285367.0.0)
02/14/19 14:56:50 Just submitted 5 jobs this cycle...
02/14/19 14:56:50 Currently monitoring 1 HTCondor log file(s)
02/14/19 14:56:50 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_dqprint_brmsmon (285335.0.0) {02/14/19 14:56:46}
02/14/19 14:56:50 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_dqprint_brmsmon (285335.0.0) {02/14/19 14:56:46}
02/14/19 14:56:50 Number of idle job procs: 10
02/14/19 14:56:50 Node test_20190214_narnaud_9_dqprint_brmsmon job proc (285335.0.0) completed successfully.
02/14/19 14:56:50 Node test_20190214_narnaud_9_dqprint_brmsmon job completed
02/14/19 14:56:50 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_virgo_status (285334.0.0) {02/14/19 14:56:48}
02/14/19 14:56:50 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_query_ingv_public_data (285359.0.0) {02/14/19 14:56:49}
02/14/19 14:56:50 Number of idle job procs: 9
02/14/19 14:56:50 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_omicronscanfull2048 (285350.0.0) {02/14/19 14:56:49}
02/14/19 14:56:50 Number of idle job procs: 8
02/14/19 14:56:50 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_omicronscanfull512 (285351.0.0) {02/14/19 14:56:49}
02/14/19 14:56:50 Number of idle job procs: 7
02/14/19 14:56:50 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_omicronscanhoftH1 (285348.0.0) {02/14/19 14:56:49}
02/14/19 14:56:50 Number of idle job procs: 6
02/14/19 14:56:50 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_omicronscanhoftL1 (285349.0.0) {02/14/19 14:56:49}
02/14/19 14:56:50 Number of idle job procs: 5
02/14/19 14:56:50 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_omicronscanhoftV1 (285347.0.0) {02/14/19 14:56:50}
02/14/19 14:56:50 Number of idle job procs: 4
02/14/19 14:56:50 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_decode_DMS_snapshots (285361.0.0) {02/14/19 14:56:50}
02/14/19 14:56:50 Number of idle job procs: 3
02/14/19 14:56:50 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_upv (285362.0.0) {02/14/19 14:56:50}
02/14/19 14:56:50 Number of idle job procs: 2
02/14/19 14:56:50 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_omicronplot (285358.0.0) {02/14/19 14:56:50}
02/14/19 14:56:50 Number of idle job procs: 1
02/14/19 14:56:50 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_scan_logfiles (285360.0.0) {02/14/19 14:56:50}
02/14/19 14:56:50 Number of idle job procs: 0
02/14/19 14:56:50 Reassigning the id of job test_20190214_narnaud_9_bruco from (285363.0.0) to (285363.0.0)
02/14/19 14:56:50 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_bruco (285363.0.0) {02/14/19 14:56:50}
02/14/19 14:56:50 Number of idle job procs: 1
02/14/19 14:56:50 Reassigning the id of job test_20190214_narnaud_9_data_ref_comparison_INJ from (285364.0.0) to (285364.0.0)
02/14/19 14:56:50 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_data_ref_comparison_INJ (285364.0.0) {02/14/19 14:56:50}
02/14/19 14:56:50 Number of idle job procs: 2
02/14/19 14:56:50 Reassigning the id of job test_20190214_narnaud_9_data_ref_comparison_ISC from (285365.0.0) to (285365.0.0)
02/14/19 14:56:50 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_data_ref_comparison_ISC (285365.0.0) {02/14/19 14:56:50}
02/14/19 14:56:50 Number of idle job procs: 3
02/14/19 14:56:50 Reassigning the id of job test_20190214_narnaud_9_generate_dqr_json from (285366.0.0) to (285366.0.0)
02/14/19 14:56:50 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_generate_dqr_json (285366.0.0) {02/14/19 14:56:50}
02/14/19 14:56:50 Number of idle job procs: 4
02/14/19 14:56:50 Reassigning the id of job test_20190214_narnaud_9_virgo_noise_json from (285367.0.0) to (285367.0.0)
02/14/19 14:56:50 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_virgo_noise_json (285367.0.0) {02/14/19 14:56:50}
02/14/19 14:56:50 Number of idle job procs: 5
02/14/19 14:56:50 DAG status: 0 (DAG_STATUS_OK)
02/14/19 14:56:50 Of 38 nodes total:
02/14/19 14:56:50  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 14:56:50   ===     ===      ===     ===     ===        ===      ===
02/14/19 14:56:50     4       0       16       0       2         16        0
02/14/19 14:56:50 0 job proc(s) currently held
02/14/19 14:56:55 Submitting HTCondor Node test_20190214_narnaud_9_dqprint_dqflags_json job(s)...
02/14/19 14:56:55 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:56:55 Masking the events recorded in the DAGMAN workflow log
02/14/19 14:56:55 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 14:56:55 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_dqprint_dqflags_json -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_dqprint_dqflags_json -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190214_narnaud_9_dqprint_dqflags" dqprint_dqflags_json.sub
02/14/19 14:56:55 From submit: Submitting job(s).
02/14/19 14:56:55 From submit: 1 job(s) submitted to cluster 285369.
02/14/19 14:56:55 	assigned HTCondor ID (285369.0.0)
02/14/19 14:56:55 Submitting HTCondor Node test_20190214_narnaud_9_dqprint_brmsmon_json job(s)...
02/14/19 14:56:55 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:56:55 Masking the events recorded in the DAGMAN workflow log
02/14/19 14:56:55 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 14:56:55 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_dqprint_brmsmon_json -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_dqprint_brmsmon_json -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190214_narnaud_9_dqprint_brmsmon" dqprint_brmsmon_json.sub
02/14/19 14:56:55 From submit: Submitting job(s).
02/14/19 14:56:55 From submit: 1 job(s) submitted to cluster 285371.
02/14/19 14:56:55 	assigned HTCondor ID (285371.0.0)
02/14/19 14:56:55 Just submitted 2 jobs this cycle...
02/14/19 14:56:55 Currently monitoring 1 HTCondor log file(s)
02/14/19 14:56:55 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_bruco (285363.0.0) {02/14/19 14:56:50}
02/14/19 14:56:55 Number of idle job procs: 4
02/14/19 14:56:55 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_bruco (285363.0.0) {02/14/19 14:56:50}
02/14/19 14:56:55 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_bruco (285363.0.0) {02/14/19 14:56:50}
02/14/19 14:56:55 Number of idle job procs: 4
02/14/19 14:56:55 Node test_20190214_narnaud_9_bruco job proc (285363.0.0) completed successfully.
02/14/19 14:56:55 Node test_20190214_narnaud_9_bruco job completed
02/14/19 14:56:55 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_generate_dqr_json (285366.0.0) {02/14/19 14:56:50}
02/14/19 14:56:55 Number of idle job procs: 3
02/14/19 14:56:55 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_data_ref_comparison_INJ (285364.0.0) {02/14/19 14:56:51}
02/14/19 14:56:55 Number of idle job procs: 2
02/14/19 14:56:55 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_data_ref_comparison_ISC (285365.0.0) {02/14/19 14:56:51}
02/14/19 14:56:55 Number of idle job procs: 1
02/14/19 14:56:55 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_data_ref_comparison_ISC (285365.0.0) {02/14/19 14:56:51}
02/14/19 14:56:55 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_data_ref_comparison_ISC (285365.0.0) {02/14/19 14:56:51}
02/14/19 14:56:55 Number of idle job procs: 1
02/14/19 14:56:55 Node test_20190214_narnaud_9_data_ref_comparison_ISC job proc (285365.0.0) completed successfully.
02/14/19 14:56:55 Node test_20190214_narnaud_9_data_ref_comparison_ISC job completed
02/14/19 14:56:55 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_virgo_noise_json (285367.0.0) {02/14/19 14:56:53}
02/14/19 14:56:55 Number of idle job procs: 0
02/14/19 14:56:55 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_virgo_noise_json (285367.0.0) {02/14/19 14:56:53}
02/14/19 14:56:55 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_virgo_noise_json (285367.0.0) {02/14/19 14:56:53}
02/14/19 14:56:55 Number of idle job procs: 0
02/14/19 14:56:55 Node test_20190214_narnaud_9_virgo_noise_json job proc (285367.0.0) completed successfully.
02/14/19 14:56:55 Node test_20190214_narnaud_9_virgo_noise_json job completed
02/14/19 14:56:55 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_generate_dqr_json (285366.0.0) {02/14/19 14:56:53}
02/14/19 14:56:55 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_generate_dqr_json (285366.0.0) {02/14/19 14:56:53}
02/14/19 14:56:55 Number of idle job procs: 0
02/14/19 14:56:55 Node test_20190214_narnaud_9_generate_dqr_json job proc (285366.0.0) completed successfully.
02/14/19 14:56:55 Node test_20190214_narnaud_9_generate_dqr_json job completed
02/14/19 14:56:55 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_data_ref_comparison_INJ (285364.0.0) {02/14/19 14:56:53}
02/14/19 14:56:55 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_data_ref_comparison_INJ (285364.0.0) {02/14/19 14:56:54}
02/14/19 14:56:55 Number of idle job procs: 0
02/14/19 14:56:55 Node test_20190214_narnaud_9_data_ref_comparison_INJ job proc (285364.0.0) completed successfully.
02/14/19 14:56:55 Node test_20190214_narnaud_9_data_ref_comparison_INJ job completed
02/14/19 14:56:55 Reassigning the id of job test_20190214_narnaud_9_dqprint_dqflags_json from (285369.0.0) to (285369.0.0)
02/14/19 14:56:55 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_dqprint_dqflags_json (285369.0.0) {02/14/19 14:56:55}
02/14/19 14:56:55 Number of idle job procs: 1
02/14/19 14:56:55 Reassigning the id of job test_20190214_narnaud_9_dqprint_brmsmon_json from (285371.0.0) to (285371.0.0)
02/14/19 14:56:55 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_dqprint_brmsmon_json (285371.0.0) {02/14/19 14:56:55}
02/14/19 14:56:55 Number of idle job procs: 2
02/14/19 14:56:55 DAG status: 0 (DAG_STATUS_OK)
02/14/19 14:56:55 Of 38 nodes total:
02/14/19 14:56:55  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 14:56:55   ===     ===      ===     ===     ===        ===      ===
02/14/19 14:56:55     9       0       13       0       6         10        0
02/14/19 14:56:55 0 job proc(s) currently held
02/14/19 14:57:01 Submitting HTCondor Node test_20190214_narnaud_9_bruco_std job(s)...
02/14/19 14:57:01 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:57:01 Masking the events recorded in the DAGMAN workflow log
02/14/19 14:57:01 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 14:57:01 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_bruco_std -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_bruco_std -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190214_narnaud_9_bruco" bruco_std.sub
02/14/19 14:57:01 From submit: Submitting job(s).
02/14/19 14:57:01 From submit: 1 job(s) submitted to cluster 285381.
02/14/19 14:57:01 	assigned HTCondor ID (285381.0.0)
02/14/19 14:57:01 Submitting HTCondor Node test_20190214_narnaud_9_bruco_std-prev job(s)...
02/14/19 14:57:01 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:57:01 Masking the events recorded in the DAGMAN workflow log
02/14/19 14:57:01 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 14:57:01 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_bruco_std-prev -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_bruco_std-prev -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190214_narnaud_9_bruco" bruco_std-prev.sub
02/14/19 14:57:01 From submit: Submitting job(s).
02/14/19 14:57:01 From submit: 1 job(s) submitted to cluster 285382.
02/14/19 14:57:01 	assigned HTCondor ID (285382.0.0)
02/14/19 14:57:01 Submitting HTCondor Node test_20190214_narnaud_9_bruco_env job(s)...
02/14/19 14:57:01 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:57:01 Masking the events recorded in the DAGMAN workflow log
02/14/19 14:57:01 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 14:57:01 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_bruco_env -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_bruco_env -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190214_narnaud_9_bruco" bruco_env.sub
02/14/19 14:57:01 From submit: Submitting job(s).
02/14/19 14:57:01 From submit: 1 job(s) submitted to cluster 285383.
02/14/19 14:57:01 	assigned HTCondor ID (285383.0.0)
02/14/19 14:57:01 Submitting HTCondor Node test_20190214_narnaud_9_bruco_env-prev job(s)...
02/14/19 14:57:01 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:57:01 Masking the events recorded in the DAGMAN workflow log
02/14/19 14:57:01 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 14:57:01 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_bruco_env-prev -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_bruco_env-prev -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190214_narnaud_9_bruco" bruco_env-prev.sub
02/14/19 14:57:02 From submit: Submitting job(s).
02/14/19 14:57:02 From submit: 1 job(s) submitted to cluster 285385.
02/14/19 14:57:02 	assigned HTCondor ID (285385.0.0)
02/14/19 14:57:02 Submitting HTCondor Node test_20190214_narnaud_9_data_ref_comparison_ISC_comparison job(s)...
02/14/19 14:57:02 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:57:02 Masking the events recorded in the DAGMAN workflow log
02/14/19 14:57:02 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 14:57:02 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_data_ref_comparison_ISC_comparison -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_data_ref_comparison_ISC_comparison -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190214_narnaud_9_data_ref_comparison_ISC" data_ref_comparison_ISC_comparison.sub
02/14/19 14:57:02 From submit: Submitting job(s).
02/14/19 14:57:02 From submit: 1 job(s) submitted to cluster 285387.
02/14/19 14:57:03 	assigned HTCondor ID (285387.0.0)
02/14/19 14:57:03 Just submitted 5 jobs this cycle...
02/14/19 14:57:03 Currently monitoring 1 HTCondor log file(s)
02/14/19 14:57:03 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_dqprint_dqflags_json (285369.0.0) {02/14/19 14:56:56}
02/14/19 14:57:03 Number of idle job procs: 1
02/14/19 14:57:03 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_omicronplot (285358.0.0) {02/14/19 14:56:56}
02/14/19 14:57:03 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_omicronplot (285358.0.0) {02/14/19 14:56:56}
02/14/19 14:57:03 Number of idle job procs: 1
02/14/19 14:57:03 Node test_20190214_narnaud_9_omicronplot job proc (285358.0.0) completed successfully.
02/14/19 14:57:03 Node test_20190214_narnaud_9_omicronplot job completed
02/14/19 14:57:03 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_dqprint_brmsmon_json (285371.0.0) {02/14/19 14:56:56}
02/14/19 14:57:03 Number of idle job procs: 0
02/14/19 14:57:03 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_upv (285362.0.0) {02/14/19 14:56:56}
02/14/19 14:57:03 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_upv (285362.0.0) {02/14/19 14:56:56}
02/14/19 14:57:03 Number of idle job procs: 0
02/14/19 14:57:03 Node test_20190214_narnaud_9_upv job proc (285362.0.0) completed successfully.
02/14/19 14:57:03 Node test_20190214_narnaud_9_upv job completed
02/14/19 14:57:03 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_dqprint_dqflags_json (285369.0.0) {02/14/19 14:56:56}
02/14/19 14:57:03 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_dqprint_dqflags_json (285369.0.0) {02/14/19 14:56:56}
02/14/19 14:57:03 Number of idle job procs: 0
02/14/19 14:57:03 Node test_20190214_narnaud_9_dqprint_dqflags_json job proc (285369.0.0) completed successfully.
02/14/19 14:57:03 Node test_20190214_narnaud_9_dqprint_dqflags_json job completed
02/14/19 14:57:03 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_dqprint_brmsmon_json (285371.0.0) {02/14/19 14:56:56}
02/14/19 14:57:03 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_dqprint_brmsmon_json (285371.0.0) {02/14/19 14:56:56}
02/14/19 14:57:03 Number of idle job procs: 0
02/14/19 14:57:03 Node test_20190214_narnaud_9_dqprint_brmsmon_json job proc (285371.0.0) completed successfully.
02/14/19 14:57:03 Node test_20190214_narnaud_9_dqprint_brmsmon_json job completed
02/14/19 14:57:03 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_query_ingv_public_data (285359.0.0) {02/14/19 14:56:57}
02/14/19 14:57:03 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_query_ingv_public_data (285359.0.0) {02/14/19 14:56:57}
02/14/19 14:57:03 Number of idle job procs: 0
02/14/19 14:57:03 Node test_20190214_narnaud_9_query_ingv_public_data job proc (285359.0.0) completed successfully.
02/14/19 14:57:03 Node test_20190214_narnaud_9_query_ingv_public_data job completed
02/14/19 14:57:03 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_omicronscanhoftV1 (285347.0.0) {02/14/19 14:56:58}
02/14/19 14:57:03 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_decode_DMS_snapshots (285361.0.0) {02/14/19 14:56:58}
02/14/19 14:57:03 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_scan_logfiles (285360.0.0) {02/14/19 14:56:58}
02/14/19 14:57:03 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_omicronscanhoftH1 (285348.0.0) {02/14/19 14:56:58}
02/14/19 14:57:03 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_omicronscanfull512 (285351.0.0) {02/14/19 14:56:58}
02/14/19 14:57:03 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_omicronscanfull2048 (285350.0.0) {02/14/19 14:56:58}
02/14/19 14:57:03 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_omicronscanhoftL1 (285349.0.0) {02/14/19 14:56:58}
02/14/19 14:57:03 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_decode_DMS_snapshots (285361.0.0) {02/14/19 14:57:00}
02/14/19 14:57:03 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_decode_DMS_snapshots (285361.0.0) {02/14/19 14:57:00}
02/14/19 14:57:03 Number of idle job procs: 0
02/14/19 14:57:03 Node test_20190214_narnaud_9_decode_DMS_snapshots job proc (285361.0.0) completed successfully.
02/14/19 14:57:03 Node test_20190214_narnaud_9_decode_DMS_snapshots job completed
02/14/19 14:57:03 Reassigning the id of job test_20190214_narnaud_9_bruco_std from (285381.0.0) to (285381.0.0)
02/14/19 14:57:03 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_bruco_std (285381.0.0) {02/14/19 14:57:01}
02/14/19 14:57:03 Number of idle job procs: 1
02/14/19 14:57:03 Reassigning the id of job test_20190214_narnaud_9_bruco_std-prev from (285382.0.0) to (285382.0.0)
02/14/19 14:57:03 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_bruco_std-prev (285382.0.0) {02/14/19 14:57:01}
02/14/19 14:57:03 Number of idle job procs: 2
02/14/19 14:57:03 Reassigning the id of job test_20190214_narnaud_9_bruco_env from (285383.0.0) to (285383.0.0)
02/14/19 14:57:03 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_bruco_env (285383.0.0) {02/14/19 14:57:01}
02/14/19 14:57:03 Number of idle job procs: 3
02/14/19 14:57:03 Reassigning the id of job test_20190214_narnaud_9_bruco_env-prev from (285385.0.0) to (285385.0.0)
02/14/19 14:57:03 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_bruco_env-prev (285385.0.0) {02/14/19 14:57:02}
02/14/19 14:57:03 Number of idle job procs: 4
02/14/19 14:57:03 Reassigning the id of job test_20190214_narnaud_9_data_ref_comparison_ISC_comparison from (285387.0.0) to (285387.0.0)
02/14/19 14:57:03 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_data_ref_comparison_ISC_comparison (285387.0.0) {02/14/19 14:57:02}
02/14/19 14:57:03 Number of idle job procs: 5
02/14/19 14:57:03 DAG status: 0 (DAG_STATUS_OK)
02/14/19 14:57:03 Of 38 nodes total:
02/14/19 14:57:03  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 14:57:03   ===     ===      ===     ===     ===        ===      ===
02/14/19 14:57:03    15       0       12       0       3          8        0
02/14/19 14:57:03 0 job proc(s) currently held
02/14/19 14:57:08 Submitting HTCondor Node test_20190214_narnaud_9_data_ref_comparison_INJ_comparison job(s)...
02/14/19 14:57:08 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:57:08 Masking the events recorded in the DAGMAN workflow log
02/14/19 14:57:08 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 14:57:08 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_data_ref_comparison_INJ_comparison -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_data_ref_comparison_INJ_comparison -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190214_narnaud_9_data_ref_comparison_INJ" data_ref_comparison_INJ_comparison.sub
02/14/19 14:57:08 From submit: Submitting job(s).
02/14/19 14:57:08 From submit: 1 job(s) submitted to cluster 285393.
02/14/19 14:57:08 	assigned HTCondor ID (285393.0.0)
02/14/19 14:57:08 Submitting HTCondor Node test_20190214_narnaud_9_omicronplot_exe job(s)...
02/14/19 14:57:08 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:57:08 Masking the events recorded in the DAGMAN workflow log
02/14/19 14:57:08 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 14:57:08 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_omicronplot_exe -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_omicronplot_exe -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190214_narnaud_9_omicronplot" omicronplot_exe.sub
02/14/19 14:57:08 From submit: Submitting job(s).
02/14/19 14:57:08 From submit: 1 job(s) submitted to cluster 285394.
02/14/19 14:57:08 	assigned HTCondor ID (285394.0.0)
02/14/19 14:57:08 Submitting HTCondor Node test_20190214_narnaud_9_upv_exe job(s)...
02/14/19 14:57:08 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:57:08 Masking the events recorded in the DAGMAN workflow log
02/14/19 14:57:08 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 14:57:08 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_upv_exe -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_upv_exe -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '0 -a FAILED_COUNT' '=' '0 -a +KeepClaimIdle' '=' '20 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190214_narnaud_9_upv" upv_exe.sub
02/14/19 14:57:08 From submit: Submitting job(s).
02/14/19 14:57:08 From submit: 1 job(s) submitted to cluster 285395.
02/14/19 14:57:08 	assigned HTCondor ID (285395.0.0)
02/14/19 14:57:08 Just submitted 3 jobs this cycle...
02/14/19 14:57:08 Currently monitoring 1 HTCondor log file(s)
02/14/19 14:57:08 Reassigning the id of job test_20190214_narnaud_9_data_ref_comparison_INJ_comparison from (285393.0.0) to (285393.0.0)
02/14/19 14:57:08 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_data_ref_comparison_INJ_comparison (285393.0.0) {02/14/19 14:57:08}
02/14/19 14:57:08 Number of idle job procs: 6
02/14/19 14:57:08 Reassigning the id of job test_20190214_narnaud_9_omicronplot_exe from (285394.0.0) to (285394.0.0)
02/14/19 14:57:08 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_omicronplot_exe (285394.0.0) {02/14/19 14:57:08}
02/14/19 14:57:08 Number of idle job procs: 7
02/14/19 14:57:08 Reassigning the id of job test_20190214_narnaud_9_upv_exe from (285395.0.0) to (285395.0.0)
02/14/19 14:57:08 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_upv_exe (285395.0.0) {02/14/19 14:57:08}
02/14/19 14:57:08 Number of idle job procs: 8
02/14/19 14:57:08 DAG status: 0 (DAG_STATUS_OK)
02/14/19 14:57:08 Of 38 nodes total:
02/14/19 14:57:08  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 14:57:08   ===     ===      ===     ===     ===        ===      ===
02/14/19 14:57:08    15       0       15       0       0          8        0
02/14/19 14:57:08 0 job proc(s) currently held
02/14/19 14:57:13 Currently monitoring 1 HTCondor log file(s)
02/14/19 14:57:13 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_data_ref_comparison_ISC_comparison (285387.0.0) {02/14/19 14:57:10}
02/14/19 14:57:13 Number of idle job procs: 7
02/14/19 14:57:13 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_upv_exe (285395.0.0) {02/14/19 14:57:10}
02/14/19 14:57:13 Number of idle job procs: 6
02/14/19 14:57:13 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_omicronplot_exe (285394.0.0) {02/14/19 14:57:10}
02/14/19 14:57:13 Number of idle job procs: 5
02/14/19 14:57:13 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_bruco_env (285383.0.0) {02/14/19 14:57:10}
02/14/19 14:57:13 Number of idle job procs: 4
02/14/19 14:57:13 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_bruco_env-prev (285385.0.0) {02/14/19 14:57:10}
02/14/19 14:57:13 Number of idle job procs: 3
02/14/19 14:57:13 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_data_ref_comparison_INJ_comparison (285393.0.0) {02/14/19 14:57:10}
02/14/19 14:57:13 Number of idle job procs: 2
02/14/19 14:57:13 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_bruco_std (285381.0.0) {02/14/19 14:57:10}
02/14/19 14:57:13 Number of idle job procs: 1
02/14/19 14:57:13 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_bruco_std-prev (285382.0.0) {02/14/19 14:57:10}
02/14/19 14:57:13 Number of idle job procs: 0
02/14/19 14:57:18 Currently monitoring 1 HTCondor log file(s)
02/14/19 14:57:18 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_omicronplot_exe (285394.0.0) {02/14/19 14:57:15}
02/14/19 14:57:18 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_omicronplot_exe (285394.0.0) {02/14/19 14:57:15}
02/14/19 14:57:18 Number of idle job procs: 0
02/14/19 14:57:18 Node test_20190214_narnaud_9_omicronplot_exe job proc (285394.0.0) failed with status 2.
02/14/19 14:57:18 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_upv_exe (285395.0.0) {02/14/19 14:57:18}
02/14/19 14:57:18 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_data_ref_comparison_ISC_comparison (285387.0.0) {02/14/19 14:57:18}
02/14/19 14:57:18 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_bruco_env (285383.0.0) {02/14/19 14:57:18}
02/14/19 14:57:18 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_data_ref_comparison_INJ_comparison (285393.0.0) {02/14/19 14:57:18}
02/14/19 14:57:18 DAG status: 2 (DAG_STATUS_NODE_FAILED)
02/14/19 14:57:18 Of 38 nodes total:
02/14/19 14:57:18  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 14:57:18   ===     ===      ===     ===     ===        ===      ===
02/14/19 14:57:18    15       0       14       0       0          8        1
02/14/19 14:57:18 0 job proc(s) currently held
02/14/19 14:57:23 Currently monitoring 1 HTCondor log file(s)
02/14/19 14:57:23 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_bruco_env-prev (285385.0.0) {02/14/19 14:57:18}
02/14/19 14:57:23 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_bruco_std-prev (285382.0.0) {02/14/19 14:57:18}
02/14/19 14:57:23 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_bruco_std (285381.0.0) {02/14/19 14:57:18}
02/14/19 14:57:38 Currently monitoring 1 HTCondor log file(s)
02/14/19 14:57:38 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_virgo_status (285334.0.0) {02/14/19 14:57:37}
02/14/19 14:57:38 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_virgo_status (285334.0.0) {02/14/19 14:57:37}
02/14/19 14:57:38 Number of idle job procs: 0
02/14/19 14:57:38 Node test_20190214_narnaud_9_virgo_status job proc (285334.0.0) completed successfully.
02/14/19 14:57:38 Node test_20190214_narnaud_9_virgo_status job completed
02/14/19 14:57:38 DAG status: 2 (DAG_STATUS_NODE_FAILED)
02/14/19 14:57:38 Of 38 nodes total:
02/14/19 14:57:38  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 14:57:38   ===     ===      ===     ===     ===        ===      ===
02/14/19 14:57:38    16       0       13       0       0          8        1
02/14/19 14:57:38 0 job proc(s) currently held
02/14/19 14:58:08 Currently monitoring 1 HTCondor log file(s)
02/14/19 14:58:08 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_omicronscanhoftV1 (285347.0.0) {02/14/19 14:58:05}
02/14/19 14:58:08 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_omicronscanhoftV1 (285347.0.0) {02/14/19 14:58:05}
02/14/19 14:58:08 Number of idle job procs: 0
02/14/19 14:58:08 Node test_20190214_narnaud_9_omicronscanhoftV1 job proc (285347.0.0) completed successfully.
02/14/19 14:58:08 Node test_20190214_narnaud_9_omicronscanhoftV1 job completed
02/14/19 14:58:08 DAG status: 2 (DAG_STATUS_NODE_FAILED)
02/14/19 14:58:08 Of 38 nodes total:
02/14/19 14:58:08  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 14:58:08   ===     ===      ===     ===     ===        ===      ===
02/14/19 14:58:08    17       0       12       0       1          7        1
02/14/19 14:58:08 0 job proc(s) currently held
02/14/19 14:58:13 Submitting HTCondor Node test_20190214_narnaud_9_omicronscanhoftV1_json job(s)...
02/14/19 14:58:13 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 14:58:13 Masking the events recorded in the DAGMAN workflow log
02/14/19 14:58:13 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 14:58:13 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_omicronscanhoftV1_json -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_omicronscanhoftV1_json -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '2 -a FAILED_COUNT' '=' '1 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190214_narnaud_9_omicronscanhoftV1" omicronscanhoftV1_json.sub
02/14/19 14:58:13 From submit: Submitting job(s).
02/14/19 14:58:13 From submit: 1 job(s) submitted to cluster 285419.
02/14/19 14:58:13 	assigned HTCondor ID (285419.0.0)
02/14/19 14:58:13 Just submitted 1 job this cycle...
02/14/19 14:58:13 DAG status: 2 (DAG_STATUS_NODE_FAILED)
02/14/19 14:58:13 Of 38 nodes total:
02/14/19 14:58:13  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 14:58:13   ===     ===      ===     ===     ===        ===      ===
02/14/19 14:58:13    17       0       13       0       0          7        1
02/14/19 14:58:13 0 job proc(s) currently held
02/14/19 14:58:18 Currently monitoring 1 HTCondor log file(s)
02/14/19 14:58:18 Reassigning the id of job test_20190214_narnaud_9_omicronscanhoftV1_json from (285419.0.0) to (285419.0.0)
02/14/19 14:58:18 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_omicronscanhoftV1_json (285419.0.0) {02/14/19 14:58:13}
02/14/19 14:58:18 Number of idle job procs: 1
02/14/19 14:58:18 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_omicronscanhoftV1_json (285419.0.0) {02/14/19 14:58:13}
02/14/19 14:58:18 Number of idle job procs: 0
02/14/19 14:58:18 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_omicronscanhoftV1_json (285419.0.0) {02/14/19 14:58:15}
02/14/19 14:58:18 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_omicronscanhoftV1_json (285419.0.0) {02/14/19 14:58:15}
02/14/19 14:58:18 Number of idle job procs: 0
02/14/19 14:58:18 Node test_20190214_narnaud_9_omicronscanhoftV1_json job proc (285419.0.0) completed successfully.
02/14/19 14:58:18 Node test_20190214_narnaud_9_omicronscanhoftV1_json job completed
02/14/19 14:58:18 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_data_ref_comparison_INJ_comparison (285393.0.0) {02/14/19 14:58:18}
02/14/19 14:58:18 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_data_ref_comparison_INJ_comparison (285393.0.0) {02/14/19 14:58:18}
02/14/19 14:58:18 Number of idle job procs: 0
02/14/19 14:58:18 Node test_20190214_narnaud_9_data_ref_comparison_INJ_comparison job proc (285393.0.0) completed successfully.
02/14/19 14:58:18 Node test_20190214_narnaud_9_data_ref_comparison_INJ_comparison job completed
02/14/19 14:58:18 DAG status: 2 (DAG_STATUS_NODE_FAILED)
02/14/19 14:58:18 Of 38 nodes total:
02/14/19 14:58:18  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 14:58:18   ===     ===      ===     ===     ===        ===      ===
02/14/19 14:58:18    19       0       11       0       0          7        1
02/14/19 14:58:18 0 job proc(s) currently held
02/14/19 14:59:44 Currently monitoring 1 HTCondor log file(s)
02/14/19 14:59:44 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_data_ref_comparison_ISC_comparison (285387.0.0) {02/14/19 14:59:42}
02/14/19 14:59:44 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_data_ref_comparison_ISC_comparison (285387.0.0) {02/14/19 14:59:42}
02/14/19 14:59:44 Number of idle job procs: 0
02/14/19 14:59:44 Node test_20190214_narnaud_9_data_ref_comparison_ISC_comparison job proc (285387.0.0) completed successfully.
02/14/19 14:59:44 Node test_20190214_narnaud_9_data_ref_comparison_ISC_comparison job completed
02/14/19 14:59:44 DAG status: 2 (DAG_STATUS_NODE_FAILED)
02/14/19 14:59:44 Of 38 nodes total:
02/14/19 14:59:44  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 14:59:44   ===     ===      ===     ===     ===        ===      ===
02/14/19 14:59:44    20       0       10       0       0          7        1
02/14/19 14:59:44 0 job proc(s) currently held
02/14/19 15:01:24 Currently monitoring 1 HTCondor log file(s)
02/14/19 15:01:24 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_omicronscanhoftH1 (285348.0.0) {02/14/19 15:01:20}
02/14/19 15:01:24 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_omicronscanhoftH1 (285348.0.0) {02/14/19 15:01:20}
02/14/19 15:01:24 Number of idle job procs: 0
02/14/19 15:01:24 Node test_20190214_narnaud_9_omicronscanhoftH1 job proc (285348.0.0) completed successfully.
02/14/19 15:01:24 Node test_20190214_narnaud_9_omicronscanhoftH1 job completed
02/14/19 15:01:24 DAG status: 2 (DAG_STATUS_NODE_FAILED)
02/14/19 15:01:24 Of 38 nodes total:
02/14/19 15:01:24  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 15:01:24   ===     ===      ===     ===     ===        ===      ===
02/14/19 15:01:24    21       0        9       0       1          6        1
02/14/19 15:01:24 0 job proc(s) currently held
02/14/19 15:01:29 Submitting HTCondor Node test_20190214_narnaud_9_omicronscanhoftH1_json job(s)...
02/14/19 15:01:29 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 15:01:29 Masking the events recorded in the DAGMAN workflow log
02/14/19 15:01:29 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 15:01:29 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_omicronscanhoftH1_json -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_omicronscanhoftH1_json -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '2 -a FAILED_COUNT' '=' '1 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190214_narnaud_9_omicronscanhoftH1" omicronscanhoftH1_json.sub
02/14/19 15:01:29 From submit: Submitting job(s).
02/14/19 15:01:29 From submit: 1 job(s) submitted to cluster 285426.
02/14/19 15:01:29 	assigned HTCondor ID (285426.0.0)
02/14/19 15:01:29 Just submitted 1 job this cycle...
02/14/19 15:01:29 DAG status: 2 (DAG_STATUS_NODE_FAILED)
02/14/19 15:01:29 Of 38 nodes total:
02/14/19 15:01:29  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 15:01:29   ===     ===      ===     ===     ===        ===      ===
02/14/19 15:01:29    21       0       10       0       0          6        1
02/14/19 15:01:29 0 job proc(s) currently held
02/14/19 15:01:34 Currently monitoring 1 HTCondor log file(s)
02/14/19 15:01:34 Reassigning the id of job test_20190214_narnaud_9_omicronscanhoftH1_json from (285426.0.0) to (285426.0.0)
02/14/19 15:01:34 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_omicronscanhoftH1_json (285426.0.0) {02/14/19 15:01:29}
02/14/19 15:01:34 Number of idle job procs: 1
02/14/19 15:01:34 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_omicronscanhoftH1_json (285426.0.0) {02/14/19 15:01:29}
02/14/19 15:01:34 Number of idle job procs: 0
02/14/19 15:01:34 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_omicronscanhoftH1_json (285426.0.0) {02/14/19 15:01:30}
02/14/19 15:01:34 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_omicronscanhoftH1_json (285426.0.0) {02/14/19 15:01:30}
02/14/19 15:01:34 Number of idle job procs: 0
02/14/19 15:01:34 Node test_20190214_narnaud_9_omicronscanhoftH1_json job proc (285426.0.0) completed successfully.
02/14/19 15:01:34 Node test_20190214_narnaud_9_omicronscanhoftH1_json job completed
02/14/19 15:01:34 DAG status: 2 (DAG_STATUS_NODE_FAILED)
02/14/19 15:01:34 Of 38 nodes total:
02/14/19 15:01:34  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 15:01:34   ===     ===      ===     ===     ===        ===      ===
02/14/19 15:01:34    22       0        9       0       0          6        1
02/14/19 15:01:34 0 job proc(s) currently held
02/14/19 15:02:00 Currently monitoring 1 HTCondor log file(s)
02/14/19 15:02:00 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_scan_logfiles (285360.0.0) {02/14/19 15:01:58}
02/14/19 15:02:00 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_omicronscanfull2048 (285350.0.0) {02/14/19 15:01:59}
02/14/19 15:02:00 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_omicronscanhoftL1 (285349.0.0) {02/14/19 15:01:59}
02/14/19 15:02:00 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_omicronscanfull512 (285351.0.0) {02/14/19 15:01:59}
02/14/19 15:02:20 Currently monitoring 1 HTCondor log file(s)
02/14/19 15:02:20 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_upv_exe (285395.0.0) {02/14/19 15:02:18}
02/14/19 15:02:20 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_bruco_std-prev (285382.0.0) {02/14/19 15:02:18}
02/14/19 15:02:20 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_bruco_env-prev (285385.0.0) {02/14/19 15:02:18}
02/14/19 15:02:20 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_bruco_std (285381.0.0) {02/14/19 15:02:18}
02/14/19 15:02:20 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_bruco_env (285383.0.0) {02/14/19 15:02:18}
02/14/19 15:02:20 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_omicronscanhoftL1 (285349.0.0) {02/14/19 15:02:18}
02/14/19 15:02:20 Number of idle job procs: 0
02/14/19 15:02:20 Node test_20190214_narnaud_9_omicronscanhoftL1 job proc (285349.0.0) completed successfully.
02/14/19 15:02:20 Node test_20190214_narnaud_9_omicronscanhoftL1 job completed
02/14/19 15:02:20 DAG status: 2 (DAG_STATUS_NODE_FAILED)
02/14/19 15:02:20 Of 38 nodes total:
02/14/19 15:02:20  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 15:02:20   ===     ===      ===     ===     ===        ===      ===
02/14/19 15:02:20    23       0        8       0       1          5        1
02/14/19 15:02:20 0 job proc(s) currently held
02/14/19 15:02:25 Submitting HTCondor Node test_20190214_narnaud_9_omicronscanhoftL1_json job(s)...
02/14/19 15:02:25 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 15:02:25 Masking the events recorded in the DAGMAN workflow log
02/14/19 15:02:25 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 15:02:25 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_omicronscanhoftL1_json -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_omicronscanhoftL1_json -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '2 -a FAILED_COUNT' '=' '1 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190214_narnaud_9_omicronscanhoftL1" omicronscanhoftL1_json.sub
02/14/19 15:02:25 From submit: Submitting job(s).
02/14/19 15:02:25 From submit: 1 job(s) submitted to cluster 285431.
02/14/19 15:02:25 	assigned HTCondor ID (285431.0.0)
02/14/19 15:02:25 Just submitted 1 job this cycle...
02/14/19 15:02:25 DAG status: 2 (DAG_STATUS_NODE_FAILED)
02/14/19 15:02:25 Of 38 nodes total:
02/14/19 15:02:25  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 15:02:25   ===     ===      ===     ===     ===        ===      ===
02/14/19 15:02:25    23       0        9       0       0          5        1
02/14/19 15:02:25 0 job proc(s) currently held
02/14/19 15:02:30 Currently monitoring 1 HTCondor log file(s)
02/14/19 15:02:30 Reassigning the id of job test_20190214_narnaud_9_omicronscanhoftL1_json from (285431.0.0) to (285431.0.0)
02/14/19 15:02:30 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_omicronscanhoftL1_json (285431.0.0) {02/14/19 15:02:25}
02/14/19 15:02:30 Number of idle job procs: 1
02/14/19 15:02:30 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_omicronscanhoftL1_json (285431.0.0) {02/14/19 15:02:25}
02/14/19 15:02:30 Number of idle job procs: 0
02/14/19 15:02:30 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_omicronscanhoftL1_json (285431.0.0) {02/14/19 15:02:26}
02/14/19 15:02:30 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_omicronscanhoftL1_json (285431.0.0) {02/14/19 15:02:26}
02/14/19 15:02:30 Number of idle job procs: 0
02/14/19 15:02:30 Node test_20190214_narnaud_9_omicronscanhoftL1_json job proc (285431.0.0) completed successfully.
02/14/19 15:02:30 Node test_20190214_narnaud_9_omicronscanhoftL1_json job completed
02/14/19 15:02:30 DAG status: 2 (DAG_STATUS_NODE_FAILED)
02/14/19 15:02:30 Of 38 nodes total:
02/14/19 15:02:30  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 15:02:30   ===     ===      ===     ===     ===        ===      ===
02/14/19 15:02:30    24       0        8       0       0          5        1
02/14/19 15:02:30 0 job proc(s) currently held
02/14/19 15:04:06 Currently monitoring 1 HTCondor log file(s)
02/14/19 15:04:06 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_bruco_env-prev (285385.0.0) {02/14/19 15:04:02}
02/14/19 15:04:06 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_bruco_env-prev (285385.0.0) {02/14/19 15:04:02}
02/14/19 15:04:06 Number of idle job procs: 0
02/14/19 15:04:06 Node test_20190214_narnaud_9_bruco_env-prev job proc (285385.0.0) completed successfully.
02/14/19 15:04:06 Node test_20190214_narnaud_9_bruco_env-prev job completed
02/14/19 15:04:06 DAG status: 2 (DAG_STATUS_NODE_FAILED)
02/14/19 15:04:06 Of 38 nodes total:
02/14/19 15:04:06  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 15:04:06   ===     ===      ===     ===     ===        ===      ===
02/14/19 15:04:06    25       0        7       0       0          5        1
02/14/19 15:04:06 0 job proc(s) currently held
02/14/19 15:04:36 Currently monitoring 1 HTCondor log file(s)
02/14/19 15:04:36 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_upv_exe (285395.0.0) {02/14/19 15:04:33}
02/14/19 15:04:36 Number of idle job procs: 0
02/14/19 15:04:36 Node test_20190214_narnaud_9_upv_exe job proc (285395.0.0) completed successfully.
02/14/19 15:04:36 Node test_20190214_narnaud_9_upv_exe job completed
02/14/19 15:04:36 DAG status: 2 (DAG_STATUS_NODE_FAILED)
02/14/19 15:04:36 Of 38 nodes total:
02/14/19 15:04:36  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 15:04:36   ===     ===      ===     ===     ===        ===      ===
02/14/19 15:04:36    26       0        6       0       1          4        1
02/14/19 15:04:36 0 job proc(s) currently held
02/14/19 15:04:41 Submitting HTCondor Node test_20190214_narnaud_9_upv_json job(s)...
02/14/19 15:04:41 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 15:04:41 Masking the events recorded in the DAGMAN workflow log
02/14/19 15:04:41 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 15:04:41 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_upv_json -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_upv_json -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '2 -a FAILED_COUNT' '=' '1 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190214_narnaud_9_upv_exe" upv_json.sub
02/14/19 15:04:41 From submit: Submitting job(s).
02/14/19 15:04:41 From submit: 1 job(s) submitted to cluster 285433.
02/14/19 15:04:41 	assigned HTCondor ID (285433.0.0)
02/14/19 15:04:41 Just submitted 1 job this cycle...
02/14/19 15:04:41 DAG status: 2 (DAG_STATUS_NODE_FAILED)
02/14/19 15:04:41 Of 38 nodes total:
02/14/19 15:04:41  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 15:04:41   ===     ===      ===     ===     ===        ===      ===
02/14/19 15:04:41    26       0        7       0       0          4        1
02/14/19 15:04:41 0 job proc(s) currently held
02/14/19 15:04:46 Currently monitoring 1 HTCondor log file(s)
02/14/19 15:04:46 Reassigning the id of job test_20190214_narnaud_9_upv_json from (285433.0.0) to (285433.0.0)
02/14/19 15:04:46 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_upv_json (285433.0.0) {02/14/19 15:04:41}
02/14/19 15:04:46 Number of idle job procs: 1
02/14/19 15:04:46 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_upv_json (285433.0.0) {02/14/19 15:04:41}
02/14/19 15:04:46 Number of idle job procs: 0
02/14/19 15:04:46 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_upv_json (285433.0.0) {02/14/19 15:04:43}
02/14/19 15:04:46 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_upv_json (285433.0.0) {02/14/19 15:04:43}
02/14/19 15:04:46 Number of idle job procs: 0
02/14/19 15:04:46 Node test_20190214_narnaud_9_upv_json job proc (285433.0.0) completed successfully.
02/14/19 15:04:46 Node test_20190214_narnaud_9_upv_json job completed
02/14/19 15:04:46 DAG status: 2 (DAG_STATUS_NODE_FAILED)
02/14/19 15:04:46 Of 38 nodes total:
02/14/19 15:04:46  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 15:04:46   ===     ===      ===     ===     ===        ===      ===
02/14/19 15:04:46    27       0        6       0       0          4        1
02/14/19 15:04:46 0 job proc(s) currently held
02/14/19 15:07:01 Currently monitoring 1 HTCondor log file(s)
02/14/19 15:07:01 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_omicronscanfull2048 (285350.0.0) {02/14/19 15:06:59}
02/14/19 15:07:01 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_omicronscanfull512 (285351.0.0) {02/14/19 15:06:59}
02/14/19 15:07:01 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_scan_logfiles (285360.0.0) {02/14/19 15:06:59}
02/14/19 15:07:21 Currently monitoring 1 HTCondor log file(s)
02/14/19 15:07:21 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_bruco_std (285381.0.0) {02/14/19 15:07:19}
02/14/19 15:07:21 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_bruco_env (285383.0.0) {02/14/19 15:07:19}
02/14/19 15:12:02 Currently monitoring 1 HTCondor log file(s)
02/14/19 15:12:02 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_omicronscanfull2048 (285350.0.0) {02/14/19 15:11:59}
02/14/19 15:12:02 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_omicronscanfull512 (285351.0.0) {02/14/19 15:12:00}
02/14/19 15:12:22 Currently monitoring 1 HTCondor log file(s)
02/14/19 15:12:22 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_bruco_env (285383.0.0) {02/14/19 15:12:19}
02/14/19 15:16:28 Currently monitoring 1 HTCondor log file(s)
02/14/19 15:16:28 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_omicronscanfull512 (285351.0.0) {02/14/19 15:16:25}
02/14/19 15:16:28 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_omicronscanfull512 (285351.0.0) {02/14/19 15:16:25}
02/14/19 15:16:28 Number of idle job procs: 0
02/14/19 15:16:28 Node test_20190214_narnaud_9_omicronscanfull512 job proc (285351.0.0) completed successfully.
02/14/19 15:16:28 Node test_20190214_narnaud_9_omicronscanfull512 job completed
02/14/19 15:16:28 DAG status: 2 (DAG_STATUS_NODE_FAILED)
02/14/19 15:16:28 Of 38 nodes total:
02/14/19 15:16:28  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 15:16:28   ===     ===      ===     ===     ===        ===      ===
02/14/19 15:16:28    28       0        5       0       1          3        1
02/14/19 15:16:28 0 job proc(s) currently held
02/14/19 15:16:33 Submitting HTCondor Node test_20190214_narnaud_9_omicronscanfull512_json job(s)...
02/14/19 15:16:33 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 15:16:33 Masking the events recorded in the DAGMAN workflow log
02/14/19 15:16:33 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 15:16:33 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_omicronscanfull512_json -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_omicronscanfull512_json -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '2 -a FAILED_COUNT' '=' '1 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190214_narnaud_9_omicronscanfull512" omicronscanfull512_json.sub
02/14/19 15:16:34 From submit: Submitting job(s).
02/14/19 15:16:34 From submit: 1 job(s) submitted to cluster 285442.
02/14/19 15:16:34 	assigned HTCondor ID (285442.0.0)
02/14/19 15:16:34 Just submitted 1 job this cycle...
02/14/19 15:16:34 DAG status: 2 (DAG_STATUS_NODE_FAILED)
02/14/19 15:16:34 Of 38 nodes total:
02/14/19 15:16:34  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 15:16:34   ===     ===      ===     ===     ===        ===      ===
02/14/19 15:16:34    28       0        6       0       0          3        1
02/14/19 15:16:34 0 job proc(s) currently held
02/14/19 15:16:39 Currently monitoring 1 HTCondor log file(s)
02/14/19 15:16:39 Reassigning the id of job test_20190214_narnaud_9_omicronscanfull512_json from (285442.0.0) to (285442.0.0)
02/14/19 15:16:39 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_omicronscanfull512_json (285442.0.0) {02/14/19 15:16:34}
02/14/19 15:16:39 Number of idle job procs: 1
02/14/19 15:16:39 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_omicronscanfull512_json (285442.0.0) {02/14/19 15:16:34}
02/14/19 15:16:39 Number of idle job procs: 0
02/14/19 15:16:39 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_omicronscanfull512_json (285442.0.0) {02/14/19 15:16:34}
02/14/19 15:16:39 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_omicronscanfull512_json (285442.0.0) {02/14/19 15:16:34}
02/14/19 15:16:39 Number of idle job procs: 0
02/14/19 15:16:39 Node test_20190214_narnaud_9_omicronscanfull512_json job proc (285442.0.0) completed successfully.
02/14/19 15:16:39 Node test_20190214_narnaud_9_omicronscanfull512_json job completed
02/14/19 15:16:39 DAG status: 2 (DAG_STATUS_NODE_FAILED)
02/14/19 15:16:39 Of 38 nodes total:
02/14/19 15:16:39  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 15:16:39   ===     ===      ===     ===     ===        ===      ===
02/14/19 15:16:39    29       0        5       0       0          3        1
02/14/19 15:16:39 0 job proc(s) currently held
02/14/19 15:17:04 Currently monitoring 1 HTCondor log file(s)
02/14/19 15:17:04 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_omicronscanfull2048 (285350.0.0) {02/14/19 15:17:01}
02/14/19 15:22:04 Currently monitoring 1 HTCondor log file(s)
02/14/19 15:22:04 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_scan_logfiles (285360.0.0) {02/14/19 15:22:00}
02/14/19 15:22:04 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_omicronscanfull2048 (285350.0.0) {02/14/19 15:22:01}
02/14/19 15:22:25 Currently monitoring 1 HTCondor log file(s)
02/14/19 15:22:25 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_bruco_env (285383.0.0) {02/14/19 15:22:20}
02/14/19 15:27:02 Currently monitoring 1 HTCondor log file(s)
02/14/19 15:27:02 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_omicronscanfull2048 (285350.0.0) {02/14/19 15:27:01}
02/14/19 15:27:02 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_scan_logfiles (285360.0.0) {02/14/19 15:27:01}
02/14/19 15:27:22 Currently monitoring 1 HTCondor log file(s)
02/14/19 15:27:22 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_bruco_env (285383.0.0) {02/14/19 15:27:21}
02/14/19 15:30:23 Currently monitoring 1 HTCondor log file(s)
02/14/19 15:30:23 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_bruco_env (285383.0.0) {02/14/19 15:30:19}
02/14/19 15:30:23 Number of idle job procs: 0
02/14/19 15:30:23 Node test_20190214_narnaud_9_bruco_env job proc (285383.0.0) completed successfully.
02/14/19 15:30:23 Node test_20190214_narnaud_9_bruco_env job completed
02/14/19 15:30:23 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_bruco_env (285383.0.0) {02/14/19 15:30:21}
02/14/19 15:30:23 BAD EVENT: job (285383.0.0) executing, total end count != 0 (1)
02/14/19 15:30:23 Continuing with DAG in spite of bad event (BAD EVENT: job (285383.0.0) executing, total end count != 0 (1)) because of allow_events setting
02/14/19 15:30:23 DAG status: 2 (DAG_STATUS_NODE_FAILED)
02/14/19 15:30:23 Of 38 nodes total:
02/14/19 15:30:23  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 15:30:23   ===     ===      ===     ===     ===        ===      ===
02/14/19 15:30:23    30       0        4       0       0          3        1
02/14/19 15:30:23 0 job proc(s) currently held
02/14/19 15:30:33 Currently monitoring 1 HTCondor log file(s)
02/14/19 15:30:33 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_bruco_env (285383.0.0) {02/14/19 15:30:29}
02/14/19 15:31:08 Currently monitoring 1 HTCondor log file(s)
02/14/19 15:31:08 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_bruco_env (285383.0.0) {02/14/19 15:31:05}
02/14/19 15:31:08 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_bruco_env (285383.0.0) {02/14/19 15:31:05}
02/14/19 15:31:08 BAD EVENT: job (285383.0.0) ended, total end count != 1 (2)
02/14/19 15:31:08 Continuing with DAG in spite of bad event (BAD EVENT: job (285383.0.0) ended, total end count != 1 (2)) because of allow_events setting
02/14/19 15:32:03 Currently monitoring 1 HTCondor log file(s)
02/14/19 15:32:03 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_omicronscanfull2048 (285350.0.0) {02/14/19 15:32:01}
02/14/19 15:32:03 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_scan_logfiles (285360.0.0) {02/14/19 15:32:02}
02/14/19 15:34:04 Currently monitoring 1 HTCondor log file(s)
02/14/19 15:34:04 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_bruco_std-prev (285382.0.0) {02/14/19 15:34:02}
02/14/19 15:34:04 Number of idle job procs: 0
02/14/19 15:34:04 Node test_20190214_narnaud_9_bruco_std-prev job proc (285382.0.0) completed successfully.
02/14/19 15:34:04 Node test_20190214_narnaud_9_bruco_std-prev job completed
02/14/19 15:34:04 DAG status: 2 (DAG_STATUS_NODE_FAILED)
02/14/19 15:34:04 Of 38 nodes total:
02/14/19 15:34:04  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 15:34:04   ===     ===      ===     ===     ===        ===      ===
02/14/19 15:34:04    31       0        3       0       0          3        1
02/14/19 15:34:04 0 job proc(s) currently held
02/14/19 15:37:05 Currently monitoring 1 HTCondor log file(s)
02/14/19 15:37:05 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_omicronscanfull2048 (285350.0.0) {02/14/19 15:37:02}
02/14/19 15:42:06 Currently monitoring 1 HTCondor log file(s)
02/14/19 15:42:06 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_omicronscanfull2048 (285350.0.0) {02/14/19 15:42:02}
02/14/19 15:47:07 Currently monitoring 1 HTCondor log file(s)
02/14/19 15:47:07 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_omicronscanfull2048 (285350.0.0) {02/14/19 15:47:02}
02/14/19 15:48:57 Currently monitoring 1 HTCondor log file(s)
02/14/19 15:48:57 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_omicronscanfull2048 (285350.0.0) {02/14/19 15:48:55}
02/14/19 15:48:57 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_omicronscanfull2048 (285350.0.0) {02/14/19 15:48:55}
02/14/19 15:48:57 Number of idle job procs: 0
02/14/19 15:48:57 Node test_20190214_narnaud_9_omicronscanfull2048 job proc (285350.0.0) completed successfully.
02/14/19 15:48:57 Node test_20190214_narnaud_9_omicronscanfull2048 job completed
02/14/19 15:48:57 DAG status: 2 (DAG_STATUS_NODE_FAILED)
02/14/19 15:48:57 Of 38 nodes total:
02/14/19 15:48:57  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 15:48:57   ===     ===      ===     ===     ===        ===      ===
02/14/19 15:48:57    32       0        2       0       1          2        1
02/14/19 15:48:57 0 job proc(s) currently held
02/14/19 15:49:02 Submitting HTCondor Node test_20190214_narnaud_9_omicronscanfull2048_json job(s)...
02/14/19 15:49:02 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 15:49:02 Masking the events recorded in the DAGMAN workflow log
02/14/19 15:49:02 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 15:49:02 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_omicronscanfull2048_json -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_omicronscanfull2048_json -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '2 -a FAILED_COUNT' '=' '1 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190214_narnaud_9_omicronscanfull2048" omicronscanfull2048_json.sub
02/14/19 15:49:02 From submit: Submitting job(s).
02/14/19 15:49:02 From submit: 1 job(s) submitted to cluster 285452.
02/14/19 15:49:02 	assigned HTCondor ID (285452.0.0)
02/14/19 15:49:02 Just submitted 1 job this cycle...
02/14/19 15:49:02 DAG status: 2 (DAG_STATUS_NODE_FAILED)
02/14/19 15:49:02 Of 38 nodes total:
02/14/19 15:49:02  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 15:49:02   ===     ===      ===     ===     ===        ===      ===
02/14/19 15:49:02    32       0        3       0       0          2        1
02/14/19 15:49:02 0 job proc(s) currently held
02/14/19 15:49:07 Currently monitoring 1 HTCondor log file(s)
02/14/19 15:49:07 Reassigning the id of job test_20190214_narnaud_9_omicronscanfull2048_json from (285452.0.0) to (285452.0.0)
02/14/19 15:49:07 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_omicronscanfull2048_json (285452.0.0) {02/14/19 15:49:02}
02/14/19 15:49:07 Number of idle job procs: 1
02/14/19 15:49:07 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_omicronscanfull2048_json (285452.0.0) {02/14/19 15:49:03}
02/14/19 15:49:07 Number of idle job procs: 0
02/14/19 15:49:07 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_omicronscanfull2048_json (285452.0.0) {02/14/19 15:49:07}
02/14/19 15:49:07 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_omicronscanfull2048_json (285452.0.0) {02/14/19 15:49:07}
02/14/19 15:49:07 Number of idle job procs: 0
02/14/19 15:49:07 Node test_20190214_narnaud_9_omicronscanfull2048_json job proc (285452.0.0) completed successfully.
02/14/19 15:49:07 Node test_20190214_narnaud_9_omicronscanfull2048_json job completed
02/14/19 15:49:07 DAG status: 2 (DAG_STATUS_NODE_FAILED)
02/14/19 15:49:07 Of 38 nodes total:
02/14/19 15:49:07  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 15:49:07   ===     ===      ===     ===     ===        ===      ===
02/14/19 15:49:07    33       0        2       0       0          2        1
02/14/19 15:49:07 0 job proc(s) currently held
02/14/19 15:49:17 Currently monitoring 1 HTCondor log file(s)
02/14/19 15:49:17 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_scan_logfiles (285360.0.0) {02/14/19 15:49:12}
02/14/19 15:49:17 Number of idle job procs: 0
02/14/19 15:49:17 Node test_20190214_narnaud_9_scan_logfiles job proc (285360.0.0) completed successfully.
02/14/19 15:49:17 Node test_20190214_narnaud_9_scan_logfiles job completed
02/14/19 15:49:17 DAG status: 2 (DAG_STATUS_NODE_FAILED)
02/14/19 15:49:17 Of 38 nodes total:
02/14/19 15:49:17  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 15:49:17   ===     ===      ===     ===     ===        ===      ===
02/14/19 15:49:17    34       0        1       0       0          2        1
02/14/19 15:49:17 0 job proc(s) currently held
02/14/19 15:59:19 602 seconds since last log event
02/14/19 15:59:19 Pending DAG nodes:
02/14/19 15:59:19   Node test_20190214_narnaud_9_bruco_std, HTCondor ID 285381, status STATUS_SUBMITTED
02/14/19 16:09:21 1204 seconds since last log event
02/14/19 16:09:21 Pending DAG nodes:
02/14/19 16:09:21   Node test_20190214_narnaud_9_bruco_std, HTCondor ID 285381, status STATUS_SUBMITTED
02/14/19 16:19:22 1805 seconds since last log event
02/14/19 16:19:22 Pending DAG nodes:
02/14/19 16:19:22   Node test_20190214_narnaud_9_bruco_std, HTCondor ID 285381, status STATUS_SUBMITTED
02/14/19 16:29:23 2406 seconds since last log event
02/14/19 16:29:23 Pending DAG nodes:
02/14/19 16:29:23   Node test_20190214_narnaud_9_bruco_std, HTCondor ID 285381, status STATUS_SUBMITTED
02/14/19 16:39:23 3006 seconds since last log event
02/14/19 16:39:23 Pending DAG nodes:
02/14/19 16:39:23   Node test_20190214_narnaud_9_bruco_std, HTCondor ID 285381, status STATUS_SUBMITTED
02/14/19 16:49:24 3607 seconds since last log event
02/14/19 16:49:24 Pending DAG nodes:
02/14/19 16:49:24   Node test_20190214_narnaud_9_bruco_std, HTCondor ID 285381, status STATUS_SUBMITTED
02/14/19 16:49:29 Currently monitoring 1 HTCondor log file(s)
02/14/19 16:49:29 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_bruco_std (285381.0.0) {02/14/19 16:49:28}
02/14/19 16:49:29 Number of idle job procs: 0
02/14/19 16:49:29 Node test_20190214_narnaud_9_bruco_std job proc (285381.0.0) completed successfully.
02/14/19 16:49:29 Node test_20190214_narnaud_9_bruco_std job completed
02/14/19 16:49:29 DAG status: 2 (DAG_STATUS_NODE_FAILED)
02/14/19 16:49:29 Of 38 nodes total:
02/14/19 16:49:29  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 16:49:29   ===     ===      ===     ===     ===        ===      ===
02/14/19 16:49:29    35       0        0       0       1          1        1
02/14/19 16:49:29 0 job proc(s) currently held
02/14/19 16:49:34 Submitting HTCondor Node test_20190214_narnaud_9_bruco_json job(s)...
02/14/19 16:49:34 Adding a DAGMan workflow log /data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log
02/14/19 16:49:34 Masking the events recorded in the DAGMAN workflow log
02/14/19 16:49:34 Mask for workflow log is 0,1,2,4,5,7,9,10,11,12,13,16,17,24,27
02/14/19 16:49:34 submitting: /usr/bin/condor_submit -a dag_node_name' '=' 'test_20190214_narnaud_9_bruco_json -a +DAGManJobId' '=' '285314 -a DAGManJobId' '=' '285314 -batch-name dqr_test_20190214_narnaud_9.dag+285314 -a submit_event_notes' '=' 'DAG' 'Node:' 'test_20190214_narnaud_9_bruco_json -a dagman_log' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag/./dqr_test_20190214_narnaud_9.dag.nodes.log -a +DAGManNodesMask' '=' '"0,1,2,4,5,7,9,10,11,12,13,16,17,24,27" -a initialdir' '=' '/data/procdata/web/dqr/test_20190214_narnaud_9/dag -a DAG_STATUS' '=' '2 -a FAILED_COUNT' '=' '1 -a notification' '=' 'never -a +DAGParentNodeNames' '=' '"test_20190214_narnaud_9_bruco_std,test_20190214_narnaud_9_bruco_std-prev,test_20190214_narnaud_9_bruco_env,test_20190214_narnaud_9_bruco_env-prev" bruco_json.sub
02/14/19 16:49:34 From submit: Submitting job(s).
02/14/19 16:49:34 From submit: 1 job(s) submitted to cluster 285468.
02/14/19 16:49:34 	assigned HTCondor ID (285468.0.0)
02/14/19 16:49:34 Just submitted 1 job this cycle...
02/14/19 16:49:34 DAG status: 2 (DAG_STATUS_NODE_FAILED)
02/14/19 16:49:34 Of 38 nodes total:
02/14/19 16:49:34  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 16:49:34   ===     ===      ===     ===     ===        ===      ===
02/14/19 16:49:34    35       0        1       0       0          1        1
02/14/19 16:49:34 0 job proc(s) currently held
02/14/19 16:49:39 Currently monitoring 1 HTCondor log file(s)
02/14/19 16:49:39 Reassigning the id of job test_20190214_narnaud_9_bruco_json from (285468.0.0) to (285468.0.0)
02/14/19 16:49:39 Event: ULOG_SUBMIT for HTCondor Node test_20190214_narnaud_9_bruco_json (285468.0.0) {02/14/19 16:49:34}
02/14/19 16:49:39 Number of idle job procs: 1
02/14/19 16:49:39 Event: ULOG_EXECUTE for HTCondor Node test_20190214_narnaud_9_bruco_json (285468.0.0) {02/14/19 16:49:34}
02/14/19 16:49:39 Number of idle job procs: 0
02/14/19 16:49:39 Event: ULOG_IMAGE_SIZE for HTCondor Node test_20190214_narnaud_9_bruco_json (285468.0.0) {02/14/19 16:49:35}
02/14/19 16:49:39 Event: ULOG_JOB_TERMINATED for HTCondor Node test_20190214_narnaud_9_bruco_json (285468.0.0) {02/14/19 16:49:35}
02/14/19 16:49:39 Number of idle job procs: 0
02/14/19 16:49:39 Node test_20190214_narnaud_9_bruco_json job proc (285468.0.0) completed successfully.
02/14/19 16:49:39 Node test_20190214_narnaud_9_bruco_json job completed
02/14/19 16:49:39 DAG status: 2 (DAG_STATUS_NODE_FAILED)
02/14/19 16:49:39 Of 38 nodes total:
02/14/19 16:49:39  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 16:49:39   ===     ===      ===     ===     ===        ===      ===
02/14/19 16:49:39    36       0        0       0       0          1        1
02/14/19 16:49:39 0 job proc(s) currently held
02/14/19 16:49:39 ERROR: the following job(s) failed:
02/14/19 16:49:39 ---------------------- Job ----------------------
02/14/19 16:49:39       Node Name: test_20190214_narnaud_9_omicronplot_exe
02/14/19 16:49:39            Noop: false
02/14/19 16:49:39          NodeID: 19
02/14/19 16:49:39     Node Status: STATUS_ERROR    
02/14/19 16:49:39 Node return val: 2
02/14/19 16:49:39           Error: Job proc (285394.0.0) failed with status 2
02/14/19 16:49:39 Job Submit File: omicronplot_exe.sub
02/14/19 16:49:39  HTCondor Job ID: (285394.0.0)
02/14/19 16:49:39       Q_PARENTS: test_20190214_narnaud_9_omicronplot, <END>
02/14/19 16:49:39       Q_WAITING: <END>
02/14/19 16:49:39      Q_CHILDREN: test_20190214_narnaud_9_omicronplot_json, <END>
02/14/19 16:49:39 ---------------------------------------	<END>
02/14/19 16:49:39 Aborting DAG...
02/14/19 16:49:39 Writing Rescue DAG to dqr_test_20190214_narnaud_9.dag.rescue001...
02/14/19 16:49:39 Removing submitted jobs...
02/14/19 16:49:39 Removing any/all submitted HTCondor jobs...
02/14/19 16:49:39 Running: /usr/bin/condor_rm -const DAGManJobId' '=?=' '285314
02/14/19 16:49:39 Note: 0 total job deferrals because of -MaxJobs limit (0)
02/14/19 16:49:39 Note: 0 total job deferrals because of -MaxIdle limit (1000)
02/14/19 16:49:39 Note: 0 total job deferrals because of node category throttles
02/14/19 16:49:39 Note: 0 total PRE script deferrals because of -MaxPre limit (20) or DEFER
02/14/19 16:49:39 Note: 0 total POST script deferrals because of -MaxPost limit (20) or DEFER
02/14/19 16:49:39 DAG status: 2 (DAG_STATUS_NODE_FAILED)
02/14/19 16:49:39 Of 38 nodes total:
02/14/19 16:49:39  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
02/14/19 16:49:39   ===     ===      ===     ===     ===        ===      ===
02/14/19 16:49:39    36       0        0       0       0          1        1
02/14/19 16:49:39 0 job proc(s) currently held
02/14/19 16:49:39 Wrote metrics file dqr_test_20190214_narnaud_9.dag.metrics.
02/14/19 16:49:39 Metrics not sent because of PEGASUS_METRICS or CONDOR_DEVELOPERS setting.
02/14/19 16:49:39 **** condor_scheduniv_exec.285314.0 (condor_DAGMAN) pid 2220820 EXITING WITH STATUS 1