
Re: [Condor-users] condor_submit feature request



Steffen,

> Of course, this could be worked around using a "dummy" executable
> for the first n jobs of a cluster:
> 	Executable = /bin/false
> 	Queue n
> then defining the real one
> 	Executable = $the_real_one
> 	Queue m
> but it imposes unnecessary load on the scheduler, and those very
> short-running jobs have in the past had a tendency to confuse their
> shadow processes.

This won't work, because as soon as you change the executable name you
start a new cluster, with process numbers beginning again at 0.

> If there was a "StartAt" parameter for the submit file (which would
> default to 0 for the first Queue statement, and be counted up
> subsequently), that would make things a lot easier (also to repeat a
> selected set of failed jobs).
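In the meantime, one workaround for the "repeat a selected set of failed
jobs" case is to generate a one-job submit file per failed process number
and submit each one. This is only a sketch: the executable path, the list
of failed indices, and the convention that the job takes its index as an
argument are all placeholders for your actual setup.

```shell
#!/bin/sh
# Sketch: rerun a selected set of failed jobs as one-job clusters.
# EXE, the index list, and the Arguments convention are placeholders.
EXE=/path/to/the_real_one
for i in 17 42 389; do
    cat > "retry_$i.sub" <<EOF
Executable = $EXE
Arguments  = $i
Queue 1
EOF
done
# then: for f in retry_*.sub; do condor_submit "$f"; done
```

The new jobs get fresh cluster/process numbers, of course, so the original
index has to be carried in the arguments rather than in $(Process).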

This doesn't answer your question, but for the case where I'm breaking
things up into clusters of 1000 jobs, I do the following:

Subset		= 000

# XXX000 - XXX009
Arguments	= ... $(Subset)00$(Process) ...
Queue 10

# XXX010 - XXX099
Arguments	= ... $(Subset)0$(Process) ...
Queue 90

# XXX100 - XXX999
Arguments	= ... $(Subset)$(Process) ...
Queue 900


And then I change "Subset" between each submission of 1000 jobs.
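Depending on your Condor version, the three-way split may be avoidable
altogether: newer HTCondor releases have an $INT() submit macro that
applies printf-style formatting, so $(Process) can be zero-padded
directly (this assumes $INT() is available in your installation; older
Condor releases do not have it):

```
Subset		= 000

# XXX000 - XXX999, zero-padded in one Queue statement
Arguments	= ... $(Subset)$INT(Process, %03d) ...
Queue 1000
```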

-- 
Daniel K. Forrest	Laboratory for Molecular and
forrest@xxxxxxxxxxxxx	Computational Genomics
(608) 262-9479		University of Wisconsin, Madison