Re: [HTCondor-users] change to condor_submit - user feedback desired! (was Re: multiple condor_submit's - one cluster)



On Wed, 11 Feb 2015, Dimitri Maziuk wrote:

Sounds like you might have missed the point:

job myjob1 myjob.sub
vars myjob1 outfile="myjob1.out" stdout="myjob1.stdout"
job myjob2 myjob.sub
vars myjob2 outfile="myjob2.out" stdout="myjob2.stdout"

is the condor_submit_dag file format. When I need to run a batch and
give the output files sensible names -- which is what the original
complaint was IIRC -- I submit it as a DAG specifically because of that.
There may be downsides to it, but the basic functionality already
exists: you can specify command line arguments, filenames, whatever. It
just takes a second submit file and the _submit_dag rather than plain
_submit.
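
For concreteness, the shared myjob.sub then just picks up those VARS
values as ordinary submit macros -- roughly something like this (a
sketch only: the executable name and the idea that the job takes its
output file as an argument are guesses; the real point is that the
VARS names show up as $(outfile) and $(stdout)):

	executable = myjob
	arguments  = $(outfile)
	output     = $(stdout)
	log        = myjob.log
	queue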

Yeah, that makes sense. I was just saying that's not what I had in mind; I was thinking more along the lines of Todd's second proposal: queue based on each line of input (from stdin or a file) and populate $(input_line), rather than a plain "queue N" and $(Process).
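
(For contrast, the status quo is roughly

	executable = myprog
	arguments  = input$(Process).dat
	output     = job$(Process).stdout
	queue 10

where the only per-job handle is the integer $(Process), so every naming scheme has to be derivable from a counter; under the proposal each job would instead get the full text of its line in $(input_line). Sketch only -- myprog and the file names here are made up, and Todd's actual proposed syntax may of course end up looking different.)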

I get why you might want to do it in a DAG file format as you described; it just seemed like you could accomplish the same thing with

	queue for jobname,outfile,stdout in listfile.txt

and listfile.txt containing:

	myjob1 myjob1.out myjob1.stdout
	myjob2 myjob2.out myjob2.stdout
	...

which strikes me as easier both to generate and to parse.
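
To make the comparison concrete, the submit file side would then look much the same as in the DAG case, just with the queue statement pulling the values straight from the list -- something like:

	executable = myjob
	arguments  = $(outfile)
	output     = $(stdout)
	log        = $(jobname).log
	queue for jobname,outfile,stdout in listfile.txt

(again just a sketch: the executable and how the macros get used are placeholders, and the exact queue syntax is of course the thing being proposed, not something that exists today).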

And then the thread diverged into the interesting ideas like "I want a globbing syntax that would do the right thing for users who can't glob" and "condor_submit doesn't scale, that's bad unless it's a DAG in which case it's ok" etc...

Yeah  :(

(Sorry for the noise!)

Carl