
Re: [Condor-users] Condor in Parallel Applications



On 06/27/2011 03:45 PM, Sebastian Gomez wrote:
Hi All,

Currently, I am trying to run a program against a really large
database. My plan was to fragment the database into n small chunks
and then submit a job to condor to have n nodes, each of which will
process a chunk of the database. The problem is that I'm not entirely
sure how to do this. I have looked over the manual and it is not
clear to me how to configure the submit file so that each of the
instances of my program takes a different argument (specifying which
chunk of the db to process -- assuming the existence of a shared
filesystem). Alternatively, I imagine one could use condor's file
transfer mechanism to copy different chunks of the database to each
node and simply run the program on that chunk. Once again, not
entirely sure how to go about doing this.

Can anybody point me in the right direction?

Thanks,
Sebastian Angel

http://spinningmatt.wordpress.com/2011/07/04/getting-started-submitting-jobs-to-condor/

I would suggest giving your processes different arguments, perhaps using $(Process) to differentiate them if your fragments are named accordingly.
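For instance, a minimal submit description file might look like the sketch below. This is not your exact setup — it assumes the database has been split into files named chunk.0 through chunk.9, and that "process_chunk" is your worker program; both names are placeholders.

```
# Sketch of a submit file: one job per database fragment.
# Assumes fragments named chunk.0 ... chunk.9 and a worker
# program "process_chunk" (both hypothetical names).
universe   = vanilla
executable = process_chunk
arguments  = chunk.$(Process)
output     = out.$(Process)
error      = err.$(Process)
log        = job.log

# If there is no shared filesystem, Condor's file transfer
# mechanism can ship the right chunk to each node instead:
# should_transfer_files   = YES
# when_to_transfer_output = ON_EXIT
# transfer_input_files    = chunk.$(Process)

queue 10
```

The "queue 10" statement creates ten jobs in one cluster; $(Process) expands to 0 through 9, so each job receives a different chunk as its argument.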

Best,


matt