
Re: [HTCondor-users] Most of the time in Condor jobs gets wasted in I/o



On 04/24/2013 03:01 PM, Dr. Harinder Singh Bawa wrote:

> All 20k files are on /rdata2 dir. When I submit 120 jobs on 120 nodes, Each
> job which is now getting 200 files take input from /rdata2 dir.(parallely).
> So each job needs approx 16TB/120= 500GB of input from /rdata2.

There's more to it (e.g. exactly how the jobs read their input), but in
general, if you're trying to read 120 x 500 GB over NFS in parallel,
expect it to be slow.
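Back of the envelope, using the poster's own numbers and an assumed single
gigabit link on the NFS server (~112 MB/s; your hardware may differ):

```python
# Rough estimate of the aggregate NFS traffic if every job reads remotely.
# The 1 Gb/s link speed is an assumption, not something from the thread.
jobs = 120
gb_per_job = 500                       # from the poster's estimate
total_gb = jobs * gb_per_job           # 60,000 GB, i.e. ~60 TB in aggregate
link_mb_s = 112                        # ~1 Gb/s in MB/s (assumed)
seconds = total_gb * 1024 / link_mb_s
print(f"{total_gb} GB total, ~{seconds / 86400:.1f} days if serialized through one link")
```

Even with perfect overlap you can't beat the server's link; that's why the
wall clock is dominated by I/O rather than CPU.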

Try Condor's file transfer, and also try manually copying the input files
to the worker hosts (e.g. to /var/tmp), and see which works best.
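For the first option, a minimal submit description sketch (the executable
and input file names here are hypothetical, not from the thread):

```
# Stage inputs via Condor's file transfer instead of reading over NFS
universe                = vanilla
executable              = analyze.sh
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
transfer_input_files    = /rdata2/chunk_$(Process).tar
output                  = job_$(Process).out
error                   = job_$(Process).err
log                     = jobs.log
queue 120
```

The transferred files land in the job's scratch directory on the worker,
so the job reads them from local disk rather than hammering /rdata2.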

> PS: BTW, I am not able to run the following command:
> "iostat -dx 10 300"
> 
> it says iostat command not found. Is this some OS specific? I am using
> linux .

It probably isn't installed. On Red Hat and derivatives it's in the
'sysstat' package.
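On a RHEL/CentOS-style box (assumes root or sudo):

```
yum install -y sysstat
iostat -dx 10 300    # extended device stats, every 10 s, 300 reports
```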

HTH
-- 
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu

Attachment: signature.asc
Description: OpenPGP digital signature