
Re: [HTCondor-users] Sharing data across nodes



Something like GlusterFS maybe? (with NFS-Ganesha?)
https://www.gluster.org/
Never tried it myself though.
Not sure if you would want to replace /home with this new storage.
Maybe have condor use it as the default spool directory?
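
For example, if the Gluster volume were mounted at /mnt/gluster on every
node, something along these lines in a config.d snippet might do it
(untested, and the mount point and per-host subdirectory are only
placeholders):

  # /etc/condor/config.d/99-shared-spool.conf -- rough sketch, not tested
  # Put the spool on the shared mount, with one subdirectory per host so
  # the daemons on different machines don't stomp on each other's files.
  SPOOL = /mnt/gluster/condor/spool/$(HOSTNAME)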

As Dima said, you'd still need a low-latency fabric between the nodes if you want good storage I/O.

Martin

-----Original Message-----
From: HTCondor-users <htcondor-users-bounces@xxxxxxxxxxx> On Behalf Of dmitri maziuk
Sent: May 17, 2022 4:03 PM
To: htcondor-users@xxxxxxxxxxx
Subject: Re: [HTCondor-users] Sharing data across nodes

On 2022-05-17 2:00 PM, Michael Thomas wrote:
> Hi Krishna,
> 
> A distributed filesystem such as ceph (www.ceph.io) or hdfs
> (https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html)
> may provide what you need.
> 

I played with ceph a long time ago when it was still young. At the time there was no option to "pin" a client to a data node, and no plan to expose one to admins (IIRC there was something you could do at compile time).

Which means you could set up local ceph storage on a condor node, but couldn't make that condor node do all its i/o on its local ceph storage.
Ceph would spread the i/o over all its storage nodes according to its clever placement algorithm (CRUSH); that would still be better than all condor hosts hitting a single storage server (the NFS server we wanted to replace), but to get any i/o performance you'd need a 10Gb fabric.

That may have changed since.
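
FWIW you can see that spreading directly with ceph's "osd map" command,
which reports the placement group and OSDs an object maps to. The pool and
object names here are made up, just for illustration:

  # pool "condor" and object "job.0.out" are hypothetical names
  ceph osd map condor job.0.out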

Dima
_______________________________________________
HTCondor-users mailing list
To unsubscribe, send a message to htcondor-users-request@xxxxxxxxxxx with a
subject: Unsubscribe
You can also unsubscribe by visiting
https://lists.cs.wisc.edu/mailman/listinfo/htcondor-users

The archives can be found at:
https://lists.cs.wisc.edu/archive/htcondor-users/