Re: [Condor-users] Do "slave" machines in a Condor pool need 80MB of sw?
- Date: Wed, 22 Feb 2006 09:47:14 -0600
- From: Matt England <mengland@xxxxxxxxxxxx>
- Subject: Re: [Condor-users] Do "slave" machines in a Condor pool need 80MB of sw?
At 2/22/2006 07:30 AM, David McBride wrote:
I cheat and run the binaries from an NFS volume instead.
Yes, that's how I may set things up for my internal machines. However, I'm
hoping to use Condor pools that span large geographical parts of the
Internet, certainly spanning many subnets.
Does this present any potential problem (so long as I can make sure all the
machines are not hidden behind firewalls)?
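(For what it's worth, Condor can be told to confine its daemons to a fixed
port range, which makes punching holes in intervening firewalls more
tractable than opening everything. A rough sketch of the relevant
condor_config settings follows; the 9600-9700 range is purely illustrative,
and you should check the manual's networking section for the current knob
names and the ports your collector/negotiator actually listen on.)

```
# Illustrative only: restrict Condor daemons to a fixed port range
# so firewalls between pool members need allow just this range plus
# the central manager's collector/negotiator ports.
LOWPORT = 9600
HIGHPORT = 9700
```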
At 2/21/2006 07:46 PM, Nick LeRoy wrote:
What do you define as a "slave" machine? An execute-only host? A submit-only
host? A submit + execute host?
I don't yet know. I admit I'm still quite ignorant about Condor at the moment.
Hmm, thinking about this a bit further. We could split it into multiple
packages like "condor-core" "condor-execute" "condor-execute_standard"
"condor-execute_vanilla" "condor-submit" "condor-submit_standard"
"condor-submit_vanilla" "condor-central_manager", "condor-compile"... As you
can see, this could rapidly become quite a maze of packages.
Yes, quite. I'm curious whether many of these sub-packages could be coalesced
into a smaller number, making things a bit simpler (e.g., my "master"
could be a combination of central_manager/submit*/etc., and the "slaves"
could be "execute/compile"). Alas, I'm still a green Condor newbie, so I
may not be making any sense.
My main question is answered: there's one big package. And that's
fine. Thanks for the info. :)