Re: [Condor-users] vanilla universe jobs and condor_shadow
- Date: Thu, 3 Mar 2005 13:40:51 -0800
- From: Brian Gyss <brian.gyss@xxxxxxxxx>
- Subject: Re: [Condor-users] vanilla universe jobs and condor_shadow
If all the shadow process does in the vanilla universe is signal
completion, is there an alternative way to accomplish this without
spawning several large shadow processes on the submitting machine?
On Thu, 3 Mar 2005 10:03:15 +0000, Matt Hope <matthew.hope@xxxxxxxxx> wrote:
> On Wed, 2 Mar 2005 20:32:56 -0800, Brian Gyss <brian.gyss@xxxxxxxxx> wrote:
> > Quick question regarding vanilla universe jobs and condor_shadow.
> > When a vanilla universe job runs, is the shadow process on the
> > submitting machine the one that's performing all the I/O for the
> > running job?
> > For example, we have an application running on our cluster that's
> > producing 12-megabyte images. Are vanilla universe jobs sending all
> > the image data over the network for the submitting machine to write
> > out? Also, is the submitting machine reading in the image data that
> > the job running on the cluster node requests? If so, is there any way
> > to override this behavior so that all I/O is restricted to the cluster
> > node?
> In the vanilla universe, the only I/O back to the submitting node
> happens on job completion or an inferred checkpoint.
> At this point the contents of the working directory are copied back...
> You could of course be using streaming I/O, which would stream stdin
> and stdout on the fly, but not (I believe) any other files.