Re: [Condor-users] Question about $(Process)
- Date: Mon, 4 Jul 2005 17:43:39 +0100
- From: Matt Hope <matthew.hope@xxxxxxxxx>
- Subject: Re: [Condor-users] Question about $(Process)
On 7/4/05, Miguel Dilaj <mdilaj@xxxxxxxxxxxxx> wrote:
> The only reason I've RIGHT NOW for wanting to do series of tables instead of
> the whole run is a bit of a stupid one: the lack of a disk big enough (in
> the Condor server) to host 250 tables... I've plenty of space in the final
> box, but the server is using an external USB drive at the moment.
> Another possibility would be to briefly HOLD the jobs running, disconnect
> the USB drive, move the files to the final box, reconnect it, and remove the
> hold status. If this is not going to break running jobs it could be a simple
> solution for me.
Placing a running job on hold will cause it to lose all state and
restart from the beginning (assuming a vanilla-universe job without
checkpointing). Placing an idle job on hold does no harm at all.
Solution: submit all jobs (queue as many as you need in total), but
submit them in the held state.
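A minimal sketch of such a submit description follows; the executable,
file names, and table count are placeholders for your actual setup:

```
universe    = vanilla
executable  = make_table
arguments   = $(Process)
output      = table_$(Process).out
error       = table_$(Process).err
log         = tables.log

# Submit every job in the held state; nothing runs until released.
hold        = true

queue 250
```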
1) Release as many jobs as will fit on the disk (a -constraint passed
to condor_release works well here and is eminently scriptable).
2) Wait for them to finish.
3) Sneakernet the files to their final location, then put the disk back.
4) If not all jobs are finished, go to step 1.
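The loop above can be roughly scripted; this is an untested sketch,
and the cluster id, batch size, and polling interval are assumptions
to adjust for your own pool:

```shell
#!/bin/sh
# Sketch of the release-in-batches loop: assumes the 250 held jobs were
# submitted as cluster 123, and roughly 50 tables fit on the USB drive.
CLUSTER=123
TOTAL=250
BATCH=50

start=0
while [ "$start" -lt "$TOTAL" ]; do
    end=$((start + BATCH - 1))
    # 1) Release the next slice of held jobs via a -constraint on ProcId.
    condor_release -constraint \
        "ClusterId == $CLUSTER && ProcId >= $start && ProcId <= $end"
    # 2) Poll until no jobs from this cluster remain in the queue output.
    while [ -n "$(condor_q $CLUSTER -format '%d\n' ProcId)" ]; do
        sleep 60
    done
    # 3) Sneakernet step: move the finished tables off the drive by hand,
    #    then let the loop release the next batch.
    echo "Batch $start-$end done; move the tables off the disk, then press Enter."
    read dummy
    start=$((start + BATCH))
done
```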
This will be suboptimal in that you will not maximize the throughput
of your farm, since you wait for it to fully empty on each pass. Two
disks, with a per-job switch indicating which one to use, would remove
that limitation at the cost of some additional complexity in
submission.