
[Condor-users] Iterative Computations [was: Does stork avoid retransferring data?]



Gabriel -

Makeflow does not currently do that; it is just a static DAG.

However, this is an important workflow pattern that we have been thinking about,
and we could work with you to get something going in a way that is
Condor-compatible.
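
One common workaround, sketched here purely as an assumption and not as an existing Makeflow feature, is an outer driver script that re-runs a static DAG once per iteration until a stopping criterion is met. The names `run_iteration` and `iterate_until_converged` are hypothetical; the real iteration step would generate a Makeflow file and invoke `makeflow` on it, which is simulated below so the sketch is self-contained:

```python
# Hypothetical driver loop: Makeflow has no loop construct, so an outer
# script repeats a static DAG until convergence.

def run_iteration(state):
    # In a real driver this would write a .makeflow file for this
    # iteration, call subprocess.run(["makeflow", ...]), and read the
    # outputs back.  Here the step is simulated: each "iteration"
    # halves the state, so it converges toward zero.
    return state / 2.0

def iterate_until_converged(state, tolerance):
    while True:
        new_state = run_iteration(state)
        # Stopping criterion: change between iterations is small enough.
        if abs(new_state - state) < tolerance:
            return new_state
        state = new_state

result = iterate_until_converged(16.0, 0.01)
```

The point is only the control shape: DAG execution stays inside each iteration, and the convergence test lives in ordinary script logic outside it.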

Can you share some more detailed use cases?
Is there anyone else on the list interested in such a capability?

Doug


On Thu, Mar 1, 2012 at 11:33 AM, Gabriel Mateescu
<gabriel.mateescu@xxxxxxxxx> wrote:
> Hi Doug,
>
> Is it possible to express with Makeflow
> iterative computations that repeat a set
> of steps until some stopping criterion
> is met?
>
> Thanks.
> Gabriel
>
>
>
>
> On Tue, Jan 11, 2011 at 5:55 PM, Douglas Thain <dthain@xxxxxx> wrote:
>> Thomas -
>>
>> You might consider using Makeflow for this task:
>> http://www.nd.edu/~ccl/software/makeflow
>>
>> The idea is that you express your tasks in Makeflow, submit a bunch of
>> 'worker' processes to Condor, and Makeflow will distribute tasks among
>> the workers.  If they have some common executables and input files,
>> they will be automatically cached at the workers, so you don't have to
>> keep transmitting them.
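
As a rough illustration of that pattern (file names, port, and worker count are made-up placeholders, not a recommendation), a Makeflow rule file for many seeded runs, plus the worker submission, might look like:

```
# sim.makeflow -- one rule per seed; sim.exe and input.dat are shared
# inputs, so Work Queue workers can cache them between tasks.
out.1: sim.exe input.dat
	./sim.exe --seed 1 input.dat > out.1

out.2: sim.exe input.dat
	./sim.exe --seed 2 input.dat > out.2

# ...one rule per remaining seed...

# Then, from the shell: run the DAG in Work Queue mode and submit
# workers to Condor (hostname/port/count are assumptions):
#   makeflow -T wq sim.makeflow
#   condor_submit_workers master.example.edu 9123 100
```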
>>
>> Cheers,
>> Doug
>>
>> On Tue, Jan 11, 2011 at 8:27 AM, Rowe, Thomas <rowet@xxxxxxxxxx> wrote:
>>> I have to run a simulation about a thousand times with different seeds.  The
>>> simulation executable and data total about 100MB.  This sounds like a job
>>> for DAGMan & Stork, because this 100MB collection of files needs to get
>>> copied around reliably, and some large output files need to be transferred
>>> back to the originating machine reliably.
>>>
>>>
>>>
>>> My question: Do Stork and/or DAGMan do anything intelligent about avoiding
>>> recopying files?  The input files are identical for all thousand runs; only
>>> the seed varies.  But I would like to have Condor manage each run
>>> individually.  So do all the data and the executable get copied around a
>>> thousand times and cleaned up after each run?  If the thousand reps are
>>> children of the Stork job that transfers files in place, does everything
>>> just work with no extraneous recopying of input data?
>>>
>>>
>>>
>>> Thanks,
>>>
>>> Thomas Rowe
>>>
>>> _______________________________________________
>>> Condor-users mailing list
>>> To unsubscribe, send a message to condor-users-request@xxxxxxxxxxx with a
>>> subject: Unsubscribe
>>> You can also unsubscribe by visiting
>>> https://lists.cs.wisc.edu/mailman/listinfo/condor-users
>>>
>>> The archives can be found at:
>>> https://lists.cs.wisc.edu/archive/condor-users/
>>>
>>>