
[HTCondor-users] (no subject)

On Mon, May 2, 2016 at 1:22 PM, Michael V Pelletier
<Michael.V.Pelletier@xxxxxxxxxxxx> wrote:


> Since I only have about twenty minutes in my time slot, I'd be delighted if someone who has thought through this issue could offer a pithy, memorable, and succinct way to express this idea to a potentially skeptical audience. Or a link to one.

Jason Stowe recently gave a talk at the HPC User Forum in Tucson
about Maseratis and school buses:

There's a lot more in that half-hour video, but the general idea is in
line with what you said: "for an absurdly modest price, you can
harness the power of tens or hundreds of thousands of CPU cores for
only as long as you need it". When I talk to customers, I generally
focus on reducing time-to-results, which is really what they're after.
Throughput workloads can scale much further than HPC workloads: an MPI
job that runs in 4 hours on 256 cores will probably not run in 1 hour
on 1024 cores, and with tightly-coupled applications there's a point
where adding more cores becomes a mythical man-month proposition. And
if you realize halfway through that there's additional work you need
to do to get your final results, you can (in a cloud environment) just
throw more cores at it right away instead of waiting for the
previously-submitted jobs to finish.
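To put a rough number on the MPI scaling point, here's a
back-of-the-envelope sketch using Amdahl's law. The 2% serial
fraction is a hypothetical figure chosen for illustration, not a
measurement from any real application:

```python
def runtime_hours(base_hours, base_cores, cores, serial_fraction):
    """Predicted runtime on `cores` given a measured runtime on
    `base_cores`, assuming Amdahl's law with the given serial fraction.
    """
    # Per-unit-of-work time scales as: serial part + parallel part / cores
    base = serial_fraction + (1 - serial_fraction) / base_cores
    scaled = serial_fraction + (1 - serial_fraction) / cores
    return base_hours * scaled / base

# A tightly-coupled MPI job (hypothetical 2% serial fraction):
# quadrupling 256 -> 1024 cores barely helps.
mpi = runtime_hours(4.0, 256, 1024, serial_fraction=0.02)

# A pure throughput workload (no serial fraction) scales linearly:
# 4 hours on 256 cores becomes 1 hour on 1024.
throughput = runtime_hours(4.0, 256, 1024, serial_fraction=0.0)

print(f"MPI job:        {mpi:.1f} hours")        # ~3.5 hours
print(f"Throughput job: {throughput:.1f} hours")  # 1.0 hours
```

Even a small serial/communication fraction caps the benefit of extra
cores for the coupled job, while the throughput job gets the full 4x.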

If you're looking for real-world examples: we have customers running
throughput workloads in a variety of fields: manufacturing, finance,
life sciences, image processing. Jason's talk above goes into some
detail on those.

I'm not sure if any of the above is sufficiently pithy, but I hope it helps.


Ben Cotton

Cycle Computing
Better Answers. Faster.

twitter: @cyclecomputing