
[Condor-users] How to handle shared libraries with job submission



Hi!

I have a program that consists of an executable and 3 shared libraries.
What is the best way to handle the libraries? I am running many jobs,
if that matters.

Should I copy them to every node in the grid? That would be somewhat
tricky to administer whenever the libraries are updated (which they
will be, since the program is continuously under development), but it
might be the fastest option at execution time?

Could I have them on an NFS share?
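
For example, something like this might work, assuming all the execute
nodes mount the same share (the paths here are invented):

    # run straight off the NFS mount, no file transfer
    universe              = vanilla
    executable            = /nfs/apps/myprog/bin/myprog
    should_transfer_files = NO
    environment           = "LD_LIBRARY_PATH=/nfs/apps/myprog/lib"
    output                = myprog.$(Process).out
    error                 = myprog.$(Process).err
    log                   = myprog.log
    queue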

Can I just list them in "transfer_input_files" in the submit script?
The libraries are about 1.4 MB altogether, so with many jobs that
would add up to a lot of transferring.

I am also looking for a way to send a collection of small
jobs/executions to a node as one larger job, to improve the ratio of
execution time to scheduling overhead. Each single computation takes
<30 seconds, but there can be many of them. Is there a way to batch up
a collection of these executions and send the batch as a single job,
so the node spends more time computing than being scheduled to and
transferring files? Maybe just use a batch script as the executable?
How does that work? Do I need to list the real executable as an input
file? Can the executable be on an NFS mount along with the libraries?
A sketch of what I mean follows below.
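
Something like this wrapper is what I have in mind (all names here are
made up):

    #!/bin/sh
    # wrapper.sh - run many short computations inside one Condor job.
    # The transferred program may lose its execute bit, so restore it,
    # and point the loader at the libraries in the scratch directory.
    chmod +x ./myprog
    export LD_LIBRARY_PATH=.
    for input in "$@"; do
        ./myprog "$input" > "$input.out"
    done

submitted with a script along these lines:

    universe                = vanilla
    executable              = wrapper.sh
    arguments               = in001.dat in002.dat in003.dat
    transfer_input_files    = myprog,libone.so,libtwo.so,libthree.so,in001.dat,in002.dat,in003.dat
    should_transfer_files   = YES
    when_to_transfer_output = ON_EXIT
    log                     = batch.log
    queue

Would that work, or is there a more standard way to do it?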

- Atle