
Re: [Condor-users] fetchwork plus partitionable slots



On 04/14/2011 05:39 AM, Carsten Aulbert wrote:
Hi

and another thread for a newly encountered problem.

We have compute nodes with 4 cores and use partitionable slots on them. We
have fetchwork (with boinc) running on these; however, since only
slot1@host exists, we only get one instance of boinc running there. It does
work nicely if I declare the slots statically, but that would clash with
users' jobs.
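
For reference, the relevant part of the startd configuration looks roughly
like this (the hook keyword and path are just placeholders):

  # one partitionable slot covering the whole machine
  NUM_SLOTS_TYPE_1 = 1
  SLOT_TYPE_1 = cpus=100%, memory=100%, disk=100%
  SLOT_TYPE_1_PARTITIONABLE = True

  # fetch-work hook that asks boinc for work
  STARTD_JOB_HOOK_KEYWORD = BOINC
  BOINC_HOOK_FETCH_WORK = /usr/local/libexec/condor/boinc_fetch_work
  FetchWorkDelay = 45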

Is there a way to tell Condor, when starting a fetchwork job, that it should
NOT run under the "real" slot1, but rather under a dynamically created one, i.e.
slot1_1? The fetchwork script outputs request_cpu=1, request_memory=400 and
request_disk=100, but that is only sufficient to let Condor launch the job,
not to partition the slot.

Any thoughts how this can be accomplished?

Cheers

Carsten

Carsten,

Partitionable slots cannot currently be partitioned via fetchwork. IIRC, it would not be a technically challenging thing to add.

Also, if it were possible, you'd have to emit RequestCpus/RequestDisk/RequestMemory in the job ad instead of the submit-file style request_*.
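
That is, the hook would have to print a job ad along these lines (the
command, paths and numbers below are only illustrative, mirroring the values
you mentioned):

  JobUniverse = 5
  Cmd = "/usr/bin/boinc_client"
  Iwd = "/var/lib/boinc"
  Owner = "boinc"
  RequestCpus = 1
  RequestMemory = 400
  RequestDisk = 100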

Best,


matt