
Re: [HTCondor-users] setting up dedicated pool for parallel universe



Dear Greg and Ivo,

Dynamic partitioning seems to do the job. Thanks a lot for the help!

Best regards,

Kodanda




On Fri, 28 Dec 2018 at 22:12, Greg Thain <gthain@xxxxxxxxxxx> wrote:
On 12/28/18 10:21 AM, Kodanda Ram Mangipudi wrote:
> Dear Ivo,
>
> Thanks for writing. I started with simple condor and vanilla univ. I
> have used the attached job submission script and the shell script
> calling an MPI program vasp.
> The job runs but uses only one slot. What is the right way to run this?


I think you are on the right track. If you want your MPI job to run on
multiple cores all on the same machine, you want it to run in a single
slot that has multiple cores. To do this, you'll want to add a

Request_cpus = 12

to your submit file if you want 12 cores, and configure your startd to
allow multi-cpu slots. To do this with partitionable slots, add the
following to your worker node config:

NUM_SLOTS = 1
NUM_SLOTS_TYPE_1 = 1
SLOT_TYPE_1 = cpus=100%
SLOT_TYPE_1_PARTITIONABLE = true


And then your condor job will be provisioned with 12 cores.
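For completeness, a minimal submit description along these lines might look
as follows (the wrapper script name and file names are illustrative, not
from the original attachment):

```
# Minimal submit file for a multi-core MPI job in a single slot.
# run_vasp.sh is a placeholder for your shell script that launches
# the MPI program (e.g. via mpirun) with the granted core count.
universe                = vanilla
executable              = run_vasp.sh
request_cpus            = 12
output                  = vasp.out
error                   = vasp.err
log                     = vasp.log
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
queue
```

Inside the wrapper script, the number of provisioned cores is available to
the job (for example via the slot's Cpus attribute in the machine ad), so
the mpirun invocation can be kept consistent with request_cpus.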


-greg

_______________________________________________
HTCondor-users mailing list
To unsubscribe, send a message to htcondor-users-request@xxxxxxxxxxx with a
subject: Unsubscribe
You can also unsubscribe by visiting
https://lists.cs.wisc.edu/mailman/listinfo/htcondor-users

The archives can be found at:
https://lists.cs.wisc.edu/archive/htcondor-users/