
Re: [HTCondor-users] setting up dedicated pool for parallel universe



Dear Ivo,

Thanks for writing. I started with plain Condor and the vanilla universe. I have used the attached job submission script and a shell script that calls the MPI program VASP.
The job runs but uses only one slot. What is the right way to run this?
Thanks in advance.

Best regards,

Kodanda


File: Job.con
------------------------------------------------------------------------------
universe                 = vanilla
executable               = vasp_submit.sh
arguments                =
log                      = test.log
output                   = test.out
error                    = test.err
getenv                   = True
should_transfer_files    = YES
transfer_input_files     = INCAR, POSCAR, POTCAR, KPOINTS
transfer_executable      = True
when_to_transfer_output  = ON_EXIT_OR_EVICT
queue
---------------------------------------------------

File: vasp_submit.sh
----------------------------------
#!/bin/bash
ulimit -s unlimited
which vasp
which mpirun
mpirun -np 12 vasp
---------------------------
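
A likely cause (a sketch, not a confirmed fix): by default a vanilla-universe job is matched to a single-CPU slot, so the 12 mpirun ranks all land on one core. Assuming the execute node is configured with partitionable slots, adding a core request to Job.con before the queue statement should make the match 12 cores wide:

------------------------------------------------------------------------------
request_cpus = 12
------------------------------------------------------------------------------

The -np value in vasp_submit.sh should then be kept in sync with request_cpus.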




On Fri, 28 Dec 2018 at 21:23, Ivo <ivo.cavalcante@xxxxxxxxx> wrote:


Em sex, 28 de dez de 2018 12:52, Kodanda Ram Mangipudi <kodanda@xxxxxxxxx> escreveu:
Besides, as you have rightly guessed, what we want is to be able to run MPI and OpenMP jobs on multiple cores of a single machine. I could not work out the proper configuration and job submit script so far, and therefore ended up trying the dedicated scheduler. What is the best way to achieve our goal? We are not imagining MPI jobs across two machines; multiple cores on one machine is enough for us.

For what you want, Greg is right: vanilla universe is the way to go.


Ivo Cavalcante
_______________________________________________
HTCondor-users mailing list
To unsubscribe, send a message to htcondor-users-request@xxxxxxxxxxx with a
subject: Unsubscribe
You can also unsubscribe by visiting
https://lists.cs.wisc.edu/mailman/listinfo/htcondor-users

The archives can be found at:
https://lists.cs.wisc.edu/archive/htcondor-users/