
Re: [Condor-users] parallel jobs on a single machine with double core



Hi Diana,

You can use the ParallelSchedulingGroup attribute to stop your job from being fragmented across different machines; e.g. putting the following in an execute machine's condor_config.local file should achieve this:

ParallelSchedulingGroup     = "$(HOSTNAME)"
DedicatedScheduler          = "DedicatedScheduler@xxxxxxxxxxxxxxxxxxxxxxxxxx"
STARTD_EXPRS                = $(STARTD_EXPRS), DedicatedScheduler, ParallelSchedulingGroup
RANK                        = Scheduler =?= $(DedicatedScheduler)
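
On the submit side, a submit description file along these lines should then ask the dedicated scheduler to keep all nodes of the parallel job inside a single scheduling group, which with the config above means a single host. This is only a sketch: the executable name and machine_count are placeholders, and the +WantParallelSchedulingGroups attribute may depend on your Condor version, so check it against the manual or the page below:

universe        = parallel
# placeholder name for your MPI binary or wrapper script
executable      = my_mpi_job
# number of processes; with the config above they must all fit on one machine
machine_count   = 2
output          = out.$(NODE)
error           = err.$(NODE)
log             = job.log
# ask the dedicated scheduler to place all nodes within one ParallelSchedulingGroup
+WantParallelSchedulingGroups = True
queue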


Fuller documentation and an example using MPICH2 for our Condor environment can be found at http://www.escience.cam.ac.uk/projects/camgrid/mpi.html.

Cheers,
Mark

Diana Lousa wrote:
Hello,
I would like to know if it is possible, when running a parallel job
using Condor's parallel universe, to specify that the job should run
on a single machine (with multiple processors). To be more specific, I
have a cluster of several multiprocessor machines and I want my job to be
split across the processors of a single machine.

Thanks in advance

Diana Lousa
PhD Student
Instituto de Tecnologia Química e Biológica (ITQB)
Universidade Nova de Lisboa
Avenida da República, EAN
Apartado 127
2780-901 Oeiras
Portugal