[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Condor-users] Dedicated resources and MPICH 1.2.4 on Windows






Thanks for the response, Erik.  condor_status returns the machines that I've
configured with the DedicatedScheduler config entry.
On my dedicated execute machines I am using the example config from etc for
blended operation.  Everything works as expected with mpirun.
My dedicated execute nodes have the following attributes in
condor_config.local:

DedicatedScheduler = "DedicatedScheduler@xxxxxxxxxxxxxxx"
START       = True
SUSPEND     = False
CONTINUE    = True
PREEMPT     = False
KILL        = False
WANT_SUSPEND      = False
WANT_VACATE = False
RANK        = Scheduler =?= $(DedicatedScheduler)

My dedicated scheduler is configured as follows:

DedicatedScheduler = "DedicatedScheduler@xxxxxxxxxxxxxxx"
STARTD_EXPRS = $(STARTD_EXPRS), DedicatedScheduler


I am submitting a 'helloworld' MPI app with the following submit file:

universe = MPI
executable = hello.exe
log = log.txt
machine_count = 4
should_transfer_files = yes
when_to_transfer_output = on_exit
queue
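
(For reference, I queue this with condor_submit as usual; the submit-file
name below is just an example:

condor_submit hello.sub
condor_q

condor_q shows the job sitting idle while SchedLog reports the
"Found 0 potential dedicated resources" message.)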

Is there anything I need to do differently for condor_submit?

Thanks,
Bob Nordlund



From:     Erik Paulson <epaulson@xxxxxxxxxxx>
Sent by:  condor-users-bounces@cs.wisc.edu
Date:     11/02/2004 02:46 PM
To:       Condor-Users Mail List <condor-users@xxxxxxxxxxx>
Reply-To: Condor-Users Mail List
Subject:  Re: [Condor-users] Dedicated resources and MPICH 1.2.4 on Windows




On Tue, Nov 02, 2004 at 02:31:14PM -0500, Robert.Nordlund@xxxxxxxxxxxxxxxx
wrote:
>
>
>
>
> Hello all,
>
> I have a dedicated cluster of machines (START = True) and when I submit an
> MPI universe job I find
>
> Found 0 potential dedicated resources
>
> in SchedLog.  I have a dedicated scheduler per the manual and a Linux
> central manager.  Is there another setting that tells condor that the
> machines are dedicated besides the START macro?
>

You've got
DedicatedScheduler = "DedicatedScheduler@xxxxxxxxxxxxxx"
STARTD_EXPRS = $(STARTD_EXPRS), DedicatedScheduler

in your execute machines, right? And you did a condor_reconfig on the
execute machines (and it succeeded?) That's all you should have to
configure.
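
For example, assuming one of the execute hosts is named node01 (a
hypothetical name), the reconfig and a quick check that the startd
actually picked up the attribute would look like:

condor_reconfig -name node01
condor_config_val -name node01 -startd DedicatedScheduler

The second command should print the DedicatedScheduler string you set in
condor_config.local; if it errors out, the config never reached the startd.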

What does

condor_status -const 'DedicatedScheduler == "DedicatedScheduler@xxxxxxxxxxxxxx"'

return?

-Erik
_______________________________________________
Condor-users mailing list
Condor-users@xxxxxxxxxxx
http://lists.cs.wisc.edu/mailman/listinfo/condor-users




