
Re: [HTCondor-users] Macro substitutions in submit file - filenames?



Thanks Kent and Ben for the explanations and info.

The reason I'm looking at this is some testing I'm doing to make it
easier for our users to make use of our multiple submit VMs.

Please excuse the blabbing to follow. :)

Instead of them having to remote desktop into each VM and submit jobs
from each one, I'm testing ways for them to submit directly from their
own desktops. (We need multiple submit nodes on Windows because of the
limited number of concurrently running jobs Windows can handle: our
Win2008 Server VMs can each handle 2000 running jobs at once, and as
our pool/s have over 10,000 slots, multiple submit nodes are needed to
fully utilize all the resources.)

I've tried using the grid universe with the submit VMs as the grid resource,
using the RANDOM_CHOICE macro so that the jobs get spread across them all.
This works well, and the user still sees all their jobs just by running
condor_q on their own desktop. However, it is extremely slow at transferring
the jobs and is never able to utilize all the resources.
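For reference, the relevant submit-file lines I'm testing look roughly
like this (the hostnames here are just placeholders for our real VMs):

  universe      = grid
  grid_resource = condor $RANDOM_CHOICE(submit1.example.com, submit2.example.com, submit3.example.com) central-manager.example.com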

Of course they could use condor_submit -remote, but then they are still
stuck submitting to the different VMs manually and checking each VM's
queue manually.
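For example, submitting to and checking one VM from a desktop looks
something like this (again, the hostname is a placeholder):

  condor_submit -remote submit1.example.com jobs.sub
  condor_q -name submit1.example.com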

At the moment I'm fiddling with a Visual Studio (VB) console app, called
something like condor_submit_vms, that would take their single submit
file and spread the load/jobs across all the VMs. To handle submit files
that use $(Process), I need to be able to use the $$([$(Process)+5]) type
of macro. The app will find, say, a "queue 15000" statement and generate
new submit files for each VM, dividing the 15000 jobs between them
(e.g. with 5 submit VMs, "queue 3000" in each new submit file), and
replacing any $(Process) with $$([$(Process)+N]), where in this example
N is 0, 3000, 6000, 9000 and 12000 for the different submit files.
The app will then condor_submit -remote each of these new submit files
to its VM. It will also need to monitor the jobs, retrieve their data
with condor_transfer_data as they reach status C, and then delete them.
All of this will happen transparently behind the scenes as far as the
user is concerned. A similar app for viewing the queue of all jobs on
all VMs would be needed, e.g. condor_q_vms.
I may also look into renaming the output files after they have been retrieved.
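Just to illustrate the splitting and rewriting logic the app would do
(the real app is VB; this is only a rough sketch, and the function names
are made up):

```python
def split_jobs(total, n_vms):
    """Divide `total` jobs across n_vms VMs.

    Returns a list of (count, offset) pairs, one per VM, e.g.
    "queue 15000" across 5 VMs gives 5 x (3000, offset) with
    offsets 0, 3000, 6000, 9000, 12000.
    """
    base, extra = divmod(total, n_vms)
    pairs, offset = [], 0
    for i in range(n_vms):
        count = base + (1 if i < extra else 0)  # spread any remainder
        pairs.append((count, offset))
        offset += count
    return pairs

def rewrite_process(line, offset):
    """Rewrite $(Process) as $$([$(Process)+N]) for a VM's offset N."""
    if offset == 0:
        return line  # first VM keeps plain $(Process)
    return line.replace("$(Process)", "$$([$(Process)+%d])" % offset)
```

Each generated submit file then gets its own "queue count" statement and
the rewritten lines, and is handed to condor_submit -remote for its VM.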

If anyone has a suggestion or knows a way of doing what I want to
achieve, please stop me from re-inventing the wheel!

Thanks

Cheers

Greg

-----Original Message-----
From: HTCondor-users [mailto:htcondor-users-bounces@xxxxxxxxxxx] On Behalf Of Ben Cotton
Sent: Wednesday, 17 September 2014 11:42 PM
To: HTCondor-Users Mail List
Subject: Re: [HTCondor-users] Macro substitutions in submit file - filenames?

On Wed, Sep 17, 2014 at 10:56 AM, R. Kent Wenger <wenger@xxxxxxxxxxx> wrote:
> If you don't need the +5, this will work, of course:
>
>   log = cpubound_$(Cluster)_$(Process).log

If you do need the +5, you could add the following to your submit file:

  noop_job = True
  queue 5
  noop_job = False
  queue N

If $(Process) is used somewhere in your submit file apart from the
log, you should (based on my understanding of Kent's explanation) be
able to do something like:

  Arguments = $$([$(Process)-5])

(My trivial test suggests this does work)


Thanks,
BC

-- 
Ben Cotton
main: 888.292.5320

Cycle Computing
Leader in Utility HPC Software

http://www.cyclecomputing.com
twitter: @cyclecomputing
_______________________________________________
HTCondor-users mailing list
To unsubscribe, send a message to htcondor-users-request@xxxxxxxxxxx with a
subject: Unsubscribe
You can also unsubscribe by visiting
https://lists.cs.wisc.edu/mailman/listinfo/htcondor-users

The archives can be found at:
https://lists.cs.wisc.edu/archive/htcondor-users/