
[HTCondor-users] Bosco question regarding multicore jobs


I'm trying to submit a job via Bosco with

request_cpus = N

in the submit file. For some reason, the job on the remote submit host (a Condor batch system) always ends up with RequestCpus = 1, ignoring my request.
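For context, this is the kind of submit file I mean (the hostname, user, and executable here are just placeholders, not my real setup):

```
# Minimal Bosco submit file sketch (grid universe, illustrative values)
universe       = grid
grid_resource  = batch condor user@remote-cluster.example.org
executable     = job.sh
request_cpus   = 8
request_memory = 4000
output         = job.out
error          = job.err
log            = job.log
queue
```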

I see a patch regarding multicore support here, but I don't know what is in charge of sending that 'mpinodes' parameter to BLAH. Is that the remote gahp / batch_gahp?


If I use request_memory, that works fine. Digging into the parameters passed to bosco/condor/glite/bin/condor_submit.sh, I can see that "-n" is never passed to it [1].
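For illustration, I would have expected the wrapper to pick up a core-count flag alongside "-m", roughly like this (purely a sketch of the getopts pattern; the "-n" letter and variable names are my guesses from the patch, not the actual blahp code):

```shell
#!/bin/sh
# Hypothetical sketch: how a "-n <cores>" option could be parsed in a
# glite condor_submit.sh-style wrapper. Here we simulate being invoked
# with "-n 8 -m 4000".
set -- -n 8 -m 4000

mpinodes=1   # default: single-core job
memory=""

while getopts "n:m:" opt; do
  case "$opt" in
    n) mpinodes="$OPTARG" ;;   # number of cores requested
    m) memory="$OPTARG" ;;     # memory in MB ("-m" is passed today)
  esac
done

echo "mpinodes=$mpinodes memory=$memory"
# prints: mpinodes=8 memory=4000
```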

What do I need to do to properly submit multicore jobs?
I'm using Condor 8.6.9 on the Bosco side.

[1]
-c /home/khurtado/.condor/bosco/sandbox/58fb/58fbd746/apf-test.virtualclusters.org_11000_apf-test.virtualclusters.org#55038.0#1525460834/condor_exec.exe -T /tmp -O /tmp/OutputFileList_2924_1525460844647080 -i /dev/null -o _condor_stdout -e _condor_stderr -w /home/khurtado/.condor/bosco/sandbox/58fb/58fbd746/apf-test.virtualclusters.org_11000_apf-test.virtualclusters.org#55038.0#1525460834 -D home_bl_apf-test.virtualclusters.org_11000_apf-test.virtualclusters.org#55038.0#1525460834 -m 4000 -V "FACTORYUSER=autopyfactory"