Re: [HTCondor-users] howto set ulimit -l unlimited for openmpiscript or other condor jobs on debian 9 for htcondor 8.8?
- Date: Fri, 29 Mar 2019 21:53:18 +0000
- From: Tim Theisen <tim@xxxxxxxxxxx>
- Subject: Re: [HTCondor-users] howto set ulimit -l unlimited for openmpiscript or other condor jobs on debian 9 for htcondor 8.8?
Sorry, that was my fault. I used 'infinity' rather than 'unlimited' in
my test VM. I couldn't cut and paste easily, so I just typed it wrong.
On 3/29/19 4:14 PM, Harald van Pee wrote:
> Hello Tim,
> thank you for your good explanation; in principle this works:
> condor.service - Condor Distributed High-Throughput-Computing
> Loaded: loaded (/lib/systemd/system/condor.service; enabled; vendor preset:
> Drop-In: /etc/systemd/system/condor.service.d
> results still in
> max locked memory (kbytes, -l) 64
> therefore I tried numbers and found out
> results in
> max locked memory (kbytes, -l) 32
> results in
> max locked memory (kbytes, -l) 97656
> results in
> max locked memory (kbytes, -l) 9765624
> obviously something goes wrong, but at least it seems high numbers can be used
> as a workaround.
> Now the question is: is the problem in htcondor, systemd, or the debian
> kernel? Or am I still doing something wrong?
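One possible explanation for the numbers above (an assumption, not stated in the thread): systemd's LimitMEMLOCK= setting takes a plain number in bytes, while ulimit -l reports the limit in kilobytes, so a configured value surfaces as roughly 1/1024 of the figure that was typed in:

```python
# Hypothetical value, in bytes, as systemd's LimitMEMLOCK= would read it;
# ulimit -l reports the same limit in KiB (i.e. bytes // 1024).
limit_bytes = 100_000_000       # e.g. a hypothetical LimitMEMLOCK=100000000
limit_kib = limit_bytes // 1024
print(limit_kib)                # -> 97656, matching one of the outputs above
```

This would also fit the observation that only large configured numbers yield a usable limit: the value shrinks by a factor of 1024 on the way to the job.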
> On Friday, March 29, 2019 3:49:21 PM CET Tim Theisen wrote:
>> Hello Harald,
>> The /etc/init.d/condor file is not used. Distributing it was a mistake
>> on our part.
>> So, we should modify the systemd configuration.
>> When updating the condor service, one should not modify the system-installed
>> service file. Instead, create a file with overrides to the
>> distributed configuration. Here are the steps:
>> 1. mkdir /etc/systemd/system/condor.service.d
>> 2. Put the following 2 lines in a file in that directory
>> 3. Force a systemd reload: systemctl daemon-reload
>> 4. Finally, restart HTCondor: systemctl restart condor
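The 2 lines themselves are not shown here; a drop-in matching these steps would likely be along the following lines, given Tim's correction above that the keyword is 'infinity' rather than 'unlimited' (the filename, e.g. /etc/systemd/system/condor.service.d/override.conf, is an assumption — any *.conf file in that directory works):

```
[Service]
LimitMEMLOCK=infinity
```

After the daemon-reload, `systemctl show condor -p LimitMEMLOCK` should report the value systemd actually parsed for the service.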
>> On 3/28/19 3:56 PM, Harald van Pee wrote:
>>> Dear htcondor experts,
>>> we are running htcondor 8.8.1 on debian 9 with vanilla universe jobs without
>>> problems and now want to start openmpi jobs in the parallel universe, but
>>> to do so
>>> we need to set max locked memory to a high value.
>>> I have set
>>> ulimit -l unlimited
>>> in the openmpiscript itself.
>>> And added to
>>> then I have done a
>>> systemctl daemon-reload
>>> systemctl restart condor
>>> on all condor hosts.
>>> But after starting openmpiscript
>>> ulimit -l
>>> inside of the script shows
>>> max locked memory (kbytes, -l) 64
>>> And as expected the job does not run properly.
>>> What have I done wrong? How do I have to set the
>>> max locked memory
>>> limit for condor jobs/scripts?
>>> Best regards
>> Tim Theisen
>> Release Manager
>> HTCondor & Open Science Grid
>> Center for High Throughput Computing
>> Department of Computer Sciences
>> University of Wisconsin - Madison
>> 4261 Computer Sciences and Statistics
>> 1210 W Dayton St
>> Madison, WI 53706-1685
>> +1 608 265 5736