
Re: [HTCondor-users] [External] Re: Fractional GPU



In a cluster used for the early development of the AN/SPY-6(V) radar system, the simulation software implemented its own internal lock-and-queue mechanism to share the GPU cards on the system, since this was before HTCondor could provide GPU resource allocations. CUDA_VISIBLE_DEVICES was left unset, so each job had access to all GPUs; because no job needed 100% of a card, a queue in shared memory, shared among all the running jobs, managed time slots on one or the other of the M60 GPU cards.
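
That mechanism isn't reproduced here, and the real one lived in shared memory; as a rough, runnable approximation of the same idea using advisory file locks instead (the lock directory, GPU count, and polling interval below are all hypothetical), each job can do something like:

import fcntl
import os
import time
from contextlib import contextmanager

LOCK_DIR = "/dev/shm/gpu-locks"  # hypothetical lock directory on tmpfs
NUM_GPUS = 2                     # e.g. two M60 boards

@contextmanager
def gpu_slot(poll_interval=0.1):
    """Wait for a free time slot on any GPU, then yield its index.

    One advisory file lock guards each card; whichever job wins the
    flock() owns that card until the with-block exits.
    """
    os.makedirs(LOCK_DIR, exist_ok=True)
    handles = [open(os.path.join(LOCK_DIR, "gpu%d.lock" % i), "w")
               for i in range(NUM_GPUS)]
    try:
        while True:
            for i, fh in enumerate(handles):
                try:
                    fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
                except BlockingIOError:
                    continue              # card busy; try the next one
                try:
                    yield i               # caller does its GPU burst here
                finally:
                    fcntl.flock(fh, fcntl.LOCK_UN)
                return
            time.sleep(poll_interval)     # all cards busy; retry shortly
    finally:
        for fh in handles:
            fh.close()

# Each job grabs a slot only for the phase that actually needs the GPU,
# so many jobs can interleave their work across the cards:
with gpu_slot() as idx:
    print("running CUDA work on device %d" % idx)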

 

At one point, while we were still working on testing and approvals for the HTCondor upgrade, I backported the GPU-advertising pieces from a later HTCondor version. It turned out, though, that assigning one job per card, or even multiple jobs per card by tweaking the ClassAd, left performance on the table compared to the jobs' internal allocation mechanism spanning all available GPUs.

 

I reckon that with the astounding performance of the H100, this will become more of an issue for folks. The card's cost makes every idle cycle that much more expensive, and I suspect it is very challenging to write code that can push such a card to 100% utilization without the right kind of problem to solve.

 

Michael Pelletier

Principal Technologist

High Performance Computing

Classified Infrastructure Services

 

C: +1 339.293.9149
michael.v.pelletier@xxxxxxx

 

From: HTCondor-users <htcondor-users-bounces@xxxxxxxxxxx> On Behalf Of Vikrant Aggarwal
Sent: Tuesday, February 27, 2024 6:51 AM
To: HTCondor-Users Mail List <htcondor-users@xxxxxxxxxxx>
Subject: [External] Re: [HTCondor-users] Fractional GPU

 

Hello Benedikt and David,

 

Your comments are interesting. 

 

I have some queries about your responses:

 

For Benedikt:

 

Along with the H100s, which support MIG, we have recently introduced L40S cards, which don't support MIG but do have vGPU capabilities. I haven't had the chance to try out vGPU yet; in your experience, is condor_gpu_discovery able to detect each vGPU as a separate GPU?
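
For reference, running the discovery tool by hand on the node shows exactly what HTCondor will see. The output below is only illustrative of the general shape; it was not captured from an L40S:

condor_gpu_discovery -properties

DetectedGPUs="CUDA0, CUDA1"
CUDACapability=8.9
CUDADeviceName="NVIDIA L40S"
CUDAGlobalMemoryMb=46068

If each vGPU appears as its own entry in DetectedGPUs, HTCondor can schedule them as separate GPU resources.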

 

For David:

 

When you say we should control the number of processes, are we talking about the Multi-Process Service (MPS) described here: https://docs.nvidia.com/deploy/mps/index.html?
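
For context, MPS is typically enabled per node along these lines (a sketch based on the NVIDIA documentation linked above; the 25% cap is just an illustrative value):

# Start the MPS control daemon on the node (run as the GPU's owner)
nvidia-cuda-mps-control -d

# Optionally cap each client's share of the SMs (Volta and newer)
export CUDA_MPS_ACTIVE_THREAD_PERCENTAGE=25

# ... run the CUDA jobs; they share the GPU through the MPS server ...

# Shut the daemon down when finished
echo quit | nvidia-cuda-mps-control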

 


Thanks & Regards,

Vikrant Aggarwal

 

 

 

On Fri, Feb 23, 2024 at 10:05 PM Dudu Handelman <duduhandelman@xxxxxxxxxxx> wrote:

Hi Larry. 

I have done this before. How about not turning on the HTCondor GPU feature and using the machine as a normal server? For example, add a START statement that will only start jobs that specify +gpu=1. When a job starts it will have access to all GPUs, and you are then in charge of how many processes access each GPU.

Keep in mind that you must control this yourself.
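
A minimal sketch of that setup, assuming the custom attribute is named gpu as in the example above (it reaches the startd as TARGET.gpu):

# condor_config on the GPU node: GPU discovery stays off (no "use feature:GPUs");
# only admit jobs that have opted in. This replaces any existing START policy.
START = (TARGET.gpu =?= 1)

# in the submit file: opt in, and the job sees every GPU on the machine
+gpu = 1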

 

David

 

 

 


From: HTCondor-users <htcondor-users-bounces@xxxxxxxxxxx> on behalf of Larry Martell <larry.martell@xxxxxxxxx>
Sent: Friday, February 23, 2024 5:10:43 PM
To: HTCondor-Users Mail List <htcondor-users@xxxxxxxxxxx>
Subject: Re: [HTCondor-users] Fractional GPU

 

Thanks, but that is only supported on NVIDIA H100, A100, and A30
Tensor Core GPUs - we don't have any of those.

On Fri, Feb 23, 2024 at 9:57 AM Matthew T West via HTCondor-users
<htcondor-users@xxxxxxxxxxx> wrote:
>
> Hi Larry,
>
> Have you investigated NVIDIA's MIG
> https://www.nvidia.com/en-gb/technologies/multi-instance-gpu/?
>
> AFAIK, if you partition the cards at boot into sub-units, HTCondor's GPU
> discovery will pick up each of those as distinct entities on the compute
> node. Would you always want them divided into 1/4s or does this need to
> be dynamic partitioning?
>
> Cheers,
> Matt
>
> Matthew T. West
> DevOps & HPC SysAdmin
> University of Exeter, Research IT
> http://www.exeter.ac.uk/research/researchcomputing/support/researchit
> 57 Laver Building, North Park Road, Exeter, EX4 4QE, United Kingdom
>
> On 22/02/2024 22:45, Larry Martell wrote:
> >
> > Proceeding under the assumption that condor does not directly support
> > fractional GPUs, I am trying what I read here:
> > https://www-auth.cs.wisc.edu/lists/htcondor-users/2020-December/msg00018.shtml:
> >
> >> You can get HTCondor to do this just by having the same device show up more than once in the device enumeration.
> >> For instance, if you have two GPUs and your configuration is
> >> MACHINE_RESOURCE_GPUS = CUDA0, CUDA1
> >> You can run two jobs on each GPU by configuring
> >> MACHINE_RESOURCE_GPUS = CUDA0, CUDA1, CUDA0, CUDA1
> > I have 1 GPU and this is what I have in my config file:
> >
> > #use feature:GPUs
> > #GPU_DISCOVERY_EXTRA = -extra
> > MACHINE_RESOURCE_GPUs = CUDA0, CUDA0, CUDA0, CUDA0
> >
> > and this env setting: CUDA_VISIBLE_DEVICES="0"
> >
> > But when I run multiple jobs requesting a GPU they run serially, not
> > in parallel.
> >
> > Has anyone been able to get something like this working?
> >
> > On Thu, Feb 22, 2024 at 3:53 PM Larry Martell <larry.martell@xxxxxxxxx> wrote:
> >> Does condor support fractional GPUs? I am setting request_GPUs = 0.25
> >> and it is matching (I can see that with -better-analyze and in the
> >> StartLog) but the job never runs, it stays in idle state.


 

_______________________________________________
HTCondor-users mailing list
To unsubscribe, send a message to htcondor-users-request@xxxxxxxxxxx with a
subject: Unsubscribe
You can also unsubscribe by visiting
https://lists.cs.wisc.edu/mailman/listinfo/htcondor-users

The archives can be found at:
https://lists.cs.wisc.edu/archive/htcondor-users/