Hi! Thanks for the quick answer! Yes, that's more or less what we were intending to try with a "docker" wrapper. I think the environment variables are already passed through, so the only thing missing should be the device files themselves.
I don't know whether this should (or even could) be generalised to other kinds of machine resources, or to non-NVIDIA GPU cards, since I don't know how NVIDIA-specific this would be. On the one hand I like generic solutions, but on the other I cannot think of a proper generic way of dealing with this.
But in short, this workaround would be enough for our current and foreseeable needs :)
Best,
Joan

On 08/25/2016 10:18 PM, Greg Thain wrote:
On 08/25/2016 10:03 AM, Joan Piles wrote:

Hi,

We have been trying to experiment with the docker universe, and have found that when a GPU job is requested, the required CUDA device files are not passed through to the container as required [1].

Thank you for your interest in HTCondor and Docker Universe -- this is an interesting use case for us. It is correct that the Docker universe in HTCondor currently doesn't create GPU devices in the container. One quick workaround might be to have docker run map all of the GPU devices inside the container, and rely on the HTCondor fungible custom resources code to set the appropriate NVIDIA environment variable for the selected GPU(s). Would this work for your needs?

-Greg
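The workaround Greg suggests could be sketched as a small wrapper around docker run, roughly like this. The device paths, image name, and job command below are my own placeholders, not from the thread; actual device numbering depends on the host, and you should verify against your HTCondor version which environment variable it sets for the assigned GPUs (commonly CUDA_VISIBLE_DEVICES).

```shell
# Sketch of a wrapper that maps the NVIDIA device files into the container
# and forwards the GPU selection variable set by HTCondor.
# NOTE: device list, image name, and command are illustrative placeholders.
DEVICE_ARGS=""
for dev in /dev/nvidiactl /dev/nvidia-uvm /dev/nvidia0 /dev/nvidia1; do
    DEVICE_ARGS="$DEVICE_ARGS --device=$dev"
done

# Print (rather than run) the resulting docker invocation, so the sketch
# can be inspected on a machine without docker or GPUs:
echo docker run --rm $DEVICE_ARGS \
    -e CUDA_VISIBLE_DEVICES="$CUDA_VISIBLE_DEVICES" \
    my-cuda-image ./my_gpu_job
```

Mapping all devices and then restricting visibility via the environment variable is coarser than creating only the assigned device nodes, but it avoids any per-slot container plumbing.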
--
Dr. Joan Piles
ZWE Scientific Computing
Max Planck Institute for Intelligent Systems
(p) +49 7071 601 1750