
Re: [Condor-users] Is there a way to create a slot per core (the goal is not to share the core cache between different jobs)



Unfortunately I'm not sure, Val; I just remember seeing a number of threads mentioning it recently. You may find it works on Linux but not on Windows, or something similar.

If you look back in the condor-users archive, you might be able to dig something up.


Tom
________________________________________
From: condor-users-bounces@xxxxxxxxxxx [condor-users-bounces@xxxxxxxxxxx] On Behalf Of Val Giner [valginer@xxxxxxxxxxxxxxxx]
Sent: 27 April 2012 17:20
To: Condor-Users Mail List
Subject: Re: [Condor-users] Is there a way to create a slot per core (the goal is not to share the core cache between different jobs)

Tom,

Thank you, I'll try that.

BTW, in case it is known: when you say "talk ... about ... not working
correctly", was that about particular versions? We have a few different
versions running on more than one pool.

Thanks,
Val

On 4/27/2012 12:03 PM, Thomas Luff wrote:
> Hi Val,
>
> Try setting COUNT_HYPERTHREAD_CPUS=False in your config file; this should tell Condor not to create job slots for virtual (hyper-threaded) cores.
>
> There has been a bit of talk recently about it not working correctly though, so you might need to manually override it by setting NUM_CPUS = 4 (or however many slots you require).
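>
> For reference, the relevant condor_config lines would look roughly like this (untested on my side, and the "4" is only an example; use the number of physical cores in the machine):
>
>     # Do not count hyper-threaded (logical) cores when detecting CPUs,
>     # so that one slot is created per physical core.
>     COUNT_HYPERTHREAD_CPUS = False
>
>     # Fallback if the automatic detection still counts logical cores:
>     # override the CPU count by hand (here, a machine with 4 physical cores).
>     NUM_CPUS = 4
>
> If I remember rightly, the startd needs a full restart (not just a reconfig) before a change to NUM_CPUS shows up in the slot count.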
>
> Tom
> ________________________________________
> From: condor-users-bounces@xxxxxxxxxxx [condor-users-bounces@xxxxxxxxxxx] On Behalf Of Val Giner [valginer@xxxxxxxxxxxxxxxx]
> Sent: 27 April 2012 16:45
> To: condor-users@xxxxxxxxxxx
> Subject: [Condor-users] Is there a way to create a slot per core (the goal is not to share the core cache between different jobs)
>
> Hello everyone,
>
> Does anyone know whether there is a way to configure Condor on
> multi-core Linux servers so that only one slot exists per core?
>
> My goal is to make sure that core caches are not shared between
> different jobs.
>
> E.g., if there are two hardware threads (hyper-threads) per core and
> there is a slot per hardware thread, the two slots could be allocated
> to two different jobs, which means those jobs will share the same cache.
>
> If I am making a mistake somewhere, please correct me.
>
> --
> Thanks,
> Val
>


--
Thanks,
Val

_______________________________________________
Condor-users mailing list
To unsubscribe, send a message to condor-users-request@xxxxxxxxxxx with a
subject: Unsubscribe
You can also unsubscribe by visiting
https://lists.cs.wisc.edu/mailman/listinfo/condor-users

The archives can be found at:
https://lists.cs.wisc.edu/archive/condor-users/


-- IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium.  Thank you.