
Re: [HTCondor-users] Priority calculation: memory



Thanks Greg for this quick and precise answer; we probably won't take
the risk of adjusting that, then.

Actually, we wonder how things will behave with partitionable slots.
From what we understand (a sketch follows the list):
  - a default maximum amount of memory is allocated to the job if
nothing special is specified
  - if the job exceeds this memory, the job is aborted
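
To make the question concrete, here is roughly the setup we have in
mind. The knob names (SLOT_TYPE_1_PARTITIONABLE, NUM_SLOTS_TYPE_1,
JOB_DEFAULT_REQUESTMEMORY, SYSTEM_PERIODIC_REMOVE) are the ones from
the HTCondor manual, but the values are only examples on our part:

  # condor_config on the execute nodes: one partitionable slot
  # spanning the whole machine
  SLOT_TYPE_1 = 100%
  SLOT_TYPE_1_PARTITIONABLE = TRUE
  NUM_SLOTS_TYPE_1 = 1

  # condor_config on the schedd: memory (in MB) assigned to jobs
  # that do not set request_memory themselves (example value)
  JOB_DEFAULT_REQUESTMEMORY = 2048

As far as we can tell, the "aborted" part is not automatic but comes
from a pool policy; something like this schedd-side expression (again
an assumption on our part, not something we run) would do it:

  # remove jobs whose measured usage exceeds their request (both in MB)
  SYSTEM_PERIODIC_REMOVE = MemoryUsage > RequestMemory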

The cluster is composed of machines with very different
characteristics (from 8 GB / 8 cores up to 192 GB / 16 cores), so it's
not easy to set up a single default memory: that is roughly 1 GB per
core on the smallest machines versus 12 GB per core on the largest.

What we are afraid of is that users, tired of having their jobs
aborted, will simply always request a very large amount of memory.
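
One workaround we are considering, sketched below from the periodic_*
submit commands and the MemoryUsage/RequestMemory attributes in the
manual (the 1 GB floor, the 3x factor and the HoldReasonCode test are
assumptions on our part), is to let jobs start small and retry with a
larger request instead of over-requesting up front:

  # submit file sketch: 1024 MB on the first attempt, then three
  # times the memory usage observed on the previous attempt
  request_memory = ifThenElse(MemoryUsage =!= undefined, 3 * MemoryUsage, 1024)

  # put the job on hold when it exceeds its current request...
  periodic_hold = MemoryUsage > RequestMemory

  # ...and release it so it reruns with the new, larger request
  # (assumption: HoldReasonCode 3 means "held by periodic_hold")
  periodic_release = (JobStatus == 5) && (HoldReasonCode == 3)

We have not tested this, so we would welcome comments on it as well.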

Have we misunderstood something? Do you have any advice about this?

Cheers,
Mathieu

-- 
---------------------------------------------------------------------------------------
| Mathieu Bahin
| IE CNRS
|
| Institut de Biologie de l'Ecole Normale Supérieure (IBENS)
| Biocomp team
| 46 rue d'Ulm
| 75230 PARIS CEDEX 05
| 01.44.32.23.56
---------------------------------------------------------------------------------------