Re: [HTCondor-users] question about accounting groups
- Date: Tue, 23 Mar 2021 16:46:03 +0100
- From: Jeff Templon <templon@xxxxxxxxx>
- Subject: Re: [HTCondor-users] question about accounting groups
Thanks for the hints! Do you understand why the Condor model seems to
be to handle all the accounting etc on the submit node? The Central
Manager is the thing that knows about the resources (and agrees to let
the Schedd contact it) so it should be the thing handling the
accounting, not the submit node. I guess another way to say it is, no
objection to letting the Schedd account something, but the definitive
accounting from a resource provider point of view should be with the
resource provider, not with the resource consumer.
It's like asking the customers to keep track of their account balance:
"please upload your balance to the bank at the end of the month".
If I think forward to a universe in which we have 5 ARC CEs, each
submitting to the same CM, then I have to collect accounting from 5
different submit nodes instead of the one central manager node. I must
be missing something.
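(For what it's worth, the negotiator on the central manager does keep accumulated usage per submitter and per accounting group, and that can be queried centrally with condor_userprio; the output is priority-oriented usage rather than a full accounting record, but it lives on the CM:)

```
condor_userprio -allusers -usage    # accumulated usage for all submitters
condor_userprio -grouporder         # arrange the output by accounting group
```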
On 23 Mar 2021, at 12:39, Stefano Dal Pra wrote:
On 23/03/21 12:04, Jeff Templon wrote:
I am looking into setting up accounting and plotting on our condor
setup. We've traditionally done this by unix groups; see this for how
the plots look now under the torque batch system. On the top plot,
right hand side, is a list of unix user groups and how much of the
system was used by each during the past 7 days, also giving the color
code legend for the plot, which is a stacked histogram of the number
of jobs running by each of those unix groups at each point in time.
Condor does not have, AFAICT, this concept of accounting by unix
groups - what I read is that the user needs to specify an accounting
group (completely unrelated to unix groups). How can I have this
automatically set to the unix group, except for cases where the user
explicitly sets one?
I went through steps similar to the ones you seem to be going through
now. With HTCondor our current solution is to define a text mapfile
filled with lines like this:
* <username> <group>,<group>
* pilatlas011 atlas,atlas
In our case "atlas" is the main gid for the pilatlas011 user.
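A sketch of how such a map file can be wired into the schedd, using the knob names from the HTCondor manual (the map name "AcctGroupMap" and the path are only examples):

```
# Register a ClassAd user map with the schedd; the file contains
# lines of the form:  * <username> <group>[,<group>...]
SCHEDD_CLASSAD_USER_MAP_NAMES = AcctGroupMap
CLASSAD_USER_MAPFILE_AcctGroupMap = /etc/condor/acctgroup.map
```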
Then, in the configuration of the SCHEDD (aka the Submit Node):
JOB_TRANSFORM_NAMES = $(JOB_TRANSFORM_NAMES) SetAccountingGroup
This should be all you need. Some characters have been added here
because of an unexpected behaviour of # in some particular cases.
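The body of the SetAccountingGroup transform did not survive in this message; a minimal sketch in the HTCondor 8.9+ job transform syntax, assuming a schedd user map named "AcctGroupMap" (an illustrative name), might look like:

```
JOB_TRANSFORM_SetAccountingGroup @=end
   # Map the job owner to its group via the "AcctGroupMap" user map
   SET AcctGroup userMap("AcctGroupMap", Owner)
   SET AcctGroupUser Owner
   # AccountingGroup takes the form "<group>.<user>"
   SET AccountingGroup strcat(AcctGroup, ".", Owner)
@end
```

A quick check on running jobs would be something like `condor_q -af Owner AccountingGroup AcctGroup AcctGroupUser`.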
Having set that, running jobs should have the following ClassAd attributes set:
AccountingGroup = "atlas.atlasprd011"
AcctGroup = "atlas"
AcctGroupUser = "atlasprd011"
If a user defines their own AcctGroup in the submit file, it should be
moved to "RequestedAcctGroup", and AcctGroup should then be set by the
transform.
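That renaming can be expressed inside the same transform; a hedged sketch using the transform language's COPY and SET verbs (the map name is again illustrative):

```
   # Keep whatever group the user requested for the record,
   # then overwrite AcctGroup from the user map
   COPY AcctGroup RequestedAcctGroup
   SET  AcctGroup userMap("AcctGroupMap", Owner)
```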
With this in place you can configure fairshare; mine is set as follows:
In the Central Manager:
PRIORITY_HALFLIFE = 26000
# Accept surplus and regroup
GROUP_ACCEPT_SURPLUS = true
#GROUP_AUTOREGROUP = false
DEFAULT_PRIO_FACTOR = 100000.0
include ifexist : /usr/share/htc/prod/conf/htc_shares.conf
htc_shares.conf is generated by a script and contains:
GROUP_NAMES = \
    atlas, \
    alice, \
    belle, \
    ...
GROUP_QUOTA_DYNAMIC_belle = 0.041403
GROUP_QUOTA_DYNAMIC_cms = 0.154328
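For illustration, the full generated file has one quota line per group, with the fractions summing to 1.0 (the atlas and alice numbers below are invented for the example; belle and cms are the quoted values):

```
GROUP_NAMES = atlas, alice, belle, cms
# atlas and alice fractions are invented for this sketch;
# the four fractions sum to 1.000000
GROUP_QUOTA_DYNAMIC_atlas = 0.604269
GROUP_QUOTA_DYNAMIC_alice = 0.200000
GROUP_QUOTA_DYNAMIC_belle = 0.041403
GROUP_QUOTA_DYNAMIC_cms   = 0.154328
```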
The total sum yields 1.0, and these numbers are the "share" for each group.
Hope these notes and a bit of "Read That Fantastic Manual" help.
Also: we ultimately need to consider doing fair share on these same
unix groups - are the numbers going into the fair share calculations
the same set going into accounting? I would like to avoid setting up
parallel infrastructures for things that are identical.
Also: does the user have complete freedom to put any group they want?
I hope not; I would not want to have to police the system. Not all
groups have the same allocation here, and users are quite
opportunistic when they've found shortcuts to getting their jobs run.