
Re: [HTCondor-users] CondorCE: job transform for normalizing jobs' core/mem ratio?



Hi again,

I just stumbled over a (CMS) job that looks somewhat odd [1] regarding
its requirements.
For one, the memory requirement does not seem to be really limited:
RequestMemory is derived from MemoryUsage, i.e., the a priori limit
ends up depending on the job's later actual memory usage?
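(Concretely, with the numbers from the [condorce] ad appended below:
ResidentSetSize = 32500 KiB gives MemoryUsage = (32500 + 1023) / 1024 =
32 MB in ClassAd integer arithmetic, so RequestMemory evaluates to 32 MB
once the job reports usage; before that, MemoryUsage is undefined and
the (ImageSize + 1023) / 1024 fallback applies.)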

For the core requirements, I wonder why the values change between the
CondorCE view and the Condor view of the same job (especially since
condor_ce_history is just a wrapper around condor_history). I guess
there is some transformation happening somewhere here, or?

In the [condorce] view the job comes with a CPU request of 1, but the
[condor] view of the same job has morphed to 8 cores AFAIS? Glidein??
At the moment I do not see how RequestCpus_ce = 1 becomes
OriginalCpus = 8 (which gets fed into RequestCpus_batch).

tbh I would prefer to strip off such dynamic behaviour in favour of a
one-to-one matching of resources, e.g., with a route like the sketch
below.
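
For illustration, such a one-to-one route could be sketched in the
ClassAd-style JOB_ROUTER_ENTRIES syntax roughly as follows; the route
name is taken from the ad below, and the eval_set_* lines are an
untested assumption, not our production route:

  JOB_ROUTER_ENTRIES @=jre
  [
    name = "DESY-HH-Condor-Grid";
    TargetUniverse = 5;
    # hypothetical: pin the batch-side request to the literal values
    # the job arrived with at the CE, instead of the factory-supplied
    # WantWholeNode/OriginalCpus expressions
    eval_set_RequestCpus = RequestCpus;
    eval_set_RequestMemory = RequestMemory;
  ]
  @jre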

Cheers,
  Thomas


[1]
  RequestMemory = ifthenelse(MemoryUsage =!= undefined, MemoryUsage, (ImageSize + 1023) / 1024)
  RequestCpus = 1
  RequestDisk = DiskUsage
  MemoryUsage = ((ResidentSetSize + 1023) / 1024)

[condorce]
> grep Cpus
RequestCpus = 1

[condor]
> grep Cpus
CpusProvisioned = 8
GlideinCpusIsGood = !isUndefined(MATCH_EXP_JOB_GLIDEIN_Cpus) && (int(MATCH_EXP_JOB_GLIDEIN_Cpus) =!= error)
JOB_GLIDEIN_Cpus = "$$(ifThenElse(WantWholeNode is true, !isUndefined(TotalCpus) ? TotalCpus : JobCpus, OriginalCpus))"
JobCpus = JobIsRunning ? int(MATCH_EXP_JOB_GLIDEIN_Cpus) : OriginalCpus
JobIsRunning = (JobStatus =!= 1) && (JobStatus =!= 5) && GlideinCpusIsGood
OriginalCpus = 8
RequestCpus = ifThenElse(WantWholeNode =?= true, !isUndefined(TotalCpus) ? TotalCpus : JobCpus, OriginalCpus)
orig_RequestCpus = 1


On 04/08/2020 02.44, Antonio Perez-Calero Yzquierdo wrote:
> Hi Thomas,
> 
> See my comment below:
> 
> On Mon, Aug 3, 2020 at 10:50 AM Thomas Hartmann <thomas.hartmann@xxxxxxx> wrote:
> 
>     Hi Brian,
> 
>     yes, from the technical view you are absolutely right.
> 
>     My worries just go into the 'political direction' ;)
> 
>     So far, if a VO wants to run highmem jobs, i.e., core/mem < 1 core/2 GB,
>     they have to scale by cores.
>     With cores and memory decoupled, I worry that we could become more
>     attractive for VOs to run their highmem jobs - and in the end we
>     starve there and have cores idling that are not accounted for (and
>     cause discussions later on...)
>     Probably the primary 'issue' is that, AFAIS, cores are somewhat the
>     base currency - in the end the 'relevant' pie charts are just about
>     the delivered core-scaled walltime :-/
> 
> We have discussed in CMS several times the option of updating the
> "currency", as you named it, from CPU cores to the number of "unit cells"
> occupied by each job, where each "cell" is a multidimensional unit, e.g.
> in 2D, CPU x memory, the unit cell being 1 CPU core x 2 GB. So each user
> would be charged on the basis of the max between the number of CPU cores
> and the number of 2 GB quanta employed. In condor terms (correct me if
> I'm wrong), that is managed by the slot weight, which can take such an
> expression as its formula.
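
In condor config terms that could be sketched as below; a minimal,
untested example assuming a 1 core x 2048 MB cell (SLOT_WEIGHT is the
real startd knob, the expression itself is illustrative):

  # charge by whichever dimension the slot consumes more of:
  # whole cores, or 2 GB memory cells
  SLOT_WEIGHT = ifThenElse(Cpus > (Memory / 2048), Cpus, (Memory / 2048))

With such a weight, a 1 core / 8192 MB slot would accrue usage (and thus
burn fairshare) like a 4-core one.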
> 
> In fact, what we had in mind was somehow charging the "extra cost" to
> the user requesting more memory, to discourage such requests (= CPU is
> consumed faster => lower priority), but still keep the CPU cores
> available for potential matchmaking, as Brian explained, to improve the
> overall utilization of the resources.
> 
> Despite discussions, we have not (yet) taken the steps to put this into
> effect, as in the end the cases where jobs do require more than the
> standard memory/core are generally marginal. If they became more
> frequent, we'd look into this possibility.
> 
> I somehow feel the political side of things as you described it would
> still be complicated ;-)
> 
> Cheers,
> Antonio.
> 
> 
>     Cheers,
>      Thomas
> 
>     On 31/07/2020 20.58, Bockelman, Brian wrote:
>     > Hi Thomas,
>     >
>     > We do not normalize incoming requirements.
>     >
>     > In your example, I'm not sure if I'm following the benefit. You
>     > are suggesting changing:
>     >
>     > 1 core / 8 GB -> 4 cores / 8 GB
>     >
>     > Right? To me, in that case, you now have 3 idle cores inside the
>     > job - guaranteed to not be used - rather than 3 idle cores in
>     > condor which possibly are not used unless another VO comes in
>     > with odd requirements.
>     >
>     > Now, some sites *do* charge for jobs according to both memory
>     > and CPU. So, in your case of 1 core / 2 GB being nominal, they
>     > would charge the user's fairshare for 4 units if the user
>     > submitted a 1 core / 8 GB job.
>     >
>     > Or am I looking at this from the wrong direction?
>     >
>     > Brian
>     >
>     >> On Jul 31, 2020, at 5:02 AM, Thomas Hartmann <thomas.hartmann@xxxxxxx> wrote:
>     >>
>     >> Hi all,
>     >>
>     >> on your CondorCEs, do you normalize incoming jobs for their
>     >> core/memory requirements?
>     >>
>     >> Thing is that we normally assume a ratio of ~1 core/2 GB memory.
>     >> Now let's say a user/VO submits jobs with a skewed ratio like
>     >> 1 core/8 GB, which would probably lead to draining for memory and
>     >> leave a few cores idle.
>     >> So I had been thinking if it might make sense to rescale a job's
>     >> core or memory requirements in a transform to get the job close
>     >> to the implicitly assumed core/mem ratio (see the sketch below).
>     >>
>     >> Does that make sense?
>     >>
>     >> Cheers,
>     >>  Thomas
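
Such a rescaling could be done with a single route command; a minimal,
untested sketch (the ~1 core/2 GB target ratio, i.e. 2048 MB per core,
and the round-up rule are assumptions):

  # hypothetical route command: round the core request up so that
  # cores >= memory / 2048 MB, restoring the assumed 1 core / 2 GB ratio
  eval_set_RequestCpus = ifThenElse(RequestMemory > RequestCpus * 2048,
                                    (RequestMemory + 2047) / 2048,
                                    RequestCpus);

A 1 core / 8192 MB job would then be routed with (8192 + 2047) / 2048 = 4
cores, which lines up with the 4 units Brian mentions above.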
> 
> 
> 
> -- 
> Antonio Perez-Calero Yzquierdo, PhD
> CIEMAT & Port d'Informació Científica, PIC.
> Campus Universitat Autonoma de Barcelona, Edifici D, E-08193 Bellaterra,
> Barcelona, Spain.
> Phone: +34 93 170 27 21
> 
> 
> 
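[condor] view of the job, full ad:
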
Arguments = "-v std -name gfactory_instance -entry CMSHTPC_T2_DE_DESY_grid-htcondorce -clientname CMSG-ITB_gWMSFrontend-v1_0.main -schedd schedd_glideins3@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -proxy None -factory OSG-ITB -web http://gfactory-itb-1.opensciencegrid.org/factory/stage -sign ea58a40142f73a03c85770e1f7c713c8adb57032 -signentry a5b7da8a672416c157c49a4a4a454477e8af0df3 -signtype sha1 -descript description.k84dxv.cfg -descriptentry description.k84dxv.cfg -dir Condor -param_GLIDEIN_Client CMSG-ITB_gWMSFrontend-v1_0.main -submitcredid 747660 -slotslayout fixed -clientweb http://vocms0802.cern.ch/vofrontend/stage -clientsign aeb61f1d55cff6fc87f400ed33dae5cf4266efdf -clientsigntype sha1 -clientdescript description.k83knV.cfg -clientgroup main -clientwebgroup http://vocms0802.cern.ch/vofrontend/stage/group_main -clientsigngroup b824450cef615edc3dd9534804191c9fcfe14abe -clientdescriptgroup description.k83knV.cfg -param_CONDOR_VERSION 8.dot,9.dot,7 -param_GLIDEIN_Glexec_Use OPTIONAL -param_CMS_GLIDEIN_VERSION 37 -param_GLIDEIN_Job_Max_Time 14400 -param_CONSUMPTION_POLICY FALSE -param_GLIDEIN_CLAIM_WORKLIFE_DYNAMIC cpus.star,.open,6.star,3600.close, -param_USE_PSS True -param_MEMORY_USAGE_METRIC .open,.open,ProportionalSetSizeKb.nbsp,.question,.colon,.nbsp,ResidentSetSize.close,.nbsp,.plus,.nbsp,1023.close,.nbsp,/.nbsp,1024 -param_GLIDEIN_CCB vocms0816.dot,cern.dot,ch.colon,9618.question,sock.eq,collector9621.minus,9720 -param_GLIDEIN_Max_Idle 1200 -param_GLIDEIN_Monitoring_Enabled False -param_GLIDEIN_Report_Failed NEVER -param_CONDOR_OS auto -param_UPDATE_COLLECTOR_WITH_TCP True -param_MIN_DISK_GBS 1 -param_GLIDEIN_Resource_Slots Iotokens.comma,80.comma,.comma,type.eq,main -param_GLIDECLIENT_ReqNode gfactory.minus,itb.minus,1.dot,opensciencegrid.dot,org -param_USE_MATCH_AUTH True -param_CONDOR_ARCH default -param_GLIDEIN_Max_Tail 1200 -param_GLIDEIN_Collector vocms0809.dot,cern.dot,ch.colon,9618.question,sock.eq,collector9621.minus,9720 -cluster 234853 -subcluster 0"
BlockReadKbytes = 0
BlockReads = 0
BlockWriteKbytes = 1432
BlockWrites = 281
BytesRecvd = 69546.0
BytesSent = 131321.0
CERequirements = "Walltime,CondorCE"
CPUsUsage = 0.0
ClusterId = 462
Cmd = "glidein_startup.sh"
CmdHash = "CmdMD5-8c9a6cab9b22fe4dc93548aac0528874"
CommittedSlotTime = 1520.0
CommittedSuspensionTime = 0
CommittedTime = 1520
CompletionDate = 1596578467
CondorCE = 1
CpusProvisioned = 8
CumulativeRemoteSysCpu = 12.0
CumulativeRemoteUserCpu = 469.0
CumulativeSlotTime = 1520.0
CumulativeSuspensionTime = 0
CurrentHosts = 0
DiskProvisioned = 65466
DiskUsage = 250
DiskUsage_RAW = 231
EncryptExecuteDirectory = false
EnteredCurrentStatus = 1596578467
Environment = "CONDORCE_COLLECTOR_HOST=grid-htcondorce0.desy.de:9619 HOME=/var/home/cmsplt000"
Err = "_condor_stderr"
ExecutableSize = 75
ExecutableSize_RAW = 75
ExitBySignal = false
ExitCode = 0
ExitStatus = 0
GlideinClient = "CMSG-ITB_gWMSFrontend-v1_0.main"
GlideinCpusIsGood =  !isUndefined(MATCH_EXP_JOB_GLIDEIN_Cpus) && (int(MATCH_EXP_JOB_GLIDEIN_Cpus) =!= error)
GlideinCredentialIdentifier = "747660"
GlideinEntryName = "CMSHTPC_T2_DE_DESY_grid-htcondorce"
GlideinEntrySubmitFile = "entry_CMSHTPC_T2_DE_DESY_grid-htcondorce/job.condor"
GlideinFactory = "OSG-ITB"
GlideinFrontendName = "CMSG-ITB_gWMSFrontend-v1_0:cmspilot"
GlideinLogNr = "20200804"
GlideinMaxWalltime = 171000
GlideinName = "gfactory_instance"
GlideinSecurityClass = "cmspilot"
GlideinSlotsLayout = "fixed"
GlideinWebBase = "http://gfactory-itb-1.opensciencegrid.org/factory/stage"
GlideinWorkDir = "Condor"
GlobalJobId = "grid-htcondorce0.desy.de#462.0#1596576926"
IOWait = 0.0
ImageSize = 100000
ImageSize_RAW = 99572
In = "/dev/null"
Iwd = "/var/lib/condor-ce/spool/461/0/cluster461.proc0.subproc0"
JOB_GLIDEIN_Cpus = "$$(ifThenElse(WantWholeNode is true, !isUndefined(TotalCpus) ? TotalCpus : JobCpus, OriginalCpus))"
JOB_GLIDEIN_Memory = "$$(TotalMemory:0)"
JobCpus = JobIsRunning ? int(MATCH_EXP_JOB_GLIDEIN_Cpus) : OriginalCpus
JobCurrentFinishTransferInputDate = 1596576949
JobCurrentFinishTransferOutputDate = 1596578467
JobCurrentStartDate = 1596576947
JobCurrentStartExecutingDate = 1596576949
JobCurrentStartTransferInputDate = 1596576948
JobCurrentStartTransferOutputDate = 1596578467
JobFinishedHookDone = 1596578467
JobIsRunning = (JobStatus =!= 1) && (JobStatus =!= 5) && GlideinCpusIsGood
JobLeaseDuration = 2400
JobMemory = JobIsRunning ? int(MATCH_EXP_JOB_GLIDEIN_Memory) * 95 / 100 : OriginalMemory
JobNotification = 0
JobPrio = 0
JobRunCount = 1
JobStartDate = 1596576947
JobStatus = 4
JobUniverse = 5
KillSig = "SIGTERM"
LastHoldReason = "Spooling input data files"
LastHoldReasonCode = 16
LastJobLeaseRenewal = 1596578467
LastJobStatus = 2
LastMatchTime = 1596576947
LastPublicClaimId = "<131.169.71.128:9620?addrs=131.169.71.128-9620+[2001-638-700-1047--1-80]-9620&alias=wn10-test.desy.de&noUDP&sock=startd_1858_030e>#1596200021#862#..."
LastRemoteHost = "slot1@xxxxxxxxxxxxxxxxx"
LastSuspensionTime = 0
LeaveJobInQueue = false
LocalSysCpu = 0.0
LocalUserCpu = 0.0
MATCH_EXP_JOB_GLIDEIN_Memory = "48124"
MATCH_TotalMemory = 48124
MachineAttrApelScaling0 = 145
MaxHosts = 1
MemoryProvisioned = 20096
MemoryUsage = ((ResidentSetSize + 1023) / 1024)
MinHosts = 1
MyType = "Job"
NiceUser = false
NumCkpts = 0
NumCkpts_RAW = 0
NumJobCompletions = 1
NumJobMatches = 1
NumJobStarts = 1
NumRestarts = 0
NumShadowStarts = 1
NumSystemHolds = 0
OnExitHold = ifThenElse(orig_OnExitHold =!= undefined,orig_OnExitHold,false) || ifThenElse(minWalltime =!= undefined && RemoteWallClockTime =!= undefined,RemoteWallClockTime < 60 * minWallTime,false)
OnExitHoldReason = ifThenElse((orig_OnExitHold =!= undefined) && orig_OnExitHold,ifThenElse(orig_OnExitHoldReason =!= undefined,orig_OnExitHoldReason,strcat("The on_exit_hold expression (",unparse(orig_OnExitHold),") evaluated to TRUE.")),ifThenElse(minWalltime =!= undefined && RemoteWallClockTime =!= undefined && (RemoteWallClockTime < 60 * minWallTime),strcat("The job's wall clock time, ",int(RemoteWallClockTime / 60),"min, is less than the minimum specified by the job (",minWalltime,")"),"Job held for unknown reason."))
OnExitHoldSubCode = ifThenElse((orig_OnExitHold =!= undefined) && orig_OnExitHold,ifThenElse(orig_OnExitHoldSubCode =!= undefined,orig_OnExitHoldSubCode,1),42)
OrigMaxHosts = 1
OriginalCpus = 8
OriginalMemory = 20000
Out = "_condor_stdout"
Owner = "cmsplt000"
ProcId = 0
QDate = 1596576926
Rank = 0.0
RecentBlockReadKbytes = 0
RecentBlockReads = 0
RecentBlockWriteKbytes = 328
RecentBlockWrites = 22
RecentStatsLifetimeStarter = 1200
ReleaseReason = "Data files spooled"
RemoteSysCpu = 12.0
RemoteUserCpu = 469.0
RemoteWallClockTime = 1520.0
Remote_JobUniverse = 5
RequestCpus = ifThenElse(WantWholeNode =?= true, !isUndefined(TotalCpus) ? TotalCpus : JobCpus,OriginalCpus)
RequestDisk = DiskUsage
RequestMemory = ifThenElse(WantWholeNode =?= true, !isUndefined(TotalMemory) ? TotalMemory * 95 / 100 : JobMemory,OriginalMemory)
Requirements = true
ResidentSetSize = 100000
ResidentSetSize_RAW = 99572
RootDir = "/"
RouteName = "DESY-HH-Condor-Grid"
RoutedBy = "htcondor-ce"
RoutedFromJobId = "461.0"
RoutedJob = true
SUBMIT_Cmd = "/var/lib/gwms-factory/work-dir/glidein_startup.sh"
SUBMIT_TransferOutputRemaps = "_condor_stdout=/var/log/gwms-factory/client/user_fecmsglobalitb/glidein_gfactory_instance/entry_CMSHTPC_T2_DE_DESY_grid-htcondorce/job.234853.0.out;_condor_stderr=/var/log/gwms-factory/client/user_fecmsglobalitb/glidein_gfactory_instance/entry_CMSHTPC_T2_DE_DESY_grid-htcondorce/job.234853.0.err;"
SUBMIT_x509userproxy = "/var/lib/gwms-factory/client-proxies/user_fecmsglobalitb/glidein_gfactory_instance/credential_CMSG-ITB_gWMSFrontend-v1_0.main_747660"
ScratchDirFileCount = 1536
ShouldTransferFiles = "IF_NEEDED"
SpooledOutputFiles = ""
StartdPrincipal = "execute-side@matchsession/131.169.71.128"
StatsLifetimeStarter = 1518
StreamErr = false
StreamOut = false
SubmitterGlobalJobId = "schedd_glideins3@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx#234853.0#1596576899"
SubmitterId = "schedd_glideins3@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
TargetType = "Machine"
TerminationPending = true
ToE = [ Who = "itself"; How = "OF_ITS_OWN_ACCORD"; HowCode = 0; When = 1596578467 ]
TotalSubmitProcs = 1
TotalSuspensions = 0
TransferIn = false
TransferInFinished = 1596576949
TransferInStarted = 1596576948
TransferInputSizeMB = 0
TransferOutFinished = 1596578467
TransferOutStarted = 1596578467
TransferOutput = ""
TransferOutputRemaps = undefined
User = "cmsplt000@xxxxxxx"
WallTime = ifThenElse(maxWallTime =!= undefined,60 * maxWallTime,ifThenElse(default_maxWallTime =!= undefined,60 * default_maxWallTime,60 * 4320))
WantCheckpoint = false
WantRemoteIO = true
WantRemoteSyscalls = false
WhenToTransferOutput = "ON_EXIT"
fename = "fecmsglobalitb"
maxMemory = 20000
maxWallTime = 2880
orig_RequestCpus = 1
orig_environment = ""
osg_environment = ""
remote_NodeNumber = 8
remote_OriginalMemory = 20000
remote_SMPGranularity = 8
remote_queue = ""
x509UserProxyExpiration = 1596808807
x509UserProxyFQAN = "/DC=ch/DC=cern/OU=computers/CN=cmspilot04/vocms080.cern.ch,/cms/Role=pilot/Capability=NULL,/cms/Role=NULL/Capability=NULL,/cms/dcms/Role=NULL/Capability=NULL,/cms/escms/Role=NULL/Capability=NULL,/cms/itcms/Role=NULL/Capability=NULL,/cms/local/Role=NULL/Capability=NULL,/cms/uscms/Role=NULL/Capability=NULL"
x509UserProxyFirstFQAN = "/cms/Role=pilot/Capability=NULL"
x509UserProxyVOName = "cms"
x509userproxy = "credential_CMSG-ITB_gWMSFrontend-v1_0.main_747660"
x509userproxysubject = "/DC=ch/DC=cern/OU=computers/CN=cmspilot04/vocms080.cern.ch"
xcount = 8
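
[condorce] view of the job, full ad: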

Arguments = "-v std -name gfactory_instance -entry CMSHTPC_T2_DE_DESY_grid-htcondorce -clientname CMSG-ITB_gWMSFrontend-v1_0.main -schedd schedd_glideins3@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -proxy None -factory OSG-ITB -web http://gfactory-itb-1.opensciencegrid.org/factory/stage -sign ea58a40142f73a03c85770e1f7c713c8adb57032 -signentry a5b7da8a672416c157c49a4a4a454477e8af0df3 -signtype sha1 -descript description.k84dxv.cfg -descriptentry description.k84dxv.cfg -dir Condor -param_GLIDEIN_Client CMSG-ITB_gWMSFrontend-v1_0.main -submitcredid 747660 -slotslayout fixed -clientweb http://vocms0802.cern.ch/vofrontend/stage -clientsign aeb61f1d55cff6fc87f400ed33dae5cf4266efdf -clientsigntype sha1 -clientdescript description.k83knV.cfg -clientgroup main -clientwebgroup http://vocms0802.cern.ch/vofrontend/stage/group_main -clientsigngroup b824450cef615edc3dd9534804191c9fcfe14abe -clientdescriptgroup description.k83knV.cfg -param_CONDOR_VERSION 8.dot,9.dot,7 -param_GLIDEIN_Glexec_Use OPTIONAL -param_CMS_GLIDEIN_VERSION 37 -param_GLIDEIN_Job_Max_Time 14400 -param_CONSUMPTION_POLICY FALSE -param_GLIDEIN_CLAIM_WORKLIFE_DYNAMIC cpus.star,.open,6.star,3600.close, -param_USE_PSS True -param_MEMORY_USAGE_METRIC .open,.open,ProportionalSetSizeKb.nbsp,.question,.colon,.nbsp,ResidentSetSize.close,.nbsp,.plus,.nbsp,1023.close,.nbsp,/.nbsp,1024 -param_GLIDEIN_CCB vocms0816.dot,cern.dot,ch.colon,9618.question,sock.eq,collector9621.minus,9720 -param_GLIDEIN_Max_Idle 1200 -param_GLIDEIN_Monitoring_Enabled False -param_GLIDEIN_Report_Failed NEVER -param_CONDOR_OS auto -param_UPDATE_COLLECTOR_WITH_TCP True -param_MIN_DISK_GBS 1 -param_GLIDEIN_Resource_Slots Iotokens.comma,80.comma,.comma,type.eq,main -param_GLIDECLIENT_ReqNode gfactory.minus,itb.minus,1.dot,opensciencegrid.dot,org -param_USE_MATCH_AUTH True -param_CONDOR_ARCH default -param_GLIDEIN_Max_Tail 1200 -param_GLIDEIN_Collector vocms0809.dot,cern.dot,ch.colon,9618.question,sock.eq,collector9621.minus,9720 -cluster 234854 -subcluster 0"
BufferBlockSize = 32768
BufferSize = 524288
BytesRecvd = 69546.0
BytesSent = 131176.0
ClusterId = 462
Cmd = "glidein_startup.sh"
CmdHash = "CmdMD5-8c9a6cab9b22fe4dc93548aac0528874"
CommittedSlotTime = 0
CommittedSuspensionTime = 0
CommittedTime = 0
CompletionDate = 1596579879
CoreSize = 0
CumulativeRemoteSysCpu = 0.0
CumulativeRemoteUserCpu = 0.0
CumulativeSlotTime = 0
CumulativeSuspensionTime = 0
CurrentHosts = 0
DiskUsage = 250
DiskUsage_RAW = 250
EncryptExecuteDirectory = false
EnteredCurrentStatus = 1596578418
Environment = ""
Err = "_condor_stderr"
ExecutableSize = 75
ExecutableSize_RAW = 75
ExitBySignal = false
ExitCode = 0
ExitStatus = 0
fename = "fecmsglobalitb"
GlideinClient = "CMSG-ITB_gWMSFrontend-v1_0.main"
GlideinCredentialIdentifier = "747660"
GlideinEntryName = "CMSHTPC_T2_DE_DESY_grid-htcondorce"
GlideinEntrySubmitFile = "entry_CMSHTPC_T2_DE_DESY_grid-htcondorce/job.condor"
GlideinFactory = "OSG-ITB"
GlideinFrontendName = "CMSG-ITB_gWMSFrontend-v1_0:cmspilot"
GlideinLogNr = "20200804"
GlideinMaxWalltime = 171000
GlideinName = "gfactory_instance"
GlideinSecurityClass = "cmspilot"
GlideinSlotsLayout = "fixed"
GlideinWebBase = "http://gfactory-itb-1.opensciencegrid.org/factory/stage"
GlideinWorkDir = "Condor"
GlobalJobId = "grid-htcondorce0.desy.de#462.0#1596577032"
ImageSize = 32500
ImageSize_RAW = 32500
In = "/dev/null"
Iwd = "/var/lib/condor-ce/spool/462/0/cluster462.proc0.subproc0"
JobCurrentStartDate = 1596578413
JobCurrentStartExecutingDate = 1596578414
JobFinishedHookDone = 1596579898
JobLeaseDuration = 2400
JobNotification = 0
JobPrio = 0
JobRunCount = 1
JobStartDate = 1596578413
JobStatus = 4
JobUniverse = 5
KillSig = "SIGTERM"
LastHoldReasonCode = 16
LastHoldReason = "Spooling input data files"
LastJobStatus = 2
LastSuspensionTime = 0
LeaveJobInQueue = false
LocalSysCpu = 0.0
LocalUserCpu = 0.0
ManagedManager = ""
Managed = "ScheddDone"
MaxHosts = 1
maxMemory = 20000
maxWallTime = 2880
MemoryUsage = ((ResidentSetSize + 1023) / 1024)
MinHosts = 1
MyType = "Job"
NiceUser = false
NumCkpts = 0
NumCkpts_RAW = 0
NumJobCompletions = 0
NumJobMatches = 1
NumJobStarts = 1
NumRestarts = 0
NumShadowStarts = 1
NumSystemHolds = 0
Out = "_condor_stdout"
Owner = "cmsplt000"
PeriodicRemove = (StageInFinish > 0) =!= true && time() > QDate + 28800
ProcId = 0
QDate = 1596577027
Rank = 0.0
ReleaseReason = "Data files spooled"
RemoteSysCpu = 12.0
RemoteUserCpu = 471.0
RemoteWallClockTime = 1466.0
RequestCpus = 1
RequestDisk = DiskUsage
RequestMemory = ifthenelse(MemoryUsage =!= undefined,MemoryUsage,(ImageSize + 1023) / 1024)
Requirements = true
ResidentSetSize = 32500
ResidentSetSize_RAW = 32500
RootDir = "/"
RoutedToJobId = "463.0"
ScratchDirFileCount = 1536
ShadowBday = 1596578413
ShouldTransferFiles = "IF_NEEDED"
SpooledOutputFiles = ""
StageInFinish = 1596577043
StageInStart = 1596577042
StageOutFinish = 1596579961
StageOutStart = 1596579959
StreamErr = false
StreamOut = false
SUBMIT_Cmd = "/var/lib/gwms-factory/work-dir/glidein_startup.sh"
SUBMIT_Iwd = "/var/lib/gwms-factory/work-dir"
SubmitterGlobalJobId = "schedd_glideins3@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx#234854.0#1596577020"
SubmitterId = "schedd_glideins3@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
SUBMIT_TransferOutputRemaps = "_condor_stdout=/var/log/gwms-factory/client/user_fecmsglobalitb/glidein_gfactory_instance/entry_CMSHTPC_T2_DE_DESY_grid-htcondorce/job.234854.0.out;_condor_stderr=/var/log/gwms-factory/client/user_fecmsglobalitb/glidein_gfactory_instance/entry_CMSHTPC_T2_DE_DESY_grid-htcondorce/job.234854.0.err;"
SUBMIT_x509userproxy = "/var/lib/gwms-factory/client-proxies/user_fecmsglobalitb/glidein_gfactory_instance/credential_CMSG-ITB_gWMSFrontend-v1_0.main_747660"
TargetType = "Machine"
TotalSubmitProcs = 1
TotalSuspensions = 0
TransferIn = false
TransferInputSizeMB = 0
TransferOutput = ""
TransferOutputRemaps = undefined
User = "cmsplt000@xxxxxxxxxxxxxxxxxx"
WantCheckpoint = false
WantRemoteIO = true
WantRemoteSyscalls = false
WhenToTransferOutput = "ON_EXIT"
x509userproxy = "credential_CMSG-ITB_gWMSFrontend-v1_0.main_747660"
x509UserProxyExpiration = 1596837606
x509UserProxyFirstFQAN = "/cms/Role=pilot/Capability=NULL"
x509UserProxyFQAN = "/DC=ch/DC=cern/OU=computers/CN=cmspilot04/vocms080.cern.ch,/cms/Role=pilot/Capability=NULL,/cms/Role=NULL/Capability=NULL,/cms/dcms/Role=NULL/Capability=NULL,/cms/escms/Role=NULL/Capability=NULL,/cms/itcms/Role=NULL/Capability=NULL,/cms/local/Role=NULL/Capability=NULL,/cms/uscms/Role=NULL/Capability=NULL"
x509userproxysubject = "/DC=ch/DC=cern/OU=computers/CN=cmspilot04/vocms080.cern.ch"
x509UserProxyVOName = "cms"
xcount = 8
