It’s my understanding that on each negotiation cycle, the negotiator goes through the queued jobs and decides, for each one, whether to match it to a slot and, if so, which slot.
I use the submit-file directive priority = <n> to control which jobs run first, but I also have two groups of jobs with disjoint IP-address requirements, dividing my jobs between two subsets of the Condor pool.
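For concreteness, here is roughly what I mean (a hypothetical sketch, not my actual submit files; the subnet values and the regexp-on-MyAddress approach are just illustrative, since one could equally match on Machine names or a custom machine attribute):

```
# Group A: higher-priority jobs, restricted to machines on the
# (hypothetical) 10.0.1.x subnet via the machine's MyAddress attribute
priority     = 10
requirements = regexp("10\\.0\\.1\\.", TARGET.MyAddress)
queue

# Group B: lower-priority jobs, restricted to the disjoint 10.0.2.x subnet
priority     = 5
requirements = regexp("10\\.0\\.2\\.", TARGET.MyAddress)
queue
```

So within each group, priority orders the jobs, and the requirements expression confines them to one subset of the pool.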
My question is about the order and “completeness” with which the negotiator examines my queued jobs.
I presume that it goes through every single one of them, regardless of how many it has already matched to slots along the way.
I presume that it goes through them in priority order first, and FIFO within each priority level.
I presume the time for each pass scales linearly with the total number of queued jobs.
Can the negotiator get clobbered by a memory overrun due to a very large number of queued jobs?