Re: [Gems-users] cache stats


Date: Fri, 09 Nov 2007 11:36:09 +0100
From: Mladen Nikitovic <mladen@xxxxxx>
Subject: Re: [Gems-users] cache stats
Can anyone confirm this? Or has anyone done this previously?

I would be very thankful for any feedback.

Regards,
Mladen

Mladen Nikitovic wrote:
Hi,

I just want to verify my approach to capturing L1 and UL2 cache statistics on a per-processor basis. Below is my augmented makeRequest() from Sequencer.C. Assuming I have declared the counter vectors correctly (a sketch of the declarations is included below), would you say this captures the accesses and hits correctly? I don't know yet how to get the UL2 hits and misses, though...

I'm using the MSI_MOSI_CMP_directory protocol (GEMS 1.2).
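
For reference, here is roughly how I declared and sized the counter vectors. This is only a sketch: std::vector is used as a stand-in (GEMS' own Vector with setSize() should do the same job), and the initialization hook is hypothetical; the important part is that each vector has one slot per processor.

#include <stdint.h>
#include <vector>

// Per-processor counters referenced in makeRequest() below (sketch).
std::vector<uint64_t> perProcessorIL1Access;
std::vector<uint64_t> perProcessorIL1Hit;
std::vector<uint64_t> perProcessorDL1Access;
std::vector<uint64_t> perProcessorDL1Hit;
std::vector<uint64_t> perProcessorUL2Access;

// Hypothetical init hook: size every vector to the total processor count
// (e.g. RubyConfig::numberOfProcessors()) before the first request arrives.
void initPerProcessorCounters(int numProcs) {
  perProcessorIL1Access.assign(numProcs, 0);
  perProcessorIL1Hit.assign(numProcs, 0);
  perProcessorDL1Access.assign(numProcs, 0);
  perProcessorDL1Hit.assign(numProcs, 0);
  perProcessorUL2Access.assign(numProcs, 0);
}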

Regards,
Mladen

// Called by Driver (Simics or Tester).
void Sequencer::makeRequest(const CacheMsg& request) {
  int cpu = m_chip_ptr->getID() * RubyConfig::numberOfProcsPerChip() + m_version;
  assert(isReady(request));

  bool write = (request.getType() == CacheRequestType_ST) ||
    (request.getType() == CacheRequestType_ST_XACT) ||
    (request.getType() == CacheRequestType_LDX_XACT) ||
    (request.getType() == CacheRequestType_ATOMIC);

  if (TSO && (request.getPrefetch() == PrefetchBit_No) && write) {
    assert(m_chip_ptr->m_L1Cache_storeBuffer_vec[m_version]->isReady());
    m_chip_ptr->m_L1Cache_storeBuffer_vec[m_version]->insertStore(request);
    return;
  }

  bool hit = doRequest(request);

  if (request.getType() == CacheRequestType_IFETCH) {
    /* IL1 access */
    perProcessorIL1Access[cpu]++;
    if (hit)
      perProcessorIL1Hit[cpu]++;
  } else {
    /* DL1 access */
    perProcessorDL1Access[cpu]++;
    if (hit)
      perProcessorDL1Hit[cpu]++;
    else
      perProcessorUL2Access[cpu]++; /* DL1 miss counted as a UL2 access */
  }
}
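
And here is a rough sketch of how I intend to dump the counters at the end of the run. The function name and the place it gets called from are placeholders, not existing GEMS hooks.

#include <iostream>

// Sketch: print per-processor hit/access counts at the end of simulation.
// printPerProcessorCacheStats() is a placeholder name; in practice it would
// be called from wherever Ruby dumps its statistics.
void printPerProcessorCacheStats(int numProcs) {
  for (int cpu = 0; cpu < numProcs; cpu++) {
    std::cout << "CPU " << cpu
              << "  IL1: " << perProcessorIL1Hit[cpu] << "/"
              << perProcessorIL1Access[cpu] << " hits"
              << "  DL1: " << perProcessorDL1Hit[cpu] << "/"
              << perProcessorDL1Access[cpu] << " hits"
              << "  UL2 accesses (DL1 misses): " << perProcessorUL2Access[cpu]
              << std::endl;
  }
}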


