I'll add that not all of these vectors are used by all protocols. They are there to deal with various SLICC components -- different ones will populate different vectors.
For example, some protocols (the SMP ones) implement both the L1 and L2 caches in a single machine called L1Cache; these protocols use m_L1Cache_L2cacheMemory_vec. Other protocols have a separate state machine for the L2, called L2Cache; these use m_L2Cache_L2cacheMemory_vec. The names are derived from the machine name followed by the variable name of the cache memory object inside that machine.
It takes a while to figure out all the details of how SLICC and Ruby interact, but slogging through the code is the best way to do it.
...Greg
On Thu, May 12, 2011 at 11:43 AM, Venkatanathan Varadarajan <venkatv@xxxxxxxxxxx> wrote:
Even if the access to the L2 is always going to be a hit (by design
of whatever you are doing), it is not a good idea to use the
tryCacheAccess() method of the cache memory.
Reason: tryCacheAccess() returns a hit only if the line is present
in the cache at the instant the method is called. A miss from
tryCacheAccess() is not necessarily a miss in the L2 cache -- that
is just the way Ruby (the Sequencer) is implemented.
As far as I know, whatever needs to be sent to the L2 (any request)
needs to go through the L1 (through the SLICC interface). There are
workarounds, but they might require changes to the SLICC protocol
definition.
Overall, I think you might need to get your hands dirty by diving
into SLICC (I am not sure if there is another way of dealing with
this; correct me if I am wrong).
The declarations are the ones generated by the SLICC code. These
objects model the cache memories.
Regards,
Venkat
On 05/11/2011 09:33 PM, Binh Q. Pham wrote:
Hi Venkat,
Thanks for a fast response. Do you know what these declarations
mean?
// public data structures
Vector < CacheMemory<L1Cache_Entry>* > m_L1Cache_L1DcacheMemory_vec;
Vector < CacheMemory<L1Cache_Entry>* > m_L1Cache_L1IcacheMemory_vec;
Vector < CacheMemory<L1Cache_Entry>* > m_L1Cache_cacheMemory_vec;
Vector < CacheMemory<L1Cache_Entry>* > m_L1Cache_L2cacheMemory_vec;
Vector < CacheMemory<L1Cache_Entry>* > m_L2Cache_L2cacheMemory_vec;
About what I want to do: for a certain physical address, I don't want to use it to access L1 cache; instead I want to go to L2 cache directly.
Binh
On 05/11/2011 12:32 PM, Venkatanathan Varadarajan wrote:
The answer to your question depends on what you are going to
bypass (is it at the protocol level, or just direct access to
data present in the L2)?
The memory subsystem simulated by Ruby is very much tied to the
SLICC interface: starting from the L1, control is taken over by
the SLICC-generated code from system/Sequencer.
-Venkat
On 05/11/2011 11:07 AM, Binh Q. Pham wrote:
Hi,
In ruby/slicc_interface/AbstractChip.h, I saw the following declaration:
// public data structures
Vector < CacheMemory<L1Cache_Entry>* > m_L1Cache_L1DcacheMemory_vec;
Vector < CacheMemory<L1Cache_Entry>* > m_L1Cache_L1IcacheMemory_vec;
Vector < CacheMemory<L1Cache_Entry>* > m_L1Cache_cacheMemory_vec;
Vector < CacheMemory<L1Cache_Entry>* > m_L1Cache_L2cacheMemory_vec;
Vector < CacheMemory<L1Cache_Entry>* > m_L2Cache_L2cacheMemory_vec;
For the first two declarations, I guess they are the L1
data cache and L1 instruction cache respectively. However,
I don't understand the other three declarations. Does anyone
know what is going on here? My goal is to bypass the L1 cache
for a certain request and pass it directly to the L2 cache.
I am thinking I can call the function tryCacheAccess in
ruby/system/Sequencer.C on an L2 cache object to do this.
Am I on the right track here?