Nauman,
Basically that is correct for that particular protocol, but only for data
requests. The first transition made by the cache controller transfers the
block from the L2 to the L1 D cache. The cache controller can then make a
second transition in the same cycle, which is the L1 hit for that cache
block. The implicit assumption for this protocol is that the flag
REMOVE_SINGLE_CYCLE_DCACHE_FAST_PATH is set to false. L1 cache
hits then bypass the L1 cache controller and take only a single cycle,
while L2 cache hits are placed on the mandatory queue and thus
incur the SEQUENCER_TO_CONTROLLER_LATENCY on that MessageBuffer.
So when (REMOVE_SINGLE_CYCLE_DCACHE_FAST_PATH == false), an L1 hit
sees a lower latency than an L1 miss-L2 hit.
Brad
On Mon, 13 Jun 2005, Nauman Rafique wrote:
> I looked at this protocol, and it looks like an L1 hit and an L1 miss-L2 hit
> would experience the same latency. What am I missing?
>
> Thanks.
> --
> Nauman
>
>
> _______________________________________________
> Gems-users mailing list
> Gems-users@xxxxxxxxxxx
> https://lists.cs.wisc.edu/mailman/listinfo/gems-users
>