Nauman,
That is correct. Sorry about the logical inversion. :)
Yes, set the L2 hit time to be the SEQUENCER_TO_CONTROLLER_LATENCY. The
reason instruction fetches are an exception is that Opal assumes I-cache
hits are available in the current cycle. If you want a longer L1I
latency, a better way to add it is to add extra cycles to Opal's fetch
stage.
Yes, the split controllers of the CMP protocols are much more
straightforward. However, converting an SMP protocol to a CMP protocol
takes some non-trivial effort.
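Concretely, the two settings discussed in this thread would look roughly
like the fragment below in Ruby's config defaults. This is just a sketch:
the parameter names come from this thread, but the latency value is
illustrative, and the exact file and defaults may differ in your GEMS
version.

```
# Keep the single-cycle L1 D-cache fast path (i.e., do NOT remove it),
# so L1 data hits bypass the cache controller and take one cycle.
REMOVE_SINGLE_CYCLE_DCACHE_FAST_PATH: false

# L1 misses go on the mandatory queue and pay this latency, so set it
# to your intended L2 hit time in cycles (10 here is only an example).
SEQUENCER_TO_CONTROLLER_LATENCY: 10
```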
Brad
On Mon, 13 Jun 2005, Nauman Rafique wrote:
> Brad,
> I guess you meant REMOVE_SINGLE_CYCLE_DCACHE_FAST_PATH should be false,
> because we don't want to remove it if we want L1 hits to be faster.
> So I would have to make my L1 small enough to have a hit time of 1 cycle;
> and make SEQUENCER_TO_CONTROLLER_LATENCY equal to L2 hit time, right? Any
> reasons for making an exception for instruction fetches?
> Can we have two separate controllers for L1 and L2, as is done in the CMP
> protocols? I guess that would be a cleaner approach?
> Thanks for your help here.
> Nauman
>
> ----- Original Message -----
> From: "Bradford Beckmann" <beckmann@xxxxxxxxxxx>
> To: "Gems Users" <gems-users@xxxxxxxxxxx>
> Sent: Monday, June 13, 2005 6:41 PM
> Subject: Re: [Gems-users] MOESI_SMP_directory problem
>
>
> >
> > Nauman,
> >
> > Basically that is correct for that particular protocol, for data
> > requests only. The first transition made by the cache controller would be
> > to transfer an L2 block to the L1 D-cache. Then the cache controller
> > could make a second transition in the same cycle that is the L1 hit for
> > that cache block. The implicit assumption for this protocol is that the
> > flag REMOVE_SINGLE_CYCLE_DCACHE_FAST_PATH is set to true. Therefore, L1
> > cache hits will bypass the L1 cache controller and take only a single
> > cycle, while L2 cache hits will be placed on the mandatory queue and thus
> > incur the SEQUENCER_TO_CONTROLLER_LATENCY on that MessageBuffer.
> >
> > So if (REMOVE_SINGLE_CYCLE_DCACHE_FAST_PATH == true), L1 hits will
> > see a lower latency than L1-miss/L2-hits.
> >
> > Brad
> >
> >
> >
> > On Mon, 13 Jun 2005, Nauman Rafique wrote:
> >
> > > I looked at this protocol, and it looks like an L1 hit and an L1
> > > miss/L2 hit would experience the same latency. What am I missing?
> > >
> > > Thanks.
> > > --
> > > Nauman
> > >
> > >
> > > _______________________________________________
> > > Gems-users mailing list
> > > Gems-users@xxxxxxxxxxx
> > > https://lists.cs.wisc.edu/mailman/listinfo/gems-users
> > >
-----------------------------------------------------------------
Department of Computer Science Residence
University of Wisconsin
1210 W. Dayton St. #6366 918 E. Dayton St. #4
Madison, WI 53706 Madison, WI 53703
(608)265-2702 (608)442-0187
-----------------------------------------------------------------