Re: [Gems-users] Low IPC in Opal


Date: Mon, 24 Nov 2008 17:49:29 +0100
From: "Francesco Panicucci" <panasa1@xxxxxxxxx>
Subject: Re: [Gems-users] Low IPC in Opal
On Mon, Nov 24, 2008 at 4:38 PM, Daniel Sánchez Pedreño
<sanatox@xxxxxxxxx> wrote:
> Dan,
>
> in order to rule out memory as the cause of the low IPC, I have configured the
> latencies to 1 cycle for all accesses, hits and misses alike. Still, the IPC
> remains extremely low.
>
> There are a few things in the code which I have changed to increase
> performance:
>
> RETIREMENT_CACHE_ACCESS = 0. If this flag is activated, every memory access
> performs two separate requests to Ruby (with the additional overhead).
>
> In pseq.C, in the function postEvent, I have replaced "cyclesInFuture - 1"
> with just "cyclesInFuture". Otherwise, events are not woken up correctly. For
> example, an instruction that should be woken up in 1 cycle is instead woken
> 8 cycles later with the original code. Curiously, this is exactly the value
> of EVENT_QUEUE_SIZE in scheduler.h.
>
> With these changes I get an IPC of around 0.88 for FFT on 1 processor.
>
> Has anybody experimented with Opal and SPLASH-2? Is this IPC usual?
> Could anybody point me to some other benchmarks to obtain a higher IPC?
> Maybe I am doing something wrong. Could anybody send me a correct
> configuration file for Opal and Ruby?
>
> Thanks
>

Hi,
I have done several simulations with the SPLASH-2 suite, specifically Ocean
and Barnes, and I obtained the same result.
The Opal statistics show that the IPC is very low and the miss rate is high.
If I look at the Ruby statistics, I observe better performance,
because Ruby counts half cycles (OPAL_RUBY_MULTIPLIER is 2).
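
To make the cycle-count mismatch concrete, here is a back-of-the-envelope
check. It assumes my reading that one Ruby cycle corresponds to
OPAL_RUBY_MULTIPLIER Opal cycles; the instruction and cycle counts are
made-up numbers, only for illustration:

  // Hypothetical numbers; only the OPAL_RUBY_MULTIPLIER relation is
  // taken from the configuration, and even that is my reading of it.
  #include <cstdio>

  int main() {
      const double multiplier   = 2.0;     // OPAL_RUBY_MULTIPLIER
      const double instructions = 1.0e8;   // made-up instruction count
      const double opal_cycles  = 1.14e8;  // made-up, chosen to give IPC ~0.88

      // IPC as Opal would report it.
      printf("IPC (Opal cycles): %.2f\n", instructions / opal_cycles);

      // Ruby counts half as many cycles for the same run, so any
      // per-cycle metric computed from Ruby's counters looks better.
      double ruby_cycles = opal_cycles / multiplier;
      printf("IPC (Ruby cycles): %.2f\n", instructions / ruby_cycles);
      return 0;
  }

So the two sets of statistics can describe the same execution and still
disagree by a factor of the multiplier.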

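On the postEvent change quoted above, a minimal sketch of why a 1-cycle
wakeup slips by exactly EVENT_QUEUE_SIZE cycles, assuming the scheduler
is a circular array of slots indexed modulo EVENT_QUEUE_SIZE (a
simplified illustration, not Opal's actual pseq.C code):

  // Simplified ring-buffer scheduler; names and structure are
  // illustrative, not taken from Opal.
  #include <cstdio>

  const int EVENT_QUEUE_SIZE = 8;  // value in scheduler.h
  int current_slot = 0;            // slot being drained this cycle

  // With "cyclesInFuture - 1", a 1-cycle event lands in the slot that
  // is being drained right now; it is only seen again once the pointer
  // wraps all the way around, i.e. EVENT_QUEUE_SIZE cycles later.
  int buggy_slot(int cyclesInFuture) {
      return (current_slot + cyclesInFuture - 1) % EVENT_QUEUE_SIZE;
  }

  // With plain "cyclesInFuture", the event lands in the next slot.
  int fixed_slot(int cyclesInFuture) {
      return (current_slot + cyclesInFuture) % EVENT_QUEUE_SIZE;
  }

  int main() {
      printf("buggy: slot %d (current, already drained)\n", buggy_slot(1));
      printf("fixed: slot %d (drained next cycle)\n", fixed_slot(1));
      return 0;
  }

That would explain why the delay Daniel observed matches EVENT_QUEUE_SIZE
exactly.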

Francesco
