Hi Alberto,
Thank you for the reply!
Here are the results for L1D misses:
1 cycle : 190259.5
8 cycles : 218379.6
16 cycles : 221789.5
32 cycles : 227906.2
64 cycles : 261535.3
128 cycles: 141103.1
And below you can see the number of instructions for each case:
1 cycle : 192366546.8
8 cycles : 231765338.5
16 cycles : 288945812.1
32 cycles : 403344963
64 cycles : 661415471
128 cycles: 1020236240
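Combining the two lists above, the per-instruction miss rate can be computed directly. Here is a quick Python sketch (the dictionaries simply restate the numbers above; expressing the result as misses per kilo-instruction, MPKI, is our own choice of metric):

```python
# Misses per kilo-instruction (MPKI) derived from the L1D miss counts
# and instruction counts listed above (averages over 10 seeds).
misses = {1: 190259.5, 8: 218379.6, 16: 221789.5,
          32: 227906.2, 64: 261535.3, 128: 141103.1}
instructions = {1: 192366546.8, 8: 231765338.5, 16: 288945812.1,
                32: 403344963, 64: 661415471, 128: 1020236240}

# MPKI = misses / (instructions / 1000)
mpki = {lat: m / instructions[lat] * 1000 for lat, m in misses.items()}
for lat in sorted(mpki):
    print(f"{lat:>3} cycles: {mpki[lat]:.3f} MPKI")
```

The rate falls monotonically, from roughly 0.99 MPKI at 1 cycle to roughly 0.14 MPKI at 128 cycles, so the miss rate per instruction does drop as the latency grows.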
Do you think this is normal behavior?
Thanks,
Jesús.
> Hi Jesus,
>
> Regarding the drop in the number of messages when going to latencies of
> 128 cycles, my opinion is that the number of cache misses may be
> reduced, particularly the number of coherence misses. With a longer
> latency, caches could hold contended blocks for more time, thus
> reducing the miss rate. You could check the L1 miss rate and, in
> particular, the coherence misses. However, since latencies are longer,
> execution time is worse.
>
> Alberto.
>
> Jesus Camacho Villanueva wrote:
>> Hi everybody!
>>
>> We are simulating several applications in a sarek machine and using the
>> MESI_CMP_directory protocol.
>>
>> At first, we simulated barnes and raytrace applications with different
>> topologies, and we observed that the average (over 10 seeds) number of
>> messages injected into the network differed by more than 50% in some
>> cases, depending on the topology used!
>>
>> Recently, we have run an FFT application with a basic network model that
>> introduces a constant message delay (that means we simulate the same
>> latency for each message going from any node to any other node) and we
>> have obtained the following results:
>>
>> Constant latency (cycles) - execution time (cycles) - number of messages - page_reclaims
>> 1   - 3361858.8  - 830518    - 1018051.1
>> 8   - 4346059.2  - 945225.1  - 1132667.2
>> 16  - 5685143.7  - 958207.5  - 1145672.2
>> 32  - 8410379.9  - 977663.1  - 1165158.3
>> 64  - 14785582.6 - 1109518.3 - 1297078.7
>> 128 - 20265856.9 - 594415.7  - 781911.1
>>
>> 10 different seeds have been used and the average results plotted. As
>> you can see, the execution time always goes up, which is expected (the
>> network delay increases). We figure the slight increase in the number
>> of messages and pages accessed is normal, since the application takes
>> more time. However, the severe drop when going from 64 cycles to 128
>> cycles was totally unexpected, and we couldn't find any reasonable
>> explanation, so we suspect something is wrong. The funny thing is that
>> the application finishes with half the pages reclaimed.
>>
>> Does anyone know what could be happening?
>>
>> Any help would be really appreciated, since we are stuck on this
>> issue!
>>
>> Thanks a lot,
>>
>> Jesús.
>> _______________________________________________
>> Gems-users mailing list
>> Gems-users@xxxxxxxxxxx
>> https://lists.cs.wisc.edu/mailman/listinfo/gems-users
>> Use Google to search the GEMS Users mailing list by adding
>> "site:https://lists.cs.wisc.edu/archive/gems-users/" to your search.
>>
>>
>>
>
> --
> /*-----------------------------------------------------------------*/
> /* Alberto Ros Bardisa */
> /* Departamento de Ingeniería y Tecnología de Computadores */
> /* Facultad de Informática. Universidad de Murcia */
> /* Campus de Espinardo - 30100 Murcia (SPAIN) */
> /* Tel.: +34 868 888518 Fax: +34 868 884151 */
> /* email: a.ros@xxxxxxxxxxx */
> /* Web Page: http://skywalker.inf.um.es/~aros */
> /*-----------------------------------------------------------------*/
>