Message-ID: <4fcefc0e43a5448292b7b40f32c950de@BLUPR03MB373.namprd03.prod.outlook.com>
Date: Mon, 23 Dec 2013 01:08:05 +0000
From: "fugang.duan@...escale.com" <fugang.duan@...escale.com>
To: Hector Palacios <hector.palacios@...i.com>,
Marek Vasut <marex@...x.de>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
CC: "Fabio.Estevam@...escale.com" <Fabio.Estevam@...escale.com>,
"shawn.guo@...aro.org" <shawn.guo@...aro.org>,
"l.stach@...gutronix.de" <l.stach@...gutronix.de>,
"Frank.Li@...escale.com" <Frank.Li@...escale.com>,
"bhutchings@...arflare.com" <bhutchings@...arflare.com>,
"davem@...emloft.net" <davem@...emloft.net>
Subject: RE: FEC performance degradation with certain packet sizes
>From: Hector Palacios <hector.palacios@...i.com>
>Date: Friday, December 20, 2013 11:02 PM
>To: Duan Fugang-B38611; Marek Vasut; netdev@...r.kernel.org
>Cc: Estevam Fabio-R49496; shawn.guo@...aro.org; l.stach@...gutronix.de; Li
>Frank-B20596; bhutchings@...arflare.com; davem@...emloft.net
>Subject: Re: FEC performance degradation with certain packet sizes
>
>Dear Andy,
>
>On 12/20/2013 04:35 AM, fugang.duan@...escale.com wrote:
>> [...]
>>
>> I can reproduce the issue on the imx6q/dl platforms with the Freescale
>> internal kernel tree.
>>
>> This issue must be related to cpufreq. When the scaling_governor is set
>> to performance:
>> echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
>>
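On a multi-core part such as the imx6q, the governor has to be set on every
core, not just cpu0. A minimal shell sketch, assuming the standard cpufreq
sysfs layout (the loop is illustrative, not taken from the thread):

  #!/bin/sh
  # Pin every CPU to the performance governor so frequency scaling
  # cannot distort the NPtcp throughput numbers.
  for gov in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
          echo performance > "$gov"
  done
  # Verify the setting took effect on all CPUs.
  cat /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor
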
>> Then run the NPtcp test; the results are as below (an invocation sketch
>> follows the results):
>>
>> 24: 99 bytes 5 times --> 9.89 Mbps in 76.40 usec
>> 25: 125 bytes 5 times --> 12.10 Mbps in 78.80 usec
>> 26: 128 bytes 5 times --> 12.27 Mbps in 79.60 usec
>> 27: 131 bytes 5 times --> 12.80 Mbps in 78.10 usec
>> 28: 189 bytes 5 times --> 18.00 Mbps in 80.10 usec
>> 29: 192 bytes 5 times --> 18.31 Mbps in 80.00 usec
>> 30: 195 bytes 5 times --> 18.41 Mbps in 80.80 usec
>> 31: 253 bytes 5 times --> 23.34 Mbps in 82.70 usec
>> 32: 256 bytes 5 times --> 23.91 Mbps in 81.70 usec
>> 33: 259 bytes 5 times --> 24.19 Mbps in 81.70 usec
>> 34: 381 bytes 5 times --> 33.18 Mbps in 87.60 usec
>> 35: 384 bytes 5 times --> 33.87 Mbps in 86.50 usec
>> 36: 387 bytes 5 times --> 34.41 Mbps in 85.80 usec
>> 37: 509 bytes 5 times --> 42.72 Mbps in 90.90 usec
>> 38: 512 bytes 5 times --> 42.60 Mbps in 91.70 usec
>> 39: 515 bytes 5 times --> 42.80 Mbps in 91.80 usec
>> 40: 765 bytes 5 times --> 56.45 Mbps in 103.40 usec
>> 41: 768 bytes 5 times --> 57.11 Mbps in 102.60 usec
>> 42: 771 bytes 5 times --> 57.22 Mbps in 102.80 usec
>> 43: 1021 bytes 5 times --> 70.69 Mbps in 110.20 usec
>> 44: 1024 bytes 5 times --> 70.70 Mbps in 110.50 usec
>> 45: 1027 bytes 5 times --> 69.59 Mbps in 112.60 usec
>> 46: 1533 bytes 5 times --> 73.56 Mbps in 159.00 usec
>> 47: 1536 bytes 5 times --> 72.92 Mbps in 160.70 usec
>> 48: 1539 bytes 5 times --> 73.80 Mbps in 159.10 usec
>> 49: 2045 bytes 5 times --> 93.59 Mbps in 166.70 usec
>> 50: 2048 bytes 5 times --> 94.07 Mbps in 166.10 usec
>> 51: 2051 bytes 5 times --> 92.92 Mbps in 168.40 usec
>> 52: 3069 bytes 5 times --> 123.43 Mbps in 189.70 usec
>> 53: 3072 bytes 5 times --> 123.68 Mbps in 189.50 usec
>
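To reproduce numbers like these, NetPIPE's NPtcp is run as a pair of
processes, one per board. A sketch, assuming default options; the peer
address is a placeholder, not one used in the thread:

  # On the receiving board:
  NPtcp
  # On the transmitting board, pointing at the receiver:
  NPtcp -h 192.168.0.2

The n-3/n/n+3 block-size triplets in the output above (e.g. 1021/1024/1027)
come from NetPIPE's default 3-byte perturbation around each base size.
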
>You are right. Unfortunately, this does not work on the i.MX28 (at least
>for me). Couldn't it be that cpufreq is masking the problem on the i.MX6?
>
>Best regards,
>--
>Hector Palacios
>
I will test it on the i.MX28 platform and then analyze the results.
Thanks,
Andy