Message-ID: <55DC8FFD.2020306@linux.vnet.ibm.com>
Date: Tue, 25 Aug 2015 21:25:41 +0530
From: Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>
To: Eric Dumazet <eric.dumazet@...il.com>
CC: davem@...emloft.net, kuznet@....inr.ac.ru, jmorris@...ei.org,
yoshfuji@...ux-ipv6.org, kaber@...sh.net, jiri@...nulli.us,
edumazet@...gle.com, hannes@...essinduktion.org,
tom@...bertland.com, azhou@...ira.com, ebiederm@...ssion.com,
ipm@...rality.org.uk, nicolas.dichtel@...nd.com,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
anton@....ibm.com, nacc@...ux.vnet.ibm.com,
srikar@...ux.vnet.ibm.com
Subject: Re: [PATCH RFC 0/2] Optimize the snmp stat aggregation for large
cpus
On 08/25/2015 08:03 PM, Eric Dumazet wrote:
> On Tue, 2015-08-25 at 13:24 +0530, Raghavendra K T wrote:
>> While creating 1000 containers, perf shows a lot of time spent in
>> snmp_fold_field on a large cpu system.
>>
>> The current patch tries to improve this by reordering the statistics gathering.
>>
>> Please note that a similar overhead was also reported while creating
>> veth pairs https://lkml.org/lkml/2013/3/19/556
>>
>> Setup:
>> 160 cpu (20 core) baremetal powerpc system with 1TB memory
>
> I wonder if this kind of result demonstrates cache coloring
> problems on this host. It looks like all the per-cpu data are colliding on
> the same cache lines.
>
It could be. My testing on a 128-cpu system with less memory did not
incur a huge time penalty for 1000 containers, but snmp_fold_field had
the problem in general: in the same experiment, the snmp_fold_field
overhead was around 15% and dropped to about 5% after the patch.
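
For illustration, here is a minimal user-space sketch of the reordering
idea (NCPUS, NFIELDS, the array layout and the function names are
assumptions for the example, not the actual kernel code): the baseline
walks every CPU once per statistic, while the reordered version walks
the CPUs once and accumulates all statistics for that CPU in a single
pass, so each CPU's per-cpu data is touched in one burst.

/*
 * User-space analogue of the snmp stat aggregation (illustrative only;
 * NCPUS, NFIELDS and the data layout are assumptions, not kernel code).
 */
#include <stdio.h>

#define NCPUS   160
#define NFIELDS 64      /* e.g. number of MIB counters per protocol */

static unsigned long percpu_mib[NCPUS][NFIELDS];

/* Baseline: one pass over all CPUs for *each* field, so every CPU's
 * data is revisited NFIELDS times. */
static void fold_per_field(unsigned long *out)
{
	for (int f = 0; f < NFIELDS; f++) {
		unsigned long sum = 0;
		for (int cpu = 0; cpu < NCPUS; cpu++)
			sum += percpu_mib[cpu][f];
		out[f] = sum;
	}
}

/* Reordered: one pass over the CPUs, accumulating every field for that
 * CPU before moving on to the next one. */
static void fold_per_cpu(unsigned long *out)
{
	for (int f = 0; f < NFIELDS; f++)
		out[f] = 0;
	for (int cpu = 0; cpu < NCPUS; cpu++)
		for (int f = 0; f < NFIELDS; f++)
			out[f] += percpu_mib[cpu][f];
}

int main(void)
{
	unsigned long a[NFIELDS], b[NFIELDS];

	percpu_mib[3][7] = 42;	/* fake sample value */
	fold_per_field(a);
	fold_per_cpu(b);
	printf("field 7: %lu %lu\n", a[7], b[7]);
	return 0;
}

Both folds produce identical sums; only the traversal order over the
per-cpu data differs.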