Message-ID: <1440513231.8932.14.camel@edumazet-glaptop2.roam.corp.google.com>
Date: Tue, 25 Aug 2015 07:33:51 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>
Cc: davem@...emloft.net, kuznet@....inr.ac.ru, jmorris@...ei.org,
yoshfuji@...ux-ipv6.org, kaber@...sh.net, jiri@...nulli.us,
edumazet@...gle.com, hannes@...essinduktion.org,
tom@...bertland.com, azhou@...ira.com, ebiederm@...ssion.com,
ipm@...rality.org.uk, nicolas.dichtel@...nd.com,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
anton@....ibm.com, nacc@...ux.vnet.ibm.com,
srikar@...ux.vnet.ibm.com
Subject: Re: [PATCH RFC 0/2] Optimize the snmp stat aggregation for large cpus

On Tue, 2015-08-25 at 13:24 +0530, Raghavendra K T wrote:
> While creating 1000 containers, perf shows that a lot of time is spent in
> snmp_fold_field() on a large-cpu system.
>
> The current patch tries to improve this by reordering the statistics gathering.
>
> Please note that similar overhead was also reported while creating
> veth pairs https://lkml.org/lkml/2013/3/19/556
>
> Setup:
> 160 cpu (20 core) baremetal powerpc system with 1TB memory
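
For readers not familiar with the hot path: snmp_fold_field() sums one
per-cpu counter across every CPU, and the stats dump loops over all MIB
fields, so each CPU's per-cpu block is re-walked once per field.  The RFC
reorders this so each CPU's block is streamed through only once.  Below is
a minimal userspace sketch of the two loop orders; NR_CPUS, NR_FIELDS and
the flat array layout are illustrative assumptions, not the kernel's
actual data structures.

/*
 * Userspace sketch (not kernel code) of the two loop orders for folding
 * per-cpu SNMP counters.
 */
#include <stdio.h>

#define NR_CPUS   160
#define NR_FIELDS 100          /* stand-in for the number of MIB fields */

/* stats[cpu][field]: each row stands in for one CPU's per-cpu MIB block */
static unsigned long stats[NR_CPUS][NR_FIELDS];

/* Field-major order: what calling snmp_fold_field() once per field does.
 * Every field walks all CPUs, so each CPU's block is fetched NR_FIELDS times. */
static void fold_field_major(unsigned long *out)
{
	for (int f = 0; f < NR_FIELDS; f++) {
		out[f] = 0;
		for (int cpu = 0; cpu < NR_CPUS; cpu++)
			out[f] += stats[cpu][f];
	}
}

/* CPU-major order: the reordering the RFC aims for.  Each CPU's block is
 * streamed through once and all fields accumulate into a local buffer. */
static void fold_cpu_major(unsigned long *out)
{
	for (int f = 0; f < NR_FIELDS; f++)
		out[f] = 0;
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		for (int f = 0; f < NR_FIELDS; f++)
			out[f] += stats[cpu][f];
}

int main(void)
{
	unsigned long a[NR_FIELDS], b[NR_FIELDS];

	stats[0][0] = 1;            /* token data so the output is non-trivial */
	fold_field_major(a);
	fold_cpu_major(b);
	printf("field 0: %lu %lu\n", a[0], b[0]);
	return 0;
}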
I wonder if this kind of result points to cache coloring problems on this
host. It looks like all the per-cpu data are colliding on the same cache
lines.
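
For what it's worth, here is a toy illustration of that coloring effect:
when every CPU's copy of a counter block sits at the same offset inside a
power-of-two-strided per-cpu area, the addresses can all map to the same
cache sets.  The cache geometry and the stride below are made-up numbers
for illustration, not measurements from this machine.

/*
 * Sketch of the cache-set ("coloring") concern.  LINE_SIZE, NUM_SETS and
 * PCPU_STRIDE are assumed values, not the actual POWER configuration.
 */
#include <stdio.h>
#include <stdint.h>

#define LINE_SIZE   128UL
#define NUM_SETS    512UL
#define PCPU_STRIDE (1UL << 20)   /* hypothetical per-cpu area stride */

/* Which cache set an address falls into for this assumed geometry. */
static unsigned long cache_set(uintptr_t addr)
{
	return (addr / LINE_SIZE) % NUM_SETS;
}

int main(void)
{
	uintptr_t base = 0x10000000UL;   /* hypothetical base of the counter block */

	/* Same offset in each per-cpu area: because the stride is a multiple
	 * of LINE_SIZE * NUM_SETS, every CPU's copy lands in the same set. */
	for (int cpu = 0; cpu < 8; cpu++)
		printf("cpu %d -> set %lu\n", cpu,
		       cache_set(base + cpu * PCPU_STRIDE));
	return 0;
}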