Message-ID: <1501571703.1876.24.camel@edumazet-glaptop3.roam.corp.google.com>
Date: Tue, 01 Aug 2017 00:15:03 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: Shaohua Li <shli@...nel.org>
Cc: Stephen Hemminger <stephen@...workplumber.org>,
netdev@...r.kernel.org, davem@...emloft.net, Kernel-team@...com,
Shaohua Li <shli@...com>, Wei Wang <weiwan@...gle.com>
Subject: Re: [RFC net-next] net ipv6: convert fib6_table rwlock to a percpu
lock
On Mon, 2017-07-31 at 19:57 -0700, Shaohua Li wrote:
> On Mon, Jul 31, 2017 at 04:10:07PM -0700, Stephen Hemminger wrote:
> > On Mon, 31 Jul 2017 10:18:57 -0700
> > Shaohua Li <shli@...nel.org> wrote:
> >
> > > From: Shaohua Li <shli@...com>
> > >
> > > In a syn flooding test, the fib6_table rwlock is a significant
> > > bottleneck. Converting the rwlock to RCU sounds straightforward,
> > > but is very challenging, if it is possible at all. A percpu
> > > spinlock is quite trivial for this problem, since updating the
> > > routing table is a rare event. In my test, the server receives
> > > around 1.5 Mpps in a syn flooding test without the patch on a
> > > dual-socket, 56-CPU system. With the patch, the server receives
> > > around 3.8 Mpps, and perf report no longer shows the locking issue.
> > >
> > > Cc: Wei Wang <weiwan@...gle.com>
> >
> > You just reinvented brlock...
>
> You mean lglock? It has been removed from the kernel.
>
> > RCU is not that hard, why not do it right?
>
> Maybe. But I don't think that's a reason not to do the percpu lock now;
> it's a simple change, and if somebody finds a way to do the RCU
> conversion, we can easily remove it.
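For context, the percpu-lock (brlock-style) scheme being proposed looks
roughly like the sketch below. It is an illustrative userspace rendering
with made-up names, not the actual patch: each CPU owns a spinlock,
readers take only their local lock, and a writer must take all of them.

/*
 * Illustrative sketch of a brlock-style percpu lock, in userspace C so
 * it is self-contained.  NR_CPUS, struct percpu_rwlock and the function
 * names are invented for the example; this is not the fib6_table patch.
 */
#include <pthread.h>

#define NR_CPUS 56

struct percpu_rwlock {
	pthread_spinlock_t lock[NR_CPUS];
};

static void percpu_rwlock_init(struct percpu_rwlock *p)
{
	for (int i = 0; i < NR_CPUS; i++)
		pthread_spin_init(&p->lock[i], PTHREAD_PROCESS_PRIVATE);
}

/* Read side: only the local CPU's lock, no cross-CPU cacheline bouncing. */
static void percpu_read_lock(struct percpu_rwlock *p, int cpu)
{
	pthread_spin_lock(&p->lock[cpu]);
}

static void percpu_read_unlock(struct percpu_rwlock *p, int cpu)
{
	pthread_spin_unlock(&p->lock[cpu]);
}

/* Write side: take every CPU's lock, O(NR_CPUS) and blocks all readers. */
static void percpu_write_lock(struct percpu_rwlock *p)
{
	for (int i = 0; i < NR_CPUS; i++)
		pthread_spin_lock(&p->lock[i]);
}

static void percpu_write_unlock(struct percpu_rwlock *p)
{
	for (int i = NR_CPUS - 1; i >= 0; i--)
		pthread_spin_unlock(&p->lock[i]);
}

The write side has to walk every per-CPU lock, and the read side still
performs an atomic operation per lookup; that trade-off is what makes
the scheme workable at 56 CPUs and questionable at 256.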
Make sure to test this on a 256-CPU host that handles a lot of ICMP
messages.

Percpu locks do not scale. This hack was okay last decade, but it is no
longer a good one.

I would rather focus on the RCU work; Wei is actively working on it.
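To make the contrast concrete, here is a rough sketch of the RCU
read-mostly pattern the conversion would move toward. It is written
against userspace RCU (liburcu) so it compiles standalone; struct
route_entry, current_route and the function names are placeholders,
not the real fib6 code, and it assumes a single writer (or an
external writer-side lock).

/*
 * RCU read-mostly pattern, userspace sketch using liburcu.
 * Build with -lurcu; each thread must call rcu_register_thread()
 * before its first rcu_read_lock().
 */
#include <urcu.h>
#include <stdlib.h>

struct route_entry {
	int ifindex;
	/* ... */
};

static struct route_entry *current_route;	/* RCU-protected pointer */

/* Read side: no lock at all, cost does not grow with CPU count. */
static int lookup_ifindex(void)
{
	struct route_entry *r;
	int ifindex = -1;

	rcu_read_lock();
	r = rcu_dereference(current_route);
	if (r)
		ifindex = r->ifindex;
	rcu_read_unlock();
	return ifindex;
}

/* Write side: publish a new entry, wait for readers, free the old one. */
static void update_route(int ifindex)
{
	struct route_entry *new_entry = malloc(sizeof(*new_entry));
	struct route_entry *old = current_route;

	if (!new_entry)
		return;
	new_entry->ifindex = ifindex;
	rcu_assign_pointer(current_route, new_entry);
	synchronize_rcu();	/* all pre-existing readers are done */
	free(old);
}

The read side takes no lock, so lookups stay cheap as the CPU count
grows; the writer pays instead via synchronize_rcu(), which is fine
for a routing table that is rarely updated.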