Message-ID: <1501525853.1876.22.camel@edumazet-glaptop3.roam.corp.google.com>
Date: Mon, 31 Jul 2017 11:30:53 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: Shaohua Li <shli@...nel.org>
Cc: netdev@...r.kernel.org, davem@...emloft.net, Kernel-team@...com,
Shaohua Li <shli@...com>, Wei Wang <weiwan@...gle.com>
Subject: Re: [RFC net-next] net ipv6: convert fib6_table rwlock to a percpu
lock
On Mon, 2017-07-31 at 10:18 -0700, Shaohua Li wrote:
> From: Shaohua Li <shli@...com>
>
> In a syn flooding test, the fib6_table rwlock is a significant
> bottleneck. While converting the rwlock to RCU sounds straightforward,
> it is very challenging, if it is possible at all. A percpu spinlock is
> quite trivial for this problem, since updating the routing table is a
> rare event. In my test on a dual-socket, 56-CPU system, the server
> receives around 1.5 Mpps in the syn flooding test without the patch.
> With the patch, the server receives around 3.8 Mpps, and perf report no
> longer shows the locking issue.
>
> +static inline void fib6_table_write_lock_bh(struct fib6_table *table)
> +{
> +	int i;
> +
> +	spin_lock_bh(per_cpu_ptr(table->percpu_tb6_lock, 0));
> +	for_each_possible_cpu(i) {
> +		if (i == 0)
> +			continue;
> +		spin_lock_nest_lock(per_cpu_ptr(table->percpu_tb6_lock, i),
> +				    per_cpu_ptr(table->percpu_tb6_lock, 0));
> +	}
> +}
Your code assumes that cpu 0 is valid.
I would rather not hard code this knowledge.
Also, it is not clear why you need the nested locking.
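
A minimal, untested sketch of what avoiding the hard-coded CPU 0 could
look like, assuming the percpu_tb6_lock member proposed in the patch and
the usual linux/cpumask.h, linux/percpu.h and linux/spinlock.h headers:
the outer lock is picked with cpumask_first(cpu_possible_mask) instead
of assuming CPU 0 is possible.

static inline void fib6_table_write_lock_bh(struct fib6_table *table)
{
	int first = cpumask_first(cpu_possible_mask);
	spinlock_t *first_lock = per_cpu_ptr(table->percpu_tb6_lock, first);
	int i;

	/* Take the first possible CPU's lock as the outer lock. */
	spin_lock_bh(first_lock);
	for_each_possible_cpu(i) {
		if (i == first)
			continue;
		/*
		 * Annotate the remaining per-cpu locks as nested under
		 * first_lock, so lockdep does not report the loop as
		 * recursive locking of a single lock class.
		 */
		spin_lock_nest_lock(per_cpu_ptr(table->percpu_tb6_lock, i),
				    first_lock);
	}
}

If the nest_lock annotation is indeed only there to quiet lockdep about
taking many locks of the same class, spelling that out in the changelog
would help.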