Open Source and information security mailing list archives
 
Date:   Mon, 31 Jul 2017 12:34:07 -0700
From:   Shaohua Li <shli@...nel.org>
To:     Eric Dumazet <eric.dumazet@...il.com>
Cc:     netdev@...r.kernel.org, davem@...emloft.net, Kernel-team@...com,
        Shaohua Li <shli@...com>, Wei Wang <weiwan@...gle.com>
Subject: Re: [RFC net-next] net ipv6: convert fib6_table rwlock to a percpu
 lock

On Mon, Jul 31, 2017 at 11:30:53AM -0700, Eric Dumazet wrote:
> On Mon, 2017-07-31 at 10:18 -0700, Shaohua Li wrote:
> > From: Shaohua Li <shli@...com>
> > 
> > In a syn flooding test, the fib6_table rwlock is a significant
> > bottleneck. Converting the rwlock to RCU sounds straightforward
> > but is very challenging, if it is possible at all. A percpu spinlock
> > is quite trivial for this problem since updating the routing table
> > is a rare event. In my test, the server receives around 1.5 Mpps in
> > a syn flooding test without the patch on a dual-socket, 56-CPU
> > system. With the patch, the server receives around 3.8 Mpps, and
> > perf report doesn't show the locking issue.
> > 
> 
> > +static inline void fib6_table_write_lock_bh(struct fib6_table *table)
> > +{
> > +	int i;
> > +
> > +	spin_lock_bh(per_cpu_ptr(table->percpu_tb6_lock, 0));
> > +	for_each_possible_cpu(i) {
> > +		if (i == 0)
> > +			continue;
> > +		spin_lock_nest_lock(per_cpu_ptr(table->percpu_tb6_lock, i),
> > +			per_cpu_ptr(table->percpu_tb6_lock, 0));
> > +	}
> > +}
> 
> Your code assumes that cpu 0 is valid. 

Right, that assumption doesn't always hold, especially for the possible cpu map :)
> I would rather not hard code this knowledge.

Will fix it in the next post.

> Also this is not clear why you need the nested thing.

This is to avoid a lockdep warning. The locks share the same lockdep key; if
we don't use the nest-lock annotation, lockdep assumes they are the same lock
and complains about recursive locking.

Thanks,
Shaohua
