Date:	Thu, 19 May 2016 09:08:05 -0700
From:	Alexei Starovoitov <alexei.starovoitov@...il.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	netdev <netdev@...r.kernel.org>,
	Jamal Hadi Salim <jhs@...atatu.com>,
	John Fastabend <john.fastabend@...il.com>,
	Kevin Athey <kda@...gle.com>,
	Xiaotian Pei <xiaotian@...gle.com>
Subject: Re: [RFC net-next] net: sched: do not acquire qdisc spinlock in
 qdisc/class stats dump

On Thu, May 19, 2016 at 05:35:20AM -0700, Eric Dumazet wrote:
> From: Eric Dumazet <edumazet@...gle.com>
> 
> Large tc dumps (tc -s {qdisc|class} sh dev ethX) done by the Google BwE
> host agent [1] are problematic at scale:
>     
> For each qdisc/class found in the dump, we currently lock the root qdisc
> spinlock in order to get stats. Sampling stats every 5 seconds from
> thousands of HTB classes is a challenge when the root qdisc spinlock is
> under high pressure.
> 
> These stats use u64 or u32 fields, so readers can fetch them without
> blocking concurrent writer updates, provided the kernel arch is a
> 64-bit one.
> 
> Being able to atomically fetch all counters (packets, bytes sent, ...)
> at the expense of interfering with the fast path (packet enqueue and
> dequeue) is simply not worth the pain, as the values are generally
> stale after 1 usec.
> 
> These lock acquisitions slow down the fast path by 10 to 20%.
> 
> An audit of existing qdiscs showed that sch_fq_codel is the only qdisc
> that might need the qdisc lock in fq_codel_dump_stats() and
> fq_codel_dump_class_stats().
> 
> A gnet_dump_force_lock() call is added there and could be added to
> other qdisc stat handlers if needed.
> 
> [1]
> http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43838.pdf
> 
> Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> Cc: Jamal Hadi Salim <jhs@...atatu.com>
> Cc: John Fastabend <john.fastabend@...il.com>
> Cc: Kevin Athey <kda@...gle.com>
> Cc: Xiaotian Pei <xiaotian@...gle.com>

Good optimization.
Acked-by: Alexei Starovoitov <ast@...nel.org>
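
For illustration only (names here are made up, this is not code from the
patch): the claim about u64/u32 fields hinges on 64-bit loads being
tear-free, roughly like this sketch:

	#include <linux/compiler.h>	/* READ_ONCE() */
	#include <linux/types.h>	/* u32, u64 */

	/* Hypothetical layout, mirroring the bytes/packets pair discussed
	 * above; an illustration, not code from the patch.
	 */
	struct example_stats {
		u64	bytes;
		u32	packets;
	};

	/* On a 64-bit arch an aligned u64 load is a single machine word,
	 * so a reader can at worst see a slightly stale value, never a
	 * torn one, and the root qdisc lock is not needed just to sample
	 * the counter.  On a 32-bit arch the same load takes two words
	 * and can race with a writer mid-update, hence the #else branch
	 * of the helper below still takes the lock.
	 */
	static inline u64 example_read_bytes(const struct example_stats *s)
	{
		return READ_ONCE(s->bytes);
	}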

> +static inline spinlock_t *qdisc_stats_lock(const struct Qdisc *qdisc)
> +{
> +	ASSERT_RTNL();
> +#if defined(CONFIG_64BIT) && !defined(CONFIG_LOCKDEP)
> +	/* With u32/u64 bytes counter, there is no real issue on 64bit arches */
> +	return NULL;
> +#else
> +	return qdisc_lock(qdisc_root_sleeping(qdisc));

Initially I thought that the above line could have been
qdisc_root_sleeping_lock(), but then realized that moving ASSERT_RTNL()
out makes more sense. Good call.
The only thing not clear to me is why '!defined(CONFIG_LOCKDEP)'?
Just extra caution? I think it should be fine without it.
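
The hunks that consume the NULL return are not quoted above; presumably
the dump path ends up guarding its lock calls on it, something along
these lines (usage assumed, not taken from the patch):

	/* Sketch of a caller; struct Qdisc and qdisc_stats_lock() come
	 * from the quoted patch, the rest (name, locking style) is an
	 * assumption.
	 */
	static void example_dump_stats(struct Qdisc *sch)
	{
		spinlock_t *lock = qdisc_stats_lock(sch);

		if (lock)		/* 32-bit (or lockdep) build */
			spin_lock_bh(lock);

		/* ... copy bytes/packets/backlog counters out of the qdisc ... */

		if (lock)
			spin_unlock_bh(lock);
	}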
