Message-Id: <20160607.163731.1914725336655728632.davem@davemloft.net>
Date: Tue, 07 Jun 2016 16:37:31 -0700 (PDT)
From: David Miller <davem@...emloft.net>
To: edumazet@...gle.com
Cc: netdev@...r.kernel.org, xiyou.wangcong@...il.com, jhs@...atatu.com,
eric.dumazet@...il.com
Subject: Re: [PATCH net-next 0/2] net: sched: faster stats gathering
From: Eric Dumazet <edumazet@...gle.com>
Date: Mon, 6 Jun 2016 09:37:14 -0700
> A while back, I sent an RFC patch implementing lockless stats
> gathering on 64-bit arches.
>
> This patch series does it more cleanly, using a seqcount.
>
> Since qdisc/class stats are written at dequeue() time,
> we can ask the dequeue to change the seqcount, so that
> stats readers can avoid taking the root qdisc lock,
> and instead use the typical read_seqcount_{begin|retry}
> guarded loop.
>
> This does not change fast path costs, as the seqcount
> increments are no more expensive than the existing bit
> manipulation, and it allows readers to avoid freezing
> the fast path.
I like this, looks great!
Applied.
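
For reference, a minimal sketch of the seqcount pattern described in the
cover letter above. This is not the actual net/sched code: the struct and
function names (my_qdisc_stats, my_stats_update, my_stats_read) are
hypothetical, and only the seqcount_t type and the write_seqcount_*() /
read_seqcount_*() primitives are real kernel APIs.

#include <linux/types.h>
#include <linux/seqlock.h>

/* Hypothetical stats block, for illustration only */
struct my_qdisc_stats {
	seqcount_t	seq;	/* bumped around each stats update */
	u64		bytes;
	u64		packets;
};

static void my_stats_init(struct my_qdisc_stats *s)
{
	s->bytes = 0;
	s->packets = 0;
	seqcount_init(&s->seq);
}

/*
 * Writer side: called from dequeue(), which already runs serialized
 * (single writer), so no extra locking is needed here; only the two
 * seqcount increments are added to the fast path.
 */
static void my_stats_update(struct my_qdisc_stats *s, unsigned int len)
{
	write_seqcount_begin(&s->seq);
	s->bytes += len;
	s->packets++;
	write_seqcount_end(&s->seq);
}

/*
 * Reader side: no qdisc lock taken; retry the snapshot if a writer
 * was active while the counters were being copied.
 */
static void my_stats_read(const struct my_qdisc_stats *s,
			  u64 *bytes, u64 *packets)
{
	unsigned int start;

	do {
		start = read_seqcount_begin(&s->seq);
		*bytes = s->bytes;
		*packets = s->packets;
	} while (read_seqcount_retry(&s->seq, start));
}

The point of the pattern is that the writer piggybacks on the
serialization the dequeue path already has, while readers only loop for
the duration of a concurrent update instead of blocking the fast path
under the root qdisc lock.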