Message-ID: <1489683534.28631.231.camel@edumazet-glaptop3.roam.corp.google.com>
Date: Thu, 16 Mar 2017 09:58:54 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: Elena Reshetova <elena.reshetova@...el.com>
Cc: netdev@...r.kernel.org, bridge@...ts.linux-foundation.org,
linux-kernel@...r.kernel.org, kuznet@....inr.ac.ru,
jmorris@...ei.org, kaber@...sh.net, stephen@...workplumber.org,
peterz@...radead.org, keescook@...omium.org,
Hans Liljestrand <ishkamiel@...il.com>,
David Windsor <dwindsor@...il.com>
Subject: Re: [PATCH 07/17] net: convert sock.sk_refcnt from atomic_t to
refcount_t
On Thu, 2017-03-16 at 17:28 +0200, Elena Reshetova wrote:
> refcount_t type and corresponding API should be
> used instead of atomic_t when the variable is used as
> a reference counter. This allows us to avoid accidental
> refcounter overflows that might lead to use-after-free
> situations.
...
> static __always_inline void sock_hold(struct sock *sk)
> {
> - atomic_inc(&sk->sk_refcnt);
> + refcount_inc(&sk->sk_refcnt);
> }
>
While I certainly see the value of refcount_t, this gives a very
different behavior from these atomic_inc(), which were a single
inlined LOCK RMW on x86.
We now call an external function performing an atomic_read(), various
ops/tests, then atomic_cmpxchg_relaxed() in a loop, losing the nice
ability x86 has of preventing livelocks.
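To make the difference concrete, here is a rough userspace model of the
two paths (C11 atomics; plain_inc()/checked_inc() are my own
illustrative names, not the kernel implementation):

/*
 * Userspace model of the two code paths being compared, using C11
 * <stdatomic.h>.  This is NOT the kernel code; plain_inc() and
 * checked_inc() only sketch the shape of the difference.
 */
#include <limits.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* atomic_inc() style: a single locked RMW instruction on x86, no branches. */
static inline void plain_inc(atomic_uint *v)
{
	atomic_fetch_add_explicit(v, 1, memory_order_relaxed);
}

/*
 * refcount_inc() style: read the counter, test it, then try to install
 * the new value with a cmpxchg, retrying in a loop if another CPU won
 * the race.  Saturates at UINT_MAX instead of wrapping to zero.
 */
static inline bool checked_inc(atomic_uint *v)
{
	unsigned int val = atomic_load_explicit(v, memory_order_relaxed);

	for (;;) {
		if (val == 0)		/* increment on zero: likely use-after-free */
			return false;
		if (val == UINT_MAX)	/* already saturated: stay saturated */
			return true;

		/* On failure, 'val' is refreshed with the current value. */
		if (atomic_compare_exchange_weak_explicit(v, &val, val + 1,
							  memory_order_relaxed,
							  memory_order_relaxed))
			return true;
	}
}

int main(void)
{
	atomic_uint ref = 1;

	plain_inc(&ref);		/* unconditional, single RMW */
	if (!checked_inc(&ref))		/* read + tests + cmpxchg loop */
		fprintf(stderr, "increment on zero refcount\n");

	printf("refcount = %u\n", atomic_load(&ref));
	return 0;
}

The second variant trades one unconditional instruction for a load, two
tests and a cmpxchg that may have to retry under contention.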
This looks like a lot of bloat, just to be able to chase hypothetical
bugs in the kernel.
I would love to have a way to enable this extra debugging only when I
want a debug kernel, like LOCKDEP or KASAN.
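Something along these lines would keep the fast path intact for
production builds; REFCOUNT_DEBUG is an invented macro here, only to
sketch the idea:

/*
 * Hypothetical opt-in switch, in the spirit of LOCKDEP/KASAN: take the
 * checked path only when a debug option is selected.  REFCOUNT_DEBUG
 * is not a real config option, it is purely illustrative.
 */
#ifdef REFCOUNT_DEBUG
#define ref_inc(v)	(void)checked_inc(v)	/* checked cmpxchg loop, as above */
#else
#define ref_inc(v)	plain_inc(v)		/* single LOCK RMW, as before */
#endif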
By adding all this bloat, we assert that the Linux kernel is terminally
buggy, and that every atomic_inc() we ever did was suspicious and needs
to always be instrumented/validated.