Message-ID: <20201123121259.312dcb82@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
Date: Mon, 23 Nov 2020 12:12:59 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>,
Yunsheng Lin <linyunsheng@...wei.com>
Cc: mingo@...hat.com, will@...nel.org, viro@...iv.linux.org.uk,
kyk.segfault@...il.com, davem@...emloft.net, linmiaohe@...wei.com,
martin.varghese@...ia.com, pabeni@...hat.com, pshelar@....org,
fw@...len.de, gnault@...hat.com, steffen.klassert@...unet.com,
vladimir.oltean@....com, edumazet@...gle.com, saeed@...nel.org,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
linuxarm@...wei.com, Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH net-next v2 1/2] lockdep: Introduce in_softirq lockdep
assert
On Mon, 23 Nov 2020 15:27:25 +0100 Peter Zijlstra wrote:
> On Sat, Nov 21, 2020 at 11:06:15AM +0800, Yunsheng Lin wrote:
> > The current semantic for napi_consume_skb() is that the caller needs
> > to provide a non-zero budget when calling from NAPI context, and
> > breaking this semantic will cause hard-to-debug problems, because
> > _kfree_skb_defer() needs to run in atomic context in order to push
> > the skb onto the particular CPU's napi_alloc_cache atomically.
> >
> > So add lockdep_assert_in_softirq() to assert when the running
> > context is not in_softirq(); in_softirq() means softirq is being
> > served or BH is disabled. Because softirq context can be interrupted
> > by hard IRQ or NMI context, lockdep_assert_in_softirq() needs to
> > assert about hard IRQ and NMI context too.
> Due to in_softirq() having a deprecation notice (due to it being
> awfully ambiguous), could we have a nice big comment here that explains
> in detail, understandable to !network people (me), why this is actually
> correct?
>
> I'm not opposed to the thing, if that is what you need, it's fine, but
> please put in a comment that explains that in_softirq() is ambiguous and
> why you really do need it anyway.
One liner would be:
* Acceptable for protecting per-CPU resources accessed from BH
We can add:
* Much like in_softirq() - semantics are ambiguous, use carefully. *
IIUC we basically want to protect the nc array and counter here:
static inline void _kfree_skb_defer(struct sk_buff *skb)
{
	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);

	/* drop skb->head and call any destructors for packet */
	skb_release_all(skb);

	/* record skb to CPU local list */
	nc->skb_cache[nc->skb_count++] = skb;

#ifdef CONFIG_SLUB
	/* SLUB writes into objects when freeing */
	prefetchw(skb);
#endif

	/* flush skb_cache if it is filled */
	if (unlikely(nc->skb_count == NAPI_SKB_CACHE_SIZE)) {
		kmem_cache_free_bulk(skbuff_head_cache, NAPI_SKB_CACHE_SIZE,
				     nc->skb_cache);
		nc->skb_count = 0;
	}
}
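The batching that makes the atomicity requirement matter can be modelled in plain userspace C. This is only a hedged sketch: `model_cache`, `model_defer()` and the `bulk_frees` counter (standing in for `kmem_cache_free_bulk()`) are hypothetical names, not kernel API. The point is that the unlocked `nc->skb_count++` plus the flush at NAPI_SKB_CACHE_SIZE are only safe because softirq / BH-disabled context keeps the task pinned to the CPU and unpreempted while the per-CPU state is updated.

```c
#include <assert.h>
#include <stddef.h>

/* Userspace model of the per-CPU napi_alloc_cache batching;
 * names here are hypothetical stand-ins, not kernel API. */
#define NAPI_SKB_CACHE_SIZE 64

struct model_cache {
	void *skb_cache[NAPI_SKB_CACHE_SIZE];
	unsigned int skb_count;
};

/* Counts flushes; stands in for kmem_cache_free_bulk(). */
static unsigned int bulk_frees;

static void model_defer(struct model_cache *nc, void *skb)
{
	/* Unlocked read-modify-write of per-CPU state: safe only
	 * because the caller runs in softirq / BH-disabled context. */
	nc->skb_cache[nc->skb_count++] = skb;

	/* Flush the cache once it is full, as _kfree_skb_defer() does. */
	if (nc->skb_count == NAPI_SKB_CACHE_SIZE) {
		bulk_frees++;	/* would call kmem_cache_free_bulk() */
		nc->skb_count = 0;
	}
}
```

If an interrupt or migration could slip in between the increment and the store, two contexts could claim the same `skb_cache` slot; that is the corruption the assert is meant to catch early.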
> > +#define lockdep_assert_in_softirq()					\
> > +do {									\
> > +	WARN_ON_ONCE(__lockdep_enabled &&				\
> > +		     (!in_softirq() || in_irq() || in_nmi()));		\
> > +} while (0)
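The condition the macro warns on can be sanity-checked against the preempt_count bit layout with a small userspace model (a sketch mirroring the mask layout of include/linux/preempt.h; `assert_would_fire()` and the global `preempt_count` are hypothetical helpers). It also makes Peter's ambiguity concrete: in_softirq() is non-zero both while a softirq handler is running and while BH is merely disabled, so the extra in_irq()/in_nmi() terms are what rule out the harder contexts that can interrupt softirq.

```c
#include <assert.h>

/* Userspace model of the preempt_count bit layout, mirroring
 * include/linux/preempt.h; helper names are hypothetical. */
#define SOFTIRQ_SHIFT	8
#define HARDIRQ_SHIFT	16
#define NMI_SHIFT	20

#define SOFTIRQ_OFFSET		(1UL << SOFTIRQ_SHIFT)	/* serving softirq */
#define SOFTIRQ_DISABLE_OFFSET	(2UL << SOFTIRQ_SHIFT)	/* BH disabled */
#define HARDIRQ_OFFSET		(1UL << HARDIRQ_SHIFT)
#define NMI_OFFSET		(1UL << NMI_SHIFT)

#define SOFTIRQ_MASK	(0xffUL << SOFTIRQ_SHIFT)
#define HARDIRQ_MASK	(0xfUL << HARDIRQ_SHIFT)
#define NMI_MASK	(0xfUL << NMI_SHIFT)

static unsigned long preempt_count;

/* Non-zero while serving softirq OR while BH is disabled: ambiguous. */
static int in_softirq(void) { return !!(preempt_count & SOFTIRQ_MASK); }
static int in_irq(void)     { return !!(preempt_count & HARDIRQ_MASK); }
static int in_nmi(void)     { return !!(preempt_count & NMI_MASK); }

/* True when lockdep_assert_in_softirq() would WARN (lockdep enabled). */
static int assert_would_fire(void)
{
	return !in_softirq() || in_irq() || in_nmi();
}
```

Both the serving-softirq and BH-disabled states satisfy the assert, which is exactly what napi_consume_skb() needs: either state guarantees the per-CPU cache cannot be touched concurrently from the same CPU's BH context.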