Message-ID: <20230823125448.Q89O9wFB@linutronix.de>
Date: Wed, 23 Aug 2023 14:54:48 +0200
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: Ratheesh Kannoth <rkannoth@...vell.com>
Cc: "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Geethasowjanya Akula <gakula@...vell.com>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>,
Jakub Kicinski <kuba@...nel.org>,
Jesper Dangaard Brouer <hawk@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Subbaraya Sundeep Bhatta <sbhatta@...vell.com>,
Sunil Kovvuri Goutham <sgoutham@...vell.com>,
Thomas Gleixner <tglx@...utronix.de>,
Hariprasad Kelam <hkelam@...vell.com>
Subject: Re: RE: [EXT] [BUG] Possible unsafe page_pool usage in octeontx2
On 2023-08-23 12:28:58 [+0000], Ratheesh Kannoth wrote:
> > From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
> > Sent: Wednesday, August 23, 2023 3:18 PM
> > Subject: [EXT] [BUG] Possible unsafe page_pool usage in octeontx2
> >
> > This breaks in octeontx2 where a worker is used to fill the buffer:
> > otx2_pool_refill_task() -> otx2_alloc_rbuf() -> __otx2_alloc_rbuf() ->
> > otx2_alloc_pool_buf() -> page_pool_alloc_frag().
> >
> As I understand it, the problem is that the workqueue may get scheduled on
> another CPU. If we use a BOUND workqueue, do you think this problem can be
> solved?
It would, but it is still open to less obvious races, for instance if the
IRQ/NAPI is assigned to another CPU while the workqueue is scheduled.
You would have to add additional synchronisation to ensure that nothing
bad can happen. This does not make it any simpler or prettier, nor does it
serve as a good example.
I would suggest staying away from the lock-less buffer when not in NAPI
and feeding pool->ring instead.
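Roughly what I mean, on the release side where the API already has such a
knob (a sketch only; the helper and its in_napi_poll flag are made up and
not octeontx2 code):

#include <net/page_pool.h>

/* When running outside the NAPI poll loop (e.g. from a workqueue),
 * release pages with allow_direct = false so they go through the locked
 * pool->ring instead of the lock-less per-NAPI cache.
 */
static void example_return_page(struct page_pool *pool, struct page *page,
				bool in_napi_poll)
{
	/* allow_direct may only be true from the NAPI callback that owns
	 * this pool; everywhere else it must be false.
	 */
	page_pool_put_full_page(pool, page, in_napi_poll);
}

As far as I can tell the alloc side has no equivalent flag, which is why
the refill worker ends up on the lock-less cache in the first place.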
> > BH is disabled, but the add of a page can still happen while the NAPI
> > callback runs on a remote CPU, corrupting the index/array.
> >
> > API-wise, I would suggest:
> >
> > diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> > index 7ff80b80a6f9f..b50e219470a36 100644
> > --- a/net/core/page_pool.c
> > +++ b/net/core/page_pool.c
> > @@ -612,7 +612,7 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
> >  			page_pool_dma_sync_for_device(pool, page,
> >  						      dma_sync_size);
> >  
> > -		if (allow_direct && in_softirq() &&
> > +		if (allow_direct && in_serving_softirq() &&
> >  		    page_pool_recycle_in_cache(page, pool))
> >  			return NULL;
> >
> > because the intention (as I understand it) is to be invoked from within the
> > NAPI callback (while softirq is served) and not if BH is just disabled due to a
> > lock or so.
> Could you help me understand where the in_softirq() check will break? If
> we TX a packet (dev_queue_xmit()) in process context on the same core,
> will the in_serving_softirq() check prevent it from recycling?
If a check is added to page_pool_alloc_pages() then it will trigger if
you fill the buffer from your ->ndo_open() callback, and also if you
invoke dev_queue_xmit() from process context. But the page will be added
to &pool->ring instead.
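To put it in code (illustration only, not a real patch, and the helper
name is made up):

#include <linux/preempt.h>

/* in_softirq() is true both while a softirq is being served and while BH
 * is merely disabled (local_bh_disable(), a BH-disabling lock, ...).
 * in_serving_softirq() is true only in the former case, i.e. from the
 * NAPI poll callback, which is the only place the lock-less cache is
 * safe to use.
 */
static bool example_can_recycle_direct(bool allow_direct)
{
	return allow_direct && in_serving_softirq();
}

So the recycle from process context in your dev_queue_xmit() example fails
this check and the page falls back to &pool->ring.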
Sebastian