Message-ID: <CAKgT0Udz74tvTL9TfT4boajCFpAog4juJjW83pxEvQ7RNMFGDw@mail.gmail.com>
Date: Tue, 25 Jul 2023 17:02:42 -0700
From: Alexander Duyck <alexander.duyck@...il.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: davem@...emloft.net, netdev@...r.kernel.org, edumazet@...gle.com,
pabeni@...hat.com, corbet@....net, linux-doc@...r.kernel.org
Subject: Re: [PATCH net] docs: net: clarify the NAPI rules around XDP Tx
On Tue, Jul 25, 2023 at 1:41 PM Jakub Kicinski <kuba@...nel.org> wrote:
>
> On Tue, 25 Jul 2023 13:10:18 -0700 Alexander Duyck wrote:
> > On Tue, Jul 25, 2023 at 11:55 AM Jakub Kicinski <kuba@...nel.org> wrote:
> > > > This isn't accurate, and I would say it is somewhat dangerous advice.
> > > > The Tx still needs to be processed regardless of whether it is
> > > > processing page_pool pages or XDP pages. I agree the Rx should not be
> > > > processed, but the Tx must be processed using mechanisms that do NOT
> > > > make use of NAPI optimizations when budget is 0.
> > > >
> > > > So specifically, xdp_return_frame is safe in non-NAPI Tx cleanup. The
> > > > xdp_return_frame_rx_napi is not.
> > > >
> > > > Likewise there is napi_consume_skb which will use either a NAPI or
> > > > non-NAPI version of things depending on whether or not budget is 0.
> > > >
> > > > For the page_pool calls there is the "allow_direct" argument that is
> > > > meant to decide whether or not to recycle directly into the page_pool
> > > > cache. It should only be used in the Rx handler itself when budget is
> > > > non-zero.
> > > >
> > > > I realise this was written up in response to a patch on the Mellanox
> > > > driver. Based on the patch in question it looks like they were calling
> > > > page_pool_recycle_direct outside of NAPI context. There is an explicit
> > > > warning above that function about NOT calling it outside of NAPI
> > > > context.
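To make what I was describing above a bit more concrete, a budget-aware
Tx cleanup would look roughly like this. Just a sketch, all of the
"example_*" names are made up and not from any particular driver; the
only real APIs here are napi_consume_skb(), xdp_return_frame() and
xdp_return_frame_rx_napi():

struct example_tx_buf {
	struct sk_buff *skb;	/* non-NULL for skb entries */
	struct xdp_frame *xdpf;	/* non-NULL for XDP entries */
};

static void example_clean_tx(struct example_ring *ring, int budget)
{
	while (example_tx_completed(ring)) {
		struct example_tx_buf *buf = example_next_buf(ring);

		if (buf->skb) {
			/* napi_consume_skb() only uses the NAPI skb
			 * cache when budget is non-zero, so it is safe
			 * from netpoll (budget == 0) as well. */
			napi_consume_skb(buf->skb, budget);
		} else if (budget) {
			/* In NAPI context the _rx_napi variant is fine */
			xdp_return_frame_rx_napi(buf->xdpf);
		} else {
			/* budget == 0 (e.g. netpoll): use the safe call */
			xdp_return_frame(buf->xdpf);
		}
	}
}
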
> > >
> > > Unless I'm missing something budget=0 can be called from hard IRQ
> > > context. And page pool takes _bh() locks. So unless we "teach it"
> > > not to recycle _anything_ in hard IRQ context, it is not safe to call.
> >
> > That is the thing. We have to be able to free the pages regardless of
> > context. Otherwise we make a huge mess of things. Also there isn't
> > much of a way to differentiate between page_pool and non-page_pool
> > pages because an skb can be composed of page pool pages just as
> > easily as an XDP frame can be. You would just have to enable routing
> > or bridging for Rx frames to end up with page pool pages in the Tx
> > path.
> >
> > As far as netpoll itself we are safe because it has BH disabled and so
>
> We do? Can you point me to where netpoll disables BH?
I misread the code. Basically what is going on is that netconsole is
explicitly disabling interrupts via spin_lock_irqsave in write_msg.
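For reference, this is roughly what write_msg() in
drivers/net/netconsole.c does (heavily trimmed down), so anything
netpoll ends up calling in the driver runs with IRQs disabled:

static void write_msg(struct console *con, const char *msg, unsigned int len)
{
	unsigned long flags;
	struct netconsole_target *nt;

	spin_lock_irqsave(&target_list_lock, flags);
	list_for_each_entry(nt, &target_list, list) {
		if (nt->enabled && netif_running(nt->np.dev))
			/* the real code chunks the message into
			 * MAX_PRINT_CHUNK sized pieces */
			netpoll_send_udp(&nt->np, msg, len);
	}
	spin_unlock_irqrestore(&target_list_lock, flags);
}
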
> > as a result page_pool doesn't use the _bh locks. There is code in
> > place to account for that in the producer locking code, and if it were
> > an issue we would have likely blown up long before now. The fact is
> > that page_pool has proliferated into skbs, so you are still freeing
> > page_pool pages indirectly anyway.
> >
> > That said, there are calls that are not supposed to be used outside of
> > NAPI context, such as page_pool_recycle_direct(). Those have mostly
> > been called out in the page_pool.h header itself, so if someone
> > decides to shoot themselves in the foot with one of those, that is on
> > them. What we need to watch out for are people abusing the "direct"
> > calls and such or just passing "true" for allow_direct in the
> > page_pool calls without taking proper steps to guarantee the context.
> >
> > > > We cannot make this distinction if both XDP and skb are processed in
> > > > the same Tx queue. Otherwise you will cause the Tx to stall and break
> > > > netpoll. If the ring is XDP only then yes, it can be skipped like what
> > > > they did in the Mellanox driver, but if it is mixed then the XDP side
> > > > of things needs to use the "safe" versions of the calls.
> > >
> > > IDK, a rare delay in sending of a netpoll message is not a major
> > > concern.
> >
> > The whole point of netpoll is to get data out after something like a
> > crash. Otherwise we could have just been using regular NAPI. If the Tx
> > ring is hung it might not be a delay but rather a complete stall that
> > prevents data on the Tx queue from being transmitted, since the
> > system will likely not be recovering. Worse yet, if it is a scenario
> > where the Tx queue can recover, it might trigger the Tx watchdog,
> > since I could see scenarios where the ring fills but interrupts were
> > dropped because of the netpoll.
>
> I'm not disagreeing with you. I just don't have time to take a deeper
> look and add the IRQ checks myself and I'm 90% sure the current code
> can't work with netpoll. So I thought I'd at least document that :(
So looking at it more I realized the way we are getting around the
issue is that the skbuffs are ALWAYS freed in softirq context.
Basically we hand them off to dev_consume_skb_any, which will hand
them off to dev_kfree_skb_irq_reason, which queues them up to be
processed in the net_tx_action handler.
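Roughly, the logic in net/core/dev.c that makes that safe looks like
this (paraphrased from memory, not the exact source):

void __dev_kfree_skb_any(struct sk_buff *skb, enum skb_drop_reason reason)
{
	if (in_hardirq() || irqs_disabled())
		/* defer the free: the skb is queued on the per-CPU
		 * completion_queue and NET_TX_SOFTIRQ is raised, so
		 * net_tx_action() does the actual freeing later */
		dev_kfree_skb_irq_reason(skb, reason);
	else if (reason == SKB_CONSUMED)
		consume_skb(skb);
	else
		kfree_skb_reason(skb, reason);
}
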
As far as the page pool pages themselves, I wonder if we couldn't just
look at modifying __page_pool_put_page() so that it does something
similar to dev_consume_skb_any_reason(): if we are in hardirq context
or IRQs are disabled we just force the page to be freed.
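Something along these lines is what I have in mind (purely a
hypothetical sketch, not a tested patch):

static __always_inline struct page *
__page_pool_put_page(struct page_pool *pool, struct page *page,
		     unsigned int dma_sync_size, bool allow_direct)
{
	/* Mirror dev_consume_skb_any_reason(): if we cannot safely take
	 * the _bh locks, skip recycling entirely and give the page back
	 * to the page allocator instead. */
	if (in_hardirq() || irqs_disabled()) {
		page_pool_return_page(pool, page);
		return NULL;
	}

	/* ... existing recycling logic (direct cache / ptr_ring) ... */
}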