Message-ID: <20230701170155.6f72e4b8@kernel.org>
Date: Sat, 1 Jul 2023 17:01:55 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Alexander Lobakin <aleksander.lobakin@...el.com>
Cc: "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Paolo Abeni <pabeni@...hat.com>,
Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
Larysa Zaremba <larysa.zaremba@...el.com>,
Yunsheng Lin <linyunsheng@...wei.com>,
Alexander Duyck <alexanderduyck@...com>,
Jesper Dangaard Brouer <hawk@...nel.org>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH RFC net-next 0/4] net: page_pool: a couple assorted
optimizations
On Thu, 29 Jun 2023 17:23:01 +0200 Alexander Lobakin wrote:
> #3: new, prereq to #4. Add NAPI state flag, which would indicate
> napi->poll() is running right now, so that napi->list_owner would
> point to the CPU where it's being run, not just scheduled;
> #4: new. In addition to recycling skb PP pages directly when @napi_safe
>     is set, check for the flag from #3, which will mean the same if
>     ->list_owner is pointing to us. This allows using direct recycling
>     any time we're inside a NAPI polling loop or the GRO processing that
>     runs right after it, covering far more cases than we do right now.
You know NAPI pretty well, so I'm worried I'm missing something.
I don't think the new flag adds any value: NAPI does not have to
be running; you can drop patch 3 and use in_softirq() instead of
the new flag, AFAIU.
The reason I did not do that myself is that I wasn't sure there's no
weird (netcons?) case where an skb gets freed from IRQ context :(