Message-ID: <20230629152305.905962-1-aleksander.lobakin@intel.com>
Date: Thu, 29 Jun 2023 17:23:01 +0200
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>
Cc: Alexander Lobakin <aleksander.lobakin@...el.com>,
Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
Larysa Zaremba <larysa.zaremba@...el.com>,
Yunsheng Lin <linyunsheng@...wei.com>,
Alexander Duyck <alexanderduyck@...com>,
Jesper Dangaard Brouer <hawk@...nel.org>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: [PATCH RFC net-next 0/4] net: page_pool: a couple assorted optimizations
Here's a spin-off of the IAVF PP series[0], with two runtime (hotpath)
optimizations and one compile-time one. They're based on and tested on
top of the hybrid PP allocation series[1], but don't require it to work
and are in general independent of it and of each other.
Per-patch breakdown:
#1: was already on the lists, but this time it's done the other way
    around, the one Alex Duyck proposed during the review of the previous
    series. Slightly reduces the amount of C preprocessing by no longer
    including <net/page_pool.h> in <linux/skbuff.h> (which is included
    in half of the kernel sources). Especially useful with the
    above-mentioned series applied, as it makes page_pool.h heavier
    (see the first sketch after this list);
#2: don't call DMA sync externals when they won't do anything anyway, by
    applying some heuristics a bit earlier, at the point where a new page
    is allocated; this one was also on the lists (sketched below);
#3: new, a prerequisite for #4. Add a NAPI state flag indicating that
    napi->poll() is running right now, so that napi->list_owner points
    to the CPU where the NAPI is actually running, not just where it was
    scheduled;
#4: new. In addition to recycling skb PP pages directly when @napi_safe
    is set, check for the flag from #3, which means the same when
    ->list_owner points to the current CPU. This allows using direct
    recycling any time we're inside a NAPI polling loop or the GRO
    processing that runs right after it, covering way more cases than it
    does right now (see the combined sketch after this list).
(complete tree with [1] + this + [0] is available here: [2])
[0] https://lore.kernel.org/netdev/20230530150035.1943669-1-aleksander.lobakin@intel.com
[1] https://lore.kernel.org/netdev/20230629120226.14854-1-linyunsheng@huawei.com
[2] https://github.com/alobakin/linux/commits/iavf-pp-frag
Alexander Lobakin (4):
net: skbuff: don't include <net/page_pool.h> to <linux/skbuff.h>
net: page_pool: avoid calling no-op externals when possible
net: add flag to indicate NAPI/GRO is running right now
net: skbuff: always recycle PP pages directly when inside a NAPI loop
drivers/net/ethernet/engleder/tsnep_main.c | 1 +
drivers/net/ethernet/freescale/fec_main.c | 1 +
.../marvell/octeontx2/nic/otx2_common.c | 1 +
.../ethernet/marvell/octeontx2/nic/otx2_pf.c | 1 +
.../ethernet/mellanox/mlx5/core/en/params.c | 1 +
.../net/ethernet/mellanox/mlx5/core/en/xdp.c | 1 +
drivers/net/wireless/mediatek/mt76/mt76.h | 1 +
include/linux/netdevice.h | 2 +
include/linux/skbuff.h | 3 +-
include/net/page_pool.h | 5 +-
net/core/dev.c | 23 +++++--
net/core/page_pool.c | 62 +++++++------------
net/core/skbuff.c | 29 +++++++++
13 files changed, 83 insertions(+), 48 deletions(-)
---
I'm really curious about feedback on #3. Implementing the idea correctly
(this way or another) potentially unblocks a lot more interesting stuff
(besides #4).
--
2.41.0