Message-ID: <20231211090053.21cb357d@kernel.org>
Date: Mon, 11 Dec 2023 09:00:53 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: Lorenzo Bianconi <lorenzo@...nel.org>
Cc: Jesper Dangaard Brouer <hawk@...nel.org>, aleksander.lobakin@...el.com,
netdev@...r.kernel.org, davem@...emloft.net, edumazet@...gle.com,
pabeni@...hat.com, lorenzo.bianconi@...hat.com, bpf@...r.kernel.org,
toke@...hat.com, willemdebruijn.kernel@...il.com, jasowang@...hat.com,
sdf@...gle.com
Subject: Re: [PATCH v3 net-next 2/2] xdp: add multi-buff support for xdp
running in generic mode
On Sat, 9 Dec 2023 20:23:09 +0100 Lorenzo Bianconi wrote:
> Are we going to use these page_pools just for virtual devices (e.g. veth) or
> even for hw NICs? If we do not bind the page_pool to a netdevice I think we
> can't rely on it to DMA map/unmap the buffer, right?
Right, I don't think it's particularly useful for HW NICs.
Maybe for allocating skb heads? We could possibly kill
struct page_frag_1k and use PP page / frag instead.
But not sure how Eric would react :)
> Moreover, are we going to rework page_pool stats first? It seems a bit weird to
> have a percpu struct with a percpu pointer in it, right?
The per-CPU stuff is for recycling, IIRC. Even if PP is for a single
CPU we can still end up freeing packets which used its pages anywhere
in the system.
I don't disagree that we may end up with a lot of stats on a large
system, but seems tangential to per-cpu page pools.