[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <ZXS-naeBjoVrGTY9@lore-desk>
Date: Sat, 9 Dec 2023 20:23:09 +0100
From: Lorenzo Bianconi <lorenzo@...nel.org>
To: Jakub Kicinski <kuba@...nel.org>
Cc: Jesper Dangaard Brouer <hawk@...nel.org>, aleksander.lobakin@...el.com,
netdev@...r.kernel.org, davem@...emloft.net, edumazet@...gle.com,
pabeni@...hat.com, lorenzo.bianconi@...hat.com, bpf@...r.kernel.org,
toke@...hat.com, willemdebruijn.kernel@...il.com,
jasowang@...hat.com, sdf@...gle.com
Subject: Re: [PATCH v3 net-next 2/2] xdp: add multi-buff support for xdp
running in generic mode
> On Wed, 6 Dec 2023 13:41:49 +0100 Jesper Dangaard Brouer wrote:
> > BUT then I realized that PP have a weakness, which is the return/free
> > path that need to take a normal spin_lock, as that can be called from
> > any CPU (unlike the RX/alloc case). Thus, I fear that making multiple
> > devices share a page_pool via softnet_data, increase the chance of lock
> > contention when packets are "freed" returned/recycled.
>
> I was thinking we can add a pcpu CPU ID to page pool so that
> napi_pp_put_page() has a chance to realize that its on the "right CPU"
> and feed the cache directly.
Are we going to use these page_pools just for virtual devices (e.g. veth) or
even for hw NICs? If we do not bound the page_pool to a netdevice I think we
can't rely on it to DMA map/unmap the buffer, right?
Moreover, are we going to rework page_pool stats first? It seems a bit weird to
have a percpu struct with a percpu pointer in it, right?
Regards,
Lorenzo
Download attachment "signature.asc" of type "application/pgp-signature" (229 bytes)
Powered by blists - more mailing lists