Message-ID: <c744c40b-2b38-4911-977d-61786de73791@lunn.ch>
Date: Sat, 10 May 2025 15:53:47 +0200
From: Andrew Lunn <andrew@...n.ch>
To: Ilias Apalodimas <ilias.apalodimas@...aro.org>
Cc: Jakub Kicinski <kuba@...nel.org>, Byungchul Park <byungchul@...com>,
willy@...radead.org, almasrymina@...gle.com,
kernel_team@...ynix.com, 42.hyeyoo@...il.com, linux-mm@...ck.org,
hawk@...nel.org, netdev@...r.kernel.org
Subject: Re: [RFC] shrinking struct page (part of page pool)
On Sat, May 10, 2025 at 10:02:59AM +0300, Ilias Apalodimas wrote:
> Hi Jakub
>
> [...]
>
> > > >
> > > > struct bump {
> > > >         unsigned long _page_flags;
> > > >         unsigned long bump_magic;
> > > >         struct page_pool *bump_pp;
> > > >         unsigned long _pp_mapping_pad;
> > > >         unsigned long dma_addr;
> > > >         atomic_long_t bump_ref_count;
> > > >         unsigned int _page_type;
> > > >         atomic_t _refcount;
> > > > };
> > > >
> > > > To network guys, any thoughts on it?
> > > > To Willy, do I understand correctly your direction?
> > > >
> > > > Plus, it's quite another issue but I'm curious, that is, what do you
> > > > guys think about moving the bump allocator (= page pool) code from
> > > > network to mm? I'd like to start on the work once I've gathered
> > > > opinions from both Willy and the network guys.
> >
> > I don't see any benefit from moving page pool to MM. It is quite
> > networking specific. But we can discuss this later. Moving code
> > is trivial, it should not be the initial focus.
>
> Random thoughts here until I look at the patches.
> The concept of devices doing DMA + recycling the used buffer
> transcends networking.
Do you know of any other subsystem which takes a page, splits it into
two, and then uses each half independently for DMA and recycling? A
typical packet is 1514 octets, so you can get two in a page.
Andrew