Message-ID: <CAC_iWjLsXp1eeR9U+VD+wXCxgCXYUrxbcNU-Pc+pqMLHn5wR7A@mail.gmail.com>
Date: Mon, 19 May 2025 08:38:55 +0300
From: Ilias Apalodimas <ilias.apalodimas@...aro.org>
To: Andrew Lunn <andrew@...n.ch>
Cc: Jakub Kicinski <kuba@...nel.org>, Byungchul Park <byungchul@...com>, willy@...radead.org, 
	almasrymina@...gle.com, kernel_team@...ynix.com, 42.hyeyoo@...il.com, 
	linux-mm@...ck.org, hawk@...nel.org, netdev@...r.kernel.org
Subject: Re: [RFC] shrinking struct page (part of page pool)

Hi Andrew

Apologies for the late reply,

On Sat, 10 May 2025 at 16:53, Andrew Lunn <andrew@...n.ch> wrote:
>
> On Sat, May 10, 2025 at 10:02:59AM +0300, Ilias Apalodimas wrote:
> > Hi Jakub
> >
> > [...]
> >
> > > > >
> > > > >    struct bump {
> > > > >     unsigned long _page_flags;
> > > > >     unsigned long bump_magic;
> > > > >     struct page_pool *bump_pp;
> > > > >     unsigned long _pp_mapping_pad;
> > > > >     unsigned long dma_addr;
> > > > >     atomic_long_t bump_ref_count;
> > > > >     unsigned int _page_type;
> > > > >     atomic_t _refcount;
> > > > >    };
> > > > >
> > > > > To the network folks: any thoughts on it?
> > > > > To Willy: do I understand your direction correctly?
> > > > >
> > > > > Plus, it's quite a separate issue, but I'm curious: what do you
> > > > > guys think about moving the bump allocator (= page pool) code from
> > > > > networking to mm?  I'd like to start on that work once I've gathered
> > > > > opinions from both Willy and the networking folks.
> > >
> > > I don't see any benefit from moving page pool to MM. It is quite
> > > networking specific. But we can discuss this later. Moving code
> > > is trivial, it should not be the initial focus.
> >
> > Random thoughts here until I look at the patches.
> > The concept of devices doing DMA + recycling the used buffer
> > transcends networking.
>
> Do you know of any other subsystem which takes a page, splits it into
> two, and then uses each half independently for DMA and recycling? A
> typical packet is 1514 octets, so you can get two in a page.

No, but OTOH the recycling is not inherently tied to having multiple
fragments of a page, so I assumed more subsystems would benefit from
not constantly re-allocating and re-mapping DMA pages.

Thanks
/Ilias
>
>         Andrew
