Message-ID: <aCK6J2YtA7vi1Kjz@casper.infradead.org>
Date: Tue, 13 May 2025 04:19:03 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Byungchul Park <byungchul@...com>
Cc: netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, kernel_team@...ynix.com, kuba@...nel.org,
almasrymina@...gle.com, ilias.apalodimas@...aro.org,
harry.yoo@...cle.com, hawk@...nel.org, akpm@...ux-foundation.org,
ast@...nel.org, daniel@...earbox.net, davem@...emloft.net,
john.fastabend@...il.com, andrew+netdev@...n.ch,
edumazet@...gle.com, pabeni@...hat.com, vishal.moola@...il.com
Subject: Re: [RFC 19/19] mm, netmem: remove the page pool members in struct
page

On Tue, May 13, 2025 at 10:42:00AM +0900, Byungchul Park wrote:
> Just in case, lemme explain what I meant, for *example*:

I understood what you meant.

> In here, operating on struct netmem_desc can smash _mapcount and
> _refcount in struct page unexpectedly, even though sizeof(struct
> netmem_desc) <= sizeof(struct page). That's why I think the place holder
> is necessary until it completely gets separated so as to have its own
> instance.

We could tighten up the assert a bit, e.g.:

static_assert(sizeof(struct netmem_desc) <= offsetof(struct page, _refcount));
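
To illustrate the kind of compile-time guarantee that gives us (with
made-up, simplified layouts here -- not the real struct page or
struct netmem_desc definitions):

#include <assert.h>	/* static_assert (C11) */
#include <stddef.h>	/* offsetof */

/* Stand-in layouts, for illustration only. */
struct demo_page {
	unsigned long flags;
	void *pp;			/* page_pool owner */
	unsigned long dma_addr;
	unsigned long pp_magic;
	int _refcount;			/* must never be overlaid */
};

struct demo_netmem_desc {
	unsigned long flags;
	void *pp;
	unsigned long dma_addr;
	unsigned long pp_magic;
};

/*
 * The overlay may only cover the fields in front of _refcount; adding
 * a field to the descriptor that pushes its size past that offset
 * breaks the build instead of silently smashing _refcount at run time.
 */
static_assert(sizeof(struct demo_netmem_desc) <=
	      offsetof(struct demo_page, _refcount),
	      "netmem_desc must not overlay _refcount");

The real assert would of course be written against the actual struct
page, as above; the point is that any overlap is rejected when the
kernel is built rather than discovered as corruption at run time.
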
We _can't_ shrink struct page until struct folio is dynamically
allocated. The same patch series that dynamically allocates folio will
do the same for netmem and slab and ptdesc and ...