Message-ID: <aDiBUr38QArXjO6v@google.com>
Date: Thu, 29 May 2025 15:46:26 +0000
From: Roman Gushchin <roman.gushchin@...ux.dev>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Christoph Lameter <cl@...ux.com>, David Rientjes <rientjes@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Harry Yoo <harry.yoo@...cle.com>,
Matthew Wilcox <willy@...radead.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] mm, slab: use frozen pages for large kmalloc
On Thu, May 29, 2025 at 10:56:26AM +0200, Vlastimil Babka wrote:
> Since slab pages are now frozen, it makes sense to have large kmalloc()
> objects behave the same as small kmalloc() objects, as the choice between
> the two is an implementation detail depending on allocation size.
>
> Notably, increasing refcount on a slab page containing kmalloc() object
> is not possible anymore, so it should be consistent for large kmalloc
> pages.
>
> Therefore, change large kmalloc to use the frozen pages API.
>
> Because of some unexpected fallout in the slab pages case (see commit
> b9c0e49abfca ("mm: decline to manipulate the refcount on a slab page")),
> implement the same kind of checks and warnings as part of this change.
>
> Notably, networking code using sendpage_ok() to determine whether the
> page refcount can be manipulated in the network stack should continue
> behaving correctly. Before this change, the function returns true for
> large kmalloc pages and the page refcount can be manipulated. After
> this change, the function will return false.
>
> Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
Acked-by: Roman Gushchin <roman.gushchin@...ux.dev>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index bf55206935c467f7508e863332063bb15f904a24..d3eb6adf9fa949fbd611470182a03c743b16aac7 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1549,6 +1549,8 @@ static inline void get_page(struct page *page)
> struct folio *folio = page_folio(page);
> if (WARN_ON_ONCE(folio_test_slab(folio)))
> return;
> + if (WARN_ON_ONCE(folio_test_large_kmalloc(folio)))
> + return;
> folio_get(folio);
I guess eventually we can convert them to VM_WARN_ON_ONCE()?