Message-ID: <48a24efa-a326-4cca-ab28-50c6251bf03a@suse.cz>
Date: Mon, 20 Oct 2025 12:31:53 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: David Hildenbrand <david@...hat.com>, Jiri Slaby <jirislaby@...nel.org>,
Matthew Wilcox <willy@...radead.org>, Mike Rapoport <rppt@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Brendan Jackman <jackmanb@...gle.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Johannes Weiner <hannes@...xchg.org>, Julia Lawall <Julia.Lawall@...ia.fr>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>, Michal Hocko
<mhocko@...e.com>, Suren Baghdasaryan <surenb@...gle.com>,
Zi Yan <ziy@...dia.com>, linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 0/3] mm: treewide: make get_free_pages() and return void *

On 10/20/25 11:13, David Hildenbrand wrote:
> On 20.10.25 11:08, Jiri Slaby wrote:
>> On 20. 10. 25, 11:02, David Hildenbrand wrote:
>>> Regarding the metadata overhead, in 2015 Linus wrote in that thread:
>>>
>>> "Long ago, allocating a page using kmalloc() was a bad idea, because
>>> there was overhead for it in the allocation and the code.
>>>
>>> These days, kmalloc() not only doesn't have the allocation overhead,
>>> but may actually scale better too, thanks to percpu caches etc."
>>>
>>> What's the status of that 10 years later?
>>
>> AFAI skimmed through the code, for allocations > 2 pages
>> (KMALLOC_MAX_CACHE_SIZE) -- if size is a constant -- slub resorts to
>> alloc_pages().
>>
>> For smaller ones (1 and 2 pages), there is very little overhead in
>> struct slab -- mm people, please correct me if I am wrong.
>
> If it's really only "struct slab", then there is currently no overhead.
> Once it is decoupled from "struct page", there would be some.

Yes, but there's potentially better scalability and more debugging
possibilities as benefits.
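
For reference, the dispatch Jiri describes looks roughly like this. This is
a simplified sketch, not the actual source; the real inline lives in
include/linux/slab.h and the helper names differ across kernel versions:

/*
 * Illustrative sketch of the kmalloc() size dispatch with SLUB.
 * Treat this as a rough outline only.
 */
#include <linux/slab.h>

static __always_inline void *kmalloc_dispatch_sketch(size_t size, gfp_t flags)
{
	if (__builtin_constant_p(size) && size) {
		/*
		 * Above KMALLOC_MAX_CACHE_SIZE (two pages with SLUB) there
		 * is no kmem_cache at all: the allocation is handed to the
		 * page allocator as a compound page, so no struct slab
		 * metadata is involved.  (kmalloc_large() stands in for
		 * whatever the large-kmalloc helper is called in a given
		 * kernel version.)
		 */
		if (size > KMALLOC_MAX_CACHE_SIZE)
			return kmalloc_large(size, flags);
	}
	/*
	 * Non-constant or cache-sized requests go through __kmalloc(),
	 * which picks one of the fixed kmalloc caches for sizes that fit
	 * and falls back to the large path at runtime otherwise.  Objects
	 * from those caches live in slabs described by struct slab, which
	 * today is still overlaid on struct page.
	 */
	return __kmalloc(size, flags);
}

With SLUB the "2 pages" cutoff is simply KMALLOC_MAX_CACHE_SIZE, since
KMALLOC_SHIFT_HIGH defaults to PAGE_SHIFT + 1 there.
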
> IIUC, I'm surprised that larger allocations wouldn't currently end up in
> PageSlab() pages.

Can you elaborate on why that's surprising?
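
For context, the status quo on the free side: kmalloc allocations above
KMALLOC_MAX_CACHE_SIZE are not slab folios today, and kfree() tells the two
cases apart roughly as below. This is a simplified sketch modeled on kfree()
in mm/slub.c; free_large_kmalloc() and slab_free() are mm-internal helpers
whose exact names and signatures depend on the kernel version:

static void kfree_sketch(const void *object)
{
	struct folio *folio;
	struct slab *slab;

	if (unlikely(ZERO_OR_NULL_PTR(object)))
		return;

	folio = virt_to_folio(object);
	if (unlikely(!folio_test_slab(folio))) {
		/*
		 * Not a slab folio: this was a large kmalloc allocation
		 * that came straight from the page allocator, so it is
		 * freed as plain compound pages.
		 */
		free_large_kmalloc(folio, (void *)object);
		return;
	}

	/* Slab-backed object: return it to its kmem_cache. */
	slab = folio_slab(folio);
	slab_free(slab->slab_cache, slab, (void *)object, _RET_IP_);
}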