Message-ID: <6cdbe746-2c6f-f698-11d4-9f86d2c4e5cc@suse.cz>
Date: Wed, 25 May 2022 22:54:42 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>, patches@...ts.linux.dev,
Roman Gushchin <roman.gushchin@...ux.dev>,
Hyeonggon Yoo <42.hyeyoo@...il.com>,
Geert Uytterhoeven <geert@...ux-m68k.org>,
Alexander Potapenko <glider@...gle.com>
Subject: Re: [GIT PULL] slab for 5.19
+Cc Geert and Alexander
On 5/25/22 20:29, Linus Torvalds wrote:
> On Mon, May 23, 2022 at 2:54 AM Vlastimil Babka <vbabka@...e.cz> wrote:
>>
>> The stackdepot conversion was already attempted last year but
>> reverted by ae14c63a9f20. The memory overhead (while not actually
>> enabled on boot) has been meanwhile solved by making the large
>> stackdepot allocation dynamic.
>
> Why do I still see
>
> +config STACK_HASH_ORDER
> +	int "stack depot hash size (12 => 4KB, 20 => 1024KB)"
> +	range 12 20
> +	default 20
>
> there then?
>
> All that seems to have happened is that it's not a static allocation
> any more, but it's still a big allocation very early at boot by
> default.
>
> The people who complained about this last time were on m68k machines
> iirc, and 1MB there is not insignificant.
My main concern was that configs that enable SLUB_DEBUG (quite common)
shouldn't pay the stackdepot memory overhead unless people actually
enable slub object tracking on boot because they are debugging
something. It's possible I misunderstood Geert's point, though.
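Roughly the idea, as a sketch only (the hook and variable names below are
made up and the actual wiring in the series differs, but this is the shape
of it):

/*
 * Sketch only: defer the stackdepot hash allocation until someone
 * actually asks for slub object tracking on the command line, so a
 * SLUB_DEBUG=y kernel booted without it pays nothing.  The hook and
 * variable names here are illustrative, not the real slub code.
 */
#include <linux/init.h>
#include <linux/stackdepot.h>
#include <linux/string.h>

static bool want_stack_depot __initdata;

static int __init track_setup(char *str)
{
	/* 'U' asks for alloc/free user tracking, which needs stackdepot */
	if (str && strchr(str, 'U'))
		want_stack_depot = true;
	return 1;
}
__setup("slub_debug", track_setup);

void __init slab_tracking_init(void)
{
	if (want_stack_depot)
		stack_depot_init();	/* only now allocate the hash table */
}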
> It's not at all clear to me why that allocation should be that kind of
> fixed number, and if it's a fixed number, why it should be the maximum
> one by default. That seems entirely broken.
As I understand it, it's a tradeoff between memory overhead due to the
hash table size and CPU overhead due to the length of the collision chains.
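To illustrate with made-up numbers (a userspace toy, not kernel code):

/*
 * Back-of-the-envelope illustration: the depot hash table has
 * 1 << order buckets (one pointer each), so with N recorded stack
 * traces a uniform hash gives an expected chain length of
 * N / (1 << order).  Shrinking the table saves memory but makes
 * every lookup walk a longer chain.
 */
#include <stdio.h>

int main(void)
{
	unsigned long stacks = 50000;	/* assumed number of unique stacks */

	for (int order = 12; order <= 20; order += 4) {
		unsigned long buckets = 1UL << order;

		printf("order %2d: %7lu buckets, avg chain length %.2f\n",
		       order, buckets, (double)stacks / buckets);
	}
	return 0;
}

i.e. each step down in order halves the bucket memory but doubles the
average chain walked on every lookup/insert.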
> I've pulled this, but considering that it got reverted once, I'm
> really fed up with this kind of thing. This needs to be fixed.
Right, I'll try to convert stackdepot to rhashtable. If that turns out to
be infeasible for some reason, we could at least have an "auto" default
that sizes the table according to how much memory the system has.
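Something along these lines, purely as a sketch (the scaling factor and
function name are arbitrary, not a proposal of the exact heuristic):

/*
 * Sketch of one possible "auto" default: pick the hash order from how
 * much RAM the machine has, clamped to the current 12..20 range, so a
 * small m68k box gets a few KB while a big server still gets the
 * full-size table.  The per-64-pages scaling below is made up.
 */
#include <linux/init.h>
#include <linux/log2.h>
#include <linux/minmax.h>
#include <linux/mm.h>		/* totalram_pages() */

static unsigned int __init stack_hash_order_auto(void)
{
	/* roughly one bucket per 64 pages of RAM */
	unsigned long buckets = totalram_pages() / 64;
	unsigned int order = buckets ? ilog2(buckets) : 0;

	return clamp(order, 12u, 20u);
}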
> Because I'm _this_ close to just reverting it again, and saying "No,
> you tried this crap already, didn't learn from the last time, and then
> did the same thing all over again just in a different guise".
>
> Linus