Message-ID: <146558f5-9f5c-e65f-0177-5f736fe663cd@linux.com>
Date: Thu, 14 Dec 2023 12:14:40 -0800 (PST)
From: "Christoph Lameter (Ampere)" <cl@...ux.com>
To: Matthew Wilcox <willy@...radead.org>
cc: Vlastimil Babka <vbabka@...e.cz>, Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>, Joonsoo Kim <iamjoonsoo.kim@....com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Hyeonggon Yoo <42.hyeyoo@...il.com>,
Alexander Potapenko <glider@...gle.com>, Marco Elver <elver@...gle.com>,
Dmitry Vyukov <dvyukov@...gle.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, maple-tree@...ts.infradead.org,
kasan-dev@...glegroups.com
Subject: Re: [PATCH RFC v3 0/9] SLUB percpu array caches and maple tree nodes

On Wed, 29 Nov 2023, Matthew Wilcox wrote:
>> In order to make the SLUB in-page freelists work better you need to have
>> larger freelists, and that comes with larger page sizes. I.e. boot with
>> slub_min_order=5 or so to increase performance.
>
> That comes with its own problems, of course.
Well I thought you were solving those with the folios?
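For reference, the higher minimum order mentioned above is set on the kernel command line. A minimal sketch, assuming a GRUB-based boot (the exact bootloader configuration varies by distribution):

```shell
# Sketch: add the SLUB tuning to the kernel command line in /etc/default/grub,
# then regenerate the bootloader config and reboot.
GRUB_CMDLINE_LINUX="${GRUB_CMDLINE_LINUX} slub_min_order=5"
# e.g.: sudo update-grub   (Debian/Ubuntu)
#  or:  sudo grub2-mkconfig -o /boot/grub2/grub.cfg

# After rebooting, the running kernel's parameters can be checked with:
# grep -o 'slub_min_order=[0-9]*' /proc/cmdline
```

slub_min_order is a documented SLUB boot parameter; the GRUB file paths and regeneration commands above are distribution-dependent assumptions.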
>> Also this means increasing TLB pressure. The in-page freelists of SLUB
>> cause objects from the same page to be served. The SLAB queueing approach
>> results in objects being mixed from any address, and thus neighboring
>> objects may require more TLB entries.
>
> Is that still a concern for modern CPUs? We're using 1GB TLB entries
> these days, and there are usually thousands of TLB entries. This feels
> like more of a concern for a 90s era CPU.
ARM kernel memory is mapped with 4K entries by default since rodata=full is
the default. Security concerns screw it up.