Message-ID: <bb1d4c3e-0c1a-0868-5c1f-9a1de8692db1@linux.com>
Date: Mon, 15 Apr 2024 09:20:29 -0700 (PDT)
From: "Christoph Lameter (Ampere)" <cl@...ux.com>
To: Jianfeng Wang <jianfeng.w.wang@...cle.com>
cc: Vlastimil Babka <vbabka@...e.cz>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"penberg@...nel.org" <penberg@...nel.org>,
"rientjes@...gle.com" <rientjes@...gle.com>,
"iamjoonsoo.kim@....com" <iamjoonsoo.kim@....com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
Junxiao Bi <junxiao.bi@...cle.com>
Subject: Re: [PATCH] slub: limit number of slabs to scan in count_partial()
On Sat, 13 Apr 2024, Jianfeng Wang wrote:
>>>>>> kmem_cache_shrink() will explicitly sort the partial lists to put the
>>>>>> partial pages in that order.
>>>>>>
>
> Realized that I'd need to run "echo 1 > /sys/kernel/slab/dentry/shrink" to sort the list explicitly.
> After that, the numbers become:
> N = 10000 -> diff = 7.1 %
> N = 20000 -> diff = 5.7 %
> N = 25000 -> diff = 5.4 %
> So, expecting ~5-7% difference after shrinking.
That still looks ok to me.
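
For context, the approximation whose error is being measured above can be sketched in a
standalone userspace program: count objects on a bounded number of slabs taken from the
partial list and scale up by the total list length. The limit value, the choice to sample
from both ends of the (sorted) list, and all names below are illustrative assumptions for
the sketch, not the patch's actual code.

	/*
	 * Standalone sketch (not kernel code) of limiting how many slabs a
	 * count_partial()-style walk visits.  An array stands in for the
	 * per-node partial list.  64-bit unsigned long assumed for the
	 * scaling multiply.
	 */
	#include <stdio.h>
	#include <stdlib.h>

	struct slab {
		unsigned int inuse;	/* objects in use on this slab */
	};

	/* Exact count: walk every "slab" on the list. */
	static unsigned long count_exact(const struct slab *slabs,
					 unsigned long nr_partial)
	{
		unsigned long i, x = 0;

		for (i = 0; i < nr_partial; i++)
			x += slabs[i].inuse;
		return x;
	}

	/*
	 * Approximate count: scan at most `limit` slabs, half from the head
	 * and half from the tail, then extrapolate to the full list length.
	 */
	static unsigned long count_approx(const struct slab *slabs,
					  unsigned long nr_partial,
					  unsigned long limit)
	{
		unsigned long i, scanned = 0, x = 0;

		if (nr_partial <= limit)
			return count_exact(slabs, nr_partial);

		for (i = 0; i < limit / 2; i++, scanned++)	/* head */
			x += slabs[i].inuse;
		for (i = nr_partial - limit / 2; i < nr_partial; i++, scanned++) /* tail */
			x += slabs[i].inuse;

		/* Scale the sampled total up to the whole partial list. */
		return x * nr_partial / scanned;
	}

	int main(void)
	{
		unsigned long nr_partial = 250000, limit = 10000, i;
		struct slab *slabs = calloc(nr_partial, sizeof(*slabs));

		if (!slabs)
			return 1;

		/* A shrunk list is sorted by objects-in-use; emulate that gradient. */
		for (i = 0; i < nr_partial; i++)
			slabs[i].inuse = 1 + (unsigned int)((i * 63) / nr_partial);

		unsigned long exact = count_exact(slabs, nr_partial);
		unsigned long approx = count_approx(slabs, nr_partial, limit);

		printf("exact=%lu approx=%lu diff=%.1f%%\n", exact, approx,
		       100.0 * ((double)approx - (double)exact) / (double)exact);
		free(slabs);
		return 0;
	}

Varying `limit` in this sketch plays the same role as the N values quoted above: a larger
sample shrinks the error of the extrapolated count relative to the exact walk.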