Message-ID: <e0222219-eb2d-5e1e-81e1-548eeb5f73e0@linux.com>
Date: Thu, 11 Apr 2024 10:02:25 -0700 (PDT)
From: "Christoph Lameter (Ampere)" <cl@...ux.com>
To: Jianfeng Wang <jianfeng.w.wang@...cle.com>
cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org, penberg@...nel.org,
rientjes@...gle.com, iamjoonsoo.kim@....com, akpm@...ux-foundation.org,
vbabka@...e.cz, junxiao.bi@...cle.com
Subject: Re: [PATCH] slub: limit number of slabs to scan in count_partial()

On Thu, 11 Apr 2024, Jianfeng Wang wrote:
> So, the fix is to limit the number of slabs to scan in
> count_partial(), and output an approximated result if the list is too
> long. Default to 10000 which should be enough for most sane cases.
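
For concreteness, a minimal user-space sketch of the capped-scan idea
quoted above (this is not the posted patch or the real mm/slub.c code;
struct pslab and approx_count_free() are invented names, and the
extrapolation step is only an assumption about how the "approximated
result" might be produced):

#include <stdio.h>

struct pslab {
	unsigned int objects;	/* total object slots in the slab */
	unsigned int inuse;	/* objects currently allocated */
	struct pslab *next;	/* next slab on the partial list */
};

/*
 * Count free objects on the first max_scan slabs of the partial list;
 * if the list is longer than that, extrapolate the per-slab average
 * over the whole list length (nr_partial).
 */
static unsigned long approx_count_free(struct pslab *head,
				       unsigned long nr_partial,
				       unsigned long max_scan)
{
	unsigned long nr_free = 0, scanned = 0;
	struct pslab *s;

	for (s = head; s && scanned < max_scan; s = s->next, scanned++)
		nr_free += s->objects - s->inuse;

	if (scanned && nr_partial > scanned)
		nr_free = nr_free * nr_partial / scanned;

	return nr_free;
}

int main(void)
{
	/* Three-slab partial list: 30/32, 20/32 and 2/32 objects in use. */
	struct pslab c = { 32, 2,  NULL };
	struct pslab b = { 32, 20, &c };
	struct pslab a = { 32, 30, &b };

	/* Scan only the first 2 of 3 slabs, then extrapolate. */
	printf("approx free objects: %lu (exact: 44)\n",
	       approx_count_free(&a, 3, 2));
	return 0;
}

With the toy list above the capped scan extrapolates to 21 free objects
against an exact count of 44, which is the kind of gap the reply below
is concerned about.
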
That is a creative approach. The problem though is that objects on the
partial lists are kind of sorted. The partial slabs with only a few free
objects left are at the start of the list so that allocations cause them
to be removed from the partial list fast. Full slabs do not need to be
tracked on any list.

The partial slabs with only a few objects in use are put at the end of
the partial list in the hope that the remaining objects will also be
freed, which would allow the slab folio itself to be freed.
So the density of allocated objects may be higher at the beginning of
the list.

kmem_cache_shrink() will explicitly sort the partial lists to put the
partial pages in that order.
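
To make that ordering concrete, a toy user-space simulation (all sizes
and counts invented) of a partial list laid out the way
kmem_cache_shrink() leaves it, nearly-full slabs first: a scan capped at
the first 10000 slabs and then extrapolated over the whole list clearly
underestimates the number of free objects.

#include <stdio.h>

#define NR_PARTIAL	30000	/* slabs on the (sorted) partial list */
#define OBJS_PER_SLAB	32
#define MAX_SCAN	10000	/* cap proposed in the patch */

int main(void)
{
	unsigned long exact = 0, scanned_free = 0, approx;
	unsigned long i;

	for (i = 0; i < NR_PARTIAL; i++) {
		/* Sorted order: free objects grow toward the tail. */
		unsigned int free_objs =
			1 + (unsigned int)(i * (OBJS_PER_SLAB - 2) / NR_PARTIAL);

		exact += free_objs;
		if (i < MAX_SCAN)
			scanned_free += free_objs;
	}

	/* Extrapolate the capped scan over the whole list length. */
	approx = scanned_free * NR_PARTIAL / MAX_SCAN;

	printf("exact free objects:        %lu\n", exact);
	printf("capped-scan approximation: %lu\n", approx);
	return 0;
}

Because the capped scan only ever sees the dense end of a sorted list,
the error is a systematic bias rather than random noise.
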
Can you run some tests showing the difference between the estimate and
the real count?