Message-ID: <09e66344-4d30-9a67-24b8-14a910709157@suse.cz>
Date: Wed, 6 May 2020 13:56:08 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Roman Gushchin <guro@...com>,
Wen Yang <wenyang@...ux.alibaba.com>
Subject: Re: [PATCH] slub: limit count of partial slabs scanned to gather statistics

On 5/4/20 6:07 PM, Konstantin Khlebnikov wrote:
> To get an exact count of free and used objects, SLUB has to scan the
> list of partial slabs. This may take a long time. The scan holds the
> spinlock and blocks allocations that move partial slabs to per-cpu
> lists and back.
>
> Example found in the wild:
>
> # cat /sys/kernel/slab/dentry/partial
> 14478538 N0=7329569 N1=7148969
> # time cat /sys/kernel/slab/dentry/objects
> 286225471 N0=136967768 N1=149257703
>
> real 0m1.722s
> user 0m0.001s
> sys 0m1.721s
>
> The same problem in SLAB was addressed in commit f728b0a5d72a ("mm, slab:
> faster active and free stats") by adding more kmem cache statistics.
> For SLUB, the same approach would require an atomic op on the fast path
> whenever an object is freed.
In general yeah, but are you sure about this one? AFAICS this is about
pages on the n->partial list, where manipulations happen under
n->list_lock and shouldn't be on any fast path. It should be feasible to
add a counter under the same lock, so it wouldn't even need to be
atomic?
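
Something like this is what I have in mind - a completely untested
sketch, and the partial_objs field name is made up. AFAICS pages get on
and off n->partial through __add_partial()/remove_partial() with
n->list_lock held (modulo early boot), so a plain counter next to
nr_partial should work. page->objects is fixed for a slab's lifetime,
so a sum of totals stays exact; I haven't checked whether the same can
work for free counts, since page->inuse can also change outside the
lock via cmpxchg.

/*
 * Hypothetical: a plain (non-atomic) counter in struct kmem_cache_node,
 * updated only under n->list_lock, next to the existing nr_partial.
 * The partial_objs field is invented for illustration.
 */
static inline void
__add_partial(struct kmem_cache_node *n, struct page *page, int tail)
{
	n->nr_partial++;
	n->partial_objs += page->objects;	/* new: fixed per page */
	if (tail == DEACTIVATE_TO_TAIL)
		list_add_tail(&page->slab_list, &n->partial);
	else
		list_add(&page->slab_list, &n->partial);
}

static inline void remove_partial(struct kmem_cache_node *n,
					struct page *page)
{
	lockdep_assert_held(&n->list_lock);
	list_del(&page->slab_list);
	n->nr_partial--;
	n->partial_objs -= page->objects;	/* new */
}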
> Let's simply limit the count of scanned slabs and print a warning.
> The limit is set via /sys/module/slub/parameters/max_partial_to_count.
> The default is 10000, which should be enough for most sane cases.
>
> Return a linear approximation if the list of partials is longer than
> the limit. Nobody should notice the difference.
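
To illustrate the arithmetic with the dentry numbers from above (the
objects-seen value is invented): if the scan stops after 10000 of the
14478538 partial slabs, the count seen so far gets scaled by
nr_partial / counted. mult_frac() just splits that division so that
x * nr_partial cannot overflow. Standalone userspace sketch:

#include <stdio.h>

int main(void)
{
	unsigned long nr_partial = 14478538;	/* partial slabs, from the example */
	unsigned long counted = 10000;		/* slabs actually scanned */
	unsigned long x = 230000;		/* objects counted so far (invented) */

	/*
	 * Same split as the kernel's mult_frac(x, nr_partial, counted):
	 * divide first so that x * nr_partial cannot overflow.
	 */
	unsigned long estimate = x / counted * nr_partial +
				 x % counted * nr_partial / counted;

	printf("estimated objects: %lu\n", estimate);	/* ~333 million */
	return 0;
}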
>
> Signed-off-by: Konstantin Khlebnikov <khlebnikov@...dex-team.ru>
BTW, there was a different patch in that area proposed recently [1], for
/proc/slabinfo. Christopher argued that we can do that for slabinfo but
leave the /sys stats precise. Guess not, then?
[1]
https://lore.kernel.org/linux-mm/20200222092428.99488-1-wenyang@linux.alibaba.com/
> ---
> mm/slub.c | 15 ++++++++++++++-
> 1 file changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 9bf44955c4f1..86a366f7acb6 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2407,16 +2407,29 @@ static inline unsigned long node_nr_objs(struct kmem_cache_node *n)
> #endif /* CONFIG_SLUB_DEBUG */
>
> #if defined(CONFIG_SLUB_DEBUG) || defined(CONFIG_SYSFS)
> +
> +static unsigned long max_partial_to_count __read_mostly = 10000;
> +module_param(max_partial_to_count, ulong, 0644);
> +
> static unsigned long count_partial(struct kmem_cache_node *n,
> int (*get_count)(struct page *))
> {
> + unsigned long counted = 0;
> unsigned long flags;
> unsigned long x = 0;
> struct page *page;
>
> spin_lock_irqsave(&n->list_lock, flags);
> - list_for_each_entry(page, &n->partial, slab_list)
> + list_for_each_entry(page, &n->partial, slab_list) {
> x += get_count(page);
> +
> + if (++counted > max_partial_to_count) {
> + pr_warn_once("SLUB: too many partial slabs to count all objects, increase max_partial_to_count.\n");
> + /* Approximate total count of objects */
> + x = mult_frac(x, n->nr_partial, counted);
> + break;
> + }
> + }
> spin_unlock_irqrestore(&n->list_lock, flags);
> return x;
> }
>
>
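One more side note on the knob: since it uses module_param(..., 0644),
it should be runtime-writable by root, so a machine where the
approximation is unwanted can raise the limit, e.g. (untested):

# echo 1000000 > /sys/module/slub/parameters/max_partial_to_count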