Message-ID: <alpine.DEB.2.21.2002261827440.8012@www.lameter.com>
Date: Wed, 26 Feb 2020 18:31:28 +0000 (UTC)
From: Christopher Lameter <cl@...ux.com>
To: Roman Gushchin <guro@...com>
cc: Wen Yang <wenyang@...ux.alibaba.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Xunlei Pang <xlpang@...ux.alibaba.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/slub: improve count_partial() for
CONFIG_SLUB_CPU_PARTIAL
On Mon, 24 Feb 2020, Roman Gushchin wrote:
> > I suggest that you simply take the number of partial slabs, multiply it
> > by the number of objects per slab, and use that as the value. Both
> > numbers are readily available via /sys/kernel/slab/<...>/
>
> So maybe something like this?
>
> @@ -5907,7 +5907,9 @@ void get_slabinfo(struct kmem_cache *s, struct slabinfo *sinfo)
> for_each_kmem_cache_node(s, node, n) {
> nr_slabs += node_nr_slabs(n);
> nr_objs += node_nr_objs(n);
> +#ifndef CONFIG_SLUB_CPU_PARTIAL
> nr_free += count_partial(n, count_free);
> +#endif
> }
Why would not having cpu partials screw up the counting of objects in
partial slabs?
You don't need kernel mods for this. The numbers are already exposed in
/sys/kernel/slab/xxx.
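
As an illustration only, here is a minimal userspace sketch of that
estimate: read the partial slab count and the objects-per-slab value from
sysfs and multiply them. It assumes a SLUB kernel exposing
/sys/kernel/slab/<cache>/partial (whose first field is the total partial
slab count) and objs_per_slab; the "dentry" cache name is just an example.

/*
 * Rough estimate of free objects sitting on partial slabs, computed as
 * (number of partial slabs) * (objects per slab), read from sysfs.
 */
#include <stdio.h>
#include <stdlib.h>

static unsigned long read_first_ulong(const char *path)
{
	unsigned long val = 0;
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);
		exit(1);
	}
	if (fscanf(f, "%lu", &val) != 1) {
		fprintf(stderr, "unexpected format in %s\n", path);
		exit(1);
	}
	fclose(f);
	return val;
}

int main(void)
{
	unsigned long partial =
		read_first_ulong("/sys/kernel/slab/dentry/partial");
	unsigned long objs_per_slab =
		read_first_ulong("/sys/kernel/slab/dentry/objs_per_slab");

	/* Upper bound: every object on a partial slab could be free. */
	printf("estimated free objects <= %lu\n", partial * objs_per_slab);
	return 0;
}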