Message-ID: <alpine.DEB.2.23.453.2008111435300.3428139@chino.kir.corp.google.com>
Date: Tue, 11 Aug 2020 14:40:14 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Abel Wu <wuyun.wu@...wei.com>
cc: Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
hewenliang4@...wei.com, hushiyuan@...wei.com,
"open list:SLAB ALLOCATOR" <linux-mm@...ck.org>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm/slub: fix missing ALLOC_SLOWPATH stat when bulk alloc

On Tue, 11 Aug 2020, wuyun.wu@...wei.com wrote:
> From: Abel Wu <wuyun.wu@...wei.com>
>
> The ALLOC_SLOWPATH statistic is currently missing for bulk allocations:
> kmem_cache_alloc_bulk() calls ___slab_alloc() directly, bypassing
> slab_alloc_node() where the stat was accounted. Fix it by accounting
> the stat in the slow path itself, so that both entry points are counted.
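>
> To verify, a kernel built with CONFIG_SLUB_STATS exposes the counter
> via sysfs. A minimal userspace sketch for reading it (the cache name
> "kmalloc-64" is only an example):
>
>   #include <stdio.h>
>
>   int main(void)
>   {
>   	char buf[256];
>   	FILE *f = fopen("/sys/kernel/slab/kmalloc-64/alloc_slowpath", "r");
>
>   	if (!f) {
>   		perror("fopen");	/* likely CONFIG_SLUB_STATS=n */
>   		return 1;
>   	}
>   	if (fgets(buf, sizeof(buf), f))
>   		/* prints the total, then per-cpu "C<n>=..." entries */
>   		printf("alloc_slowpath: %s", buf);
>   	fclose(f);
>   	return 0;
>   }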
>
> Signed-off-by: Abel Wu <wuyun.wu@...wei.com>
> ---
>  mm/slub.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index df93a5a0e9a4..5d89e4064f83 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2600,6 +2600,8 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>  	void *freelist;
>  	struct page *page;
>
> +	stat(s, ALLOC_SLOWPATH);
> +
>  	page = c->page;
>  	if (!page) {
>  		/*
> @@ -2788,7 +2790,6 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
>  	page = c->page;
>  	if (unlikely(!object || !node_match(page, node))) {
>  		object = __slab_alloc(s, gfpflags, node, addr, c);
> -		stat(s, ALLOC_SLOWPATH);
>  	} else {
>  		void *next_object = get_freepointer_safe(s, object);
>
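For anyone wondering about the cost: stat() compiles to a no-op unless
CONFIG_SLUB_STATS is enabled, and ___slab_alloc() is the slow path
anyway, so moving the accounting there is free in the common config.
Roughly (paraphrased sketch of the helper in mm/slub.c, not verbatim):

  static inline void stat(struct kmem_cache *s, enum stat_item si)
  {
  #ifdef CONFIG_SLUB_STATS
  	/* racy per-cpu increment; acceptable for statistics */
  	raw_cpu_inc(s->cpu_slab->stat[si]);
  #endif
  }
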
Acked-by: David Rientjes <rientjes@...gle.com>
> --
> 2.28.0.windows.1
Lol :)