Message-ID: <20200811022427.1363-1-wuyun.wu@huawei.com>
Date: Tue, 11 Aug 2020 10:24:24 +0800
From: <wuyun.wu@...wei.com>
To: Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
"David Rientjes" <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
"Andrew Morton" <akpm@...ux-foundation.org>
CC: <hewenliang4@...wei.com>, <hushiyuan@...wei.com>,
Abel Wu <wuyun.wu@...wei.com>,
"open list:SLAB ALLOCATOR" <linux-mm@...ck.org>,
"open list" <linux-kernel@...r.kernel.org>
Subject: [PATCH] mm/slub: fix missing ALLOC_SLOWPATH stat in bulk alloc
From: Abel Wu <wuyun.wu@...wei.com>

The ALLOC_SLOWPATH statistic is currently missed on bulk allocation:
kmem_cache_alloc_bulk() refills its cpu freelist through
___slab_alloc() directly, bypassing the stat(s, ALLOC_SLOWPATH) call
in slab_alloc_node(). Fix it by accounting the event in the slow path
itself, ___slab_alloc(), which every slow-path caller reaches.
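A simplified sketch of the two call paths (function bodies heavily
abbreviated from mm/slub.c; only the lines relevant to the stat are
shown):

	/* Regular allocation: the only place the event was counted. */
	static __always_inline void *slab_alloc_node(struct kmem_cache *s, ...)
	{
		...
		object = __slab_alloc(s, gfpflags, node, addr, c);
		stat(s, ALLOC_SLOWPATH);	/* counted here only */
		...
	}

	/* Bulk allocation: enters the slow path directly, so the
	 * event above was never counted for this caller. */
	int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags,
				  size_t size, void **p)
	{
		...
		p[i] = ___slab_alloc(s, flags, NUMA_NO_NODE, _RET_IP_, c);
		...
	}

Moving the stat() into ___slab_alloc() covers both paths with a
single call site.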
Signed-off-by: Abel Wu <wuyun.wu@...wei.com>
---
mm/slub.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index df93a5a0e9a4..5d89e4064f83 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2600,6 +2600,8 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	void *freelist;
 	struct page *page;
 
+	stat(s, ALLOC_SLOWPATH);
+
 	page = c->page;
 	if (!page) {
 		/*
@@ -2788,7 +2790,6 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
 	page = c->page;
 	if (unlikely(!object || !node_match(page, node))) {
 		object = __slab_alloc(s, gfpflags, node, addr, c);
-		stat(s, ALLOC_SLOWPATH);
 	} else {
 		void *next_object = get_freepointer_safe(s, object);
 
--
2.28.0.windows.1