Message-ID: <867f6da4-6d38-6435-3fbb-a2a3744029f1@huawei.com>
Date: Sat, 30 Oct 2021 18:11:52 +0800
From: Yunfeng Ye <yeyunfeng@...wei.com>
To: <cl@...ux.com>, <penberg@...nel.org>, <rientjes@...gle.com>,
<iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>, <vbabka@...e.cz>,
<linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>
CC: <wuxu.wu@...wei.com>, Hewenliang <hewenliang4@...wei.com>
Subject: [PATCH] mm, slub: place the trace before freeing memory in
kmem_cache_free()
After the memory is freed, it may be reallocated by another CPU before the
free event is recorded by trace, so the recorded timing sequence of the
memory events is inaccurate.
For example, we expect the following timing sequence:

    CPU 0                    CPU 1
    (1) alloc xxxxxx
    (2) free  xxxxxx
                             (3) alloc xxxxxx
                             (4) free  xxxxxx
However, the following timing sequence may occur:

    CPU 0                    CPU 1
    (1) alloc xxxxxx
                             (2) alloc xxxxxx
    (3) free  xxxxxx
                             (4) free  xxxxxx
So place the trace before freeing the memory in kmem_cache_free().
Signed-off-by: Yunfeng Ye <yeyunfeng@...wei.com>
---
mm/slub.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index 432145d7b4ec..427e62034c3f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3526,8 +3526,8 @@ void kmem_cache_free(struct kmem_cache *s, void *x)
s = cache_from_obj(s, x);
if (!s)
return;
- slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
trace_kmem_cache_free(_RET_IP_, x, s->name);
+ slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
}
EXPORT_SYMBOL(kmem_cache_free);
--
2.27.0