Message-Id: <20190912002929.78873-3-yuzhao@google.com>
Date: Wed, 11 Sep 2019 18:29:29 -0600
From: Yu Zhao <yuzhao@...gle.com>
To: Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
"Kirill A . Shutemov" <kirill@...temov.name>,
Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Yu Zhao <yuzhao@...gle.com>
Subject: [PATCH 3/3] mm: lock slub page when listing objects

Though I have no idea what the concrete side effect of such a race
would be, the slab debug code apparently wants to prevent the free
list from changing while it examines the objects on a slab, so take
slab_lock() around the walk in process_slab() as well.
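
Not part of the patch itself: below is a minimal user-space sketch of
the kind of race this change guards against, assuming a concurrent
free rewires the free list while process_slab() is walking the object
map. Every identifier in it (toy_slab, toy_free, toy_process_slab,
NOBJS) is a hypothetical stand-in; slab_lock()/slab_unlock(),
get_map() and for_each_object() in the comments refer to the real
kernel helpers touched by the hunk.

/*
 * Hypothetical user-space analogue of the locking added below.
 * A debug walker snapshots which objects are free, then inspects
 * the allocated ones; taking the same lock as the free path keeps
 * that snapshot consistent with the per-object data it reads.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define NOBJS 8

struct toy_slab {
	pthread_mutex_t lock;	/* plays the role of slab_lock() */
	bool free_map[NOBJS];	/* true = object is on the free list */
	int track[NOBJS];	/* per-object tracking data */
};

/* Analogue of process_slab(): list allocated objects under the lock. */
static void toy_process_slab(struct toy_slab *s)
{
	bool map[NOBJS];
	int i;

	pthread_mutex_lock(&s->lock);		/* slab_lock(page) */

	for (i = 0; i < NOBJS; i++)		/* get_map() snapshot */
		map[i] = s->free_map[i];

	for (i = 0; i < NOBJS; i++)		/* for_each_object() */
		if (!map[i])			/* skip free objects */
			printf("obj %d track %d\n", i, s->track[i]);

	pthread_mutex_unlock(&s->lock);		/* slab_unlock(page) */
}

/* Analogue of a concurrent free: mutates the free list under the lock. */
static void toy_free(struct toy_slab *s, int i)
{
	pthread_mutex_lock(&s->lock);
	s->free_map[i] = true;
	s->track[i] = 0;	/* without the lock, the walker above could
				 * still report object i with stale data */
	pthread_mutex_unlock(&s->lock);
}

int main(void)
{
	struct toy_slab s = { .lock = PTHREAD_MUTEX_INITIALIZER };

	s.track[3] = 42;	/* pretend object 3 is allocated */
	toy_free(&s, 7);	/* object 7 goes on the free list */
	toy_process_slab(&s);	/* reports allocated objects only */
	return 0;
}

The point mirrors the hunk: the snapshot of the map and the walk over
the objects happen under the same lock the free path takes, so the
walker never sees the free list mutate mid-walk.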
Signed-off-by: Yu Zhao <yuzhao@...gle.com>
---
mm/slub.c | 4 ++++
1 file changed, 4 insertions(+)
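
A note for reviewers that reflects my reading of mm/slub.c around this
kernel version, not something the patch itself states: slab_lock()
here is the per-page bit spinlock, roughly

static __always_inline void slab_lock(struct page *page)
{
	VM_BUG_ON_PAGE(PageTail(page), page);
	bit_spin_lock(PG_locked, &page->flags);
}

static __always_inline void slab_unlock(struct page *page)
{
	VM_BUG_ON_PAGE(PageTail(page), page);
	__bit_spin_unlock(PG_locked, &page->flags);
}

so, if that reading is right, the cost added to process_slab() is one
bit spinlock acquisition per slab page being listed.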
diff --git a/mm/slub.c b/mm/slub.c
index f28072c9f2ce..2734a092bbff 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4594,10 +4594,14 @@ static void process_slab(struct loc_track *t, struct kmem_cache *s,
 	void *addr = page_address(page);
 	void *p;
 
+	slab_lock(page);
+
 	get_map(s, page, map);
 	for_each_object(p, s, addr, page->objects)
 		if (!test_bit(slab_index(p, s, addr), map))
 			add_location(t, s, get_track(s, p, alloc));
+
+	slab_unlock(page);
 }
 
 static int list_locations(struct kmem_cache *s, char *buf,
--
2.23.0.162.g0b9fbb3734-goog