Message-Id: <20211008133602.4963-1-42.hyeyoo@gmail.com>
Date: Fri, 8 Oct 2021 13:36:02 +0000
From: Hyeonggon Yoo <42.hyeyoo@...il.com>
To: linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org, Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
Hyeonggon Yoo <42.hyeyoo@...il.com>
Subject: [PATCH] mm, slub: Use prefetchw instead of prefetch
It is certain that an object will be not only read but also written
after allocation.

Use prefetchw instead of prefetch. On architectures that support it,
such as x86, prefetchw fetches the cache line with intent to write, so
copies of the line held in other processors' caches are invalidated up
front rather than when the object is actually written.
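For illustration only (not part of the diff below): the helper name and
the raw offset argument here are made up, but they show the intent of
the hint. When an architecture provides no prefetchw() of its own,
include/linux/prefetch.h falls back to a compiler write-prefetch
builtin, so the change is harmless there.

#include <linux/prefetch.h>

/*
 * The object handed out by the allocator is about to be written
 * (free pointer wipe, constructor, caller data), so request the cache
 * line in a writable/exclusive state instead of a shared one.
 */
static inline void warm_object_for_write(void *object, unsigned int offset)
{
	prefetchw(object + offset);	/* write hint */
}

A plain prefetch(object + offset) would request the line only for
reading, and the later store would still have to wait for other CPUs'
copies of the line to be invalidated.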
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@...il.com>
---
 mm/slub.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 3d2025f7163b..2aca7523165e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -352,9 +352,9 @@ static inline void *get_freepointer(struct kmem_cache *s, void *object)
 	return freelist_dereference(s, object + s->offset);
 }
 
-static void prefetch_freepointer(const struct kmem_cache *s, void *object)
+static void prefetchw_freepointer(const struct kmem_cache *s, void *object)
 {
-	prefetch(object + s->offset);
+	prefetchw(object + s->offset);
 }
 
 static inline void *get_freepointer_safe(struct kmem_cache *s, void *object)
@@ -3195,10 +3195,9 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
 			note_cmpxchg_failure("slab_alloc", s, tid);
 			goto redo;
 		}
-		prefetch_freepointer(s, next_object);
+		prefetchw_freepointer(s, next_object);
 		stat(s, ALLOC_FASTPATH);
 	}
-
 	maybe_wipe_obj_freeptr(s, object);
 
 	init = slab_want_init_on_alloc(gfpflags, s);
--
2.27.0