Message-Id: <20221019224659.2499511-5-paulmck@kernel.org>
Date: Wed, 19 Oct 2022 15:46:56 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: rcu@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, kernel-team@...com,
rostedt@...dmis.org, "Paul E. McKenney" <paulmck@...nel.org>,
Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Hyeonggon Yoo <42.hyeyoo@...il.com>, linux-mm@...ck.org
Subject: [PATCH rcu 5/8] slab: Explain why SLAB_DESTROY_BY_RCU reference is required before locking
It is not obvious to the casual user why it is absolutely necessary to
acquire a reference to a SLAB_DESTROY_BY_RCU structure before acquiring
a lock in that structure. Therefore, add a comment explaining this point.
Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
Cc: Christoph Lameter <cl@...ux.com>
Cc: Pekka Enberg <penberg@...nel.org>
Cc: David Rientjes <rientjes@...gle.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@....com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Vlastimil Babka <vbabka@...e.cz>
Cc: Roman Gushchin <roman.gushchin@...ux.dev>
Cc: Hyeonggon Yoo <42.hyeyoo@...il.com>
Cc: <linux-mm@...ck.org>
---
include/linux/slab.h | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 90877fcde70bd..446303e385265 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -76,6 +76,12 @@
* rcu_read_lock before reading the address, then rcu_read_unlock after
* taking the spinlock within the structure expected at that address.
*
+ * Note that it is not possible to acquire a lock within a structure
+ * allocated with SLAB_DESTROY_BY_RCU without first acquiring a reference
+ * as described above. The reason is that SLAB_DESTROY_BY_RCU pages are
+ * not zeroed before being given to the slab, which means that any locks
+ * must be initialized after each and every kmem_cache_alloc().
+ *
* Note that SLAB_TYPESAFE_BY_RCU was originally named SLAB_DESTROY_BY_RCU.
*/
/* Defer freeing slabs to RCU */
--
2.31.1.189.g2e36527f23