Message-ID: <20250922221553.47802-3-simon.schippers@tu-dortmund.de>
Date: Tue, 23 Sep 2025 00:15:47 +0200
From: Simon Schippers <simon.schippers@...dortmund.de>
To: willemdebruijn.kernel@...il.com, jasowang@...hat.com, mst@...hat.com,
eperezma@...hat.com, stephen@...workplumber.org, leiyang@...hat.com,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
virtualization@...ts.linux.dev, kvm@...r.kernel.org
Cc: Simon Schippers <simon.schippers@...dortmund.de>,
Tim Gebauer <tim.gebauer@...dortmund.de>
Subject: [PATCH net-next v5 2/8] Move the invalidation decision out of __ptr_ring_discard_one
The new helper __ptr_ring_will_invalidate lets a caller act before entries
of the ptr_ring are invalidated by __ptr_ring_discard_one.
__ptr_ring_consume calls the new helper and passes its result to
__ptr_ring_discard_one, preserving the pre-patch behavior.
Co-developed-by: Tim Gebauer <tim.gebauer@...dortmund.de>
Signed-off-by: Tim Gebauer <tim.gebauer@...dortmund.de>
Signed-off-by: Simon Schippers <simon.schippers@...dortmund.de>
---
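Note for reviewers, a minimal usage sketch (not part of this patch, the
wrapper and callback names are purely illustrative): with the check split
out, a consumer holding the consumer lock can react before the batch of
entries is zeroed, e.g. notify a stalled producer while the consumed
slots are still valid.

#include <linux/ptr_ring.h>

/* Hypothetical wrapper; must be called with the consumer lock held,
 * like the other __ptr_ring_* helpers.
 */
static inline void *__ptr_ring_consume_notify(struct ptr_ring *r,
					      void (*pre_invalidate)(struct ptr_ring *r))
{
	bool invalidate;
	void *ptr;

	ptr = __ptr_ring_peek(r);
	if (ptr) {
		invalidate = __ptr_ring_will_invalidate(r);
		if (invalidate && pre_invalidate)
			/* Entries are not zeroed yet at this point. */
			pre_invalidate(r);
		__ptr_ring_discard_one(r, invalidate);
	}

	return ptr;
}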
include/linux/ptr_ring.h | 32 ++++++++++++++++++++++----------
1 file changed, 22 insertions(+), 10 deletions(-)
diff --git a/include/linux/ptr_ring.h b/include/linux/ptr_ring.h
index c45e95071d7e..78fb3efedc7a 100644
--- a/include/linux/ptr_ring.h
+++ b/include/linux/ptr_ring.h
@@ -266,7 +266,22 @@ static inline bool ptr_ring_empty_bh(struct ptr_ring *r)
}
/* Must only be called after __ptr_ring_peek returned !NULL */
-static inline void __ptr_ring_discard_one(struct ptr_ring *r)
+static inline bool __ptr_ring_will_invalidate(struct ptr_ring *r)
+{
+ /* Once we have processed enough entries invalidate them in
+ * the ring all at once so producer can reuse their space in the ring.
+ * We also do this when we reach end of the ring - not mandatory
+ * but helps keep the implementation simple.
+ */
+ int consumer_head = r->consumer_head + 1;
+
+ return consumer_head - r->consumer_tail >= r->batch ||
+ consumer_head >= r->size;
+}
+
+/* Must only be called after __ptr_ring_peek returned !NULL */
+static inline void __ptr_ring_discard_one(struct ptr_ring *r,
+ bool invalidate)
{
/* Fundamentally, what we want to do is update consumer
* index and zero out the entry so producer can reuse it.
@@ -286,13 +301,7 @@ static inline void __ptr_ring_discard_one(struct ptr_ring *r)
int consumer_head = r->consumer_head;
int head = consumer_head++;
- /* Once we have processed enough entries invalidate them in
- * the ring all at once so producer can reuse their space in the ring.
- * We also do this when we reach end of the ring - not mandatory
- * but helps keep the implementation simple.
- */
- if (unlikely(consumer_head - r->consumer_tail >= r->batch ||
- consumer_head >= r->size)) {
+ if (unlikely(invalidate)) {
/* Zero out entries in the reverse order: this way we touch the
* cache line that producer might currently be reading the last;
* producer won't make progress and touch other cache lines
@@ -312,6 +321,7 @@ static inline void __ptr_ring_discard_one(struct ptr_ring *r)
static inline void *__ptr_ring_consume(struct ptr_ring *r)
{
+ bool invalidate;
void *ptr;
/* The READ_ONCE in __ptr_ring_peek guarantees that anyone
@@ -319,8 +329,10 @@ static inline void *__ptr_ring_consume(struct ptr_ring *r)
* with smp_wmb in __ptr_ring_produce.
*/
ptr = __ptr_ring_peek(r);
- if (ptr)
- __ptr_ring_discard_one(r);
+ if (ptr) {
+ invalidate = __ptr_ring_will_invalidate(r);
+ __ptr_ring_discard_one(r, invalidate);
+ }
return ptr;
}
--
2.43.0