Message-ID: <160753912635.3364.4993279338737835663.tip-bot2@tip-bot2>
Date: Wed, 09 Dec 2020 18:38:46 -0000
From: "tip-bot2 for Ahmed S. Darwish" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: "Ahmed S. Darwish" <a.darwish@...utronix.de>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: locking/core] seqlock: kernel-doc: Specify when preemption is
 automatically altered

The following commit has been merged into the locking/core branch of tip:

Commit-ID: cb262935a166bdef0ccfe6e2adffa00c0f2d038a
Gitweb: https://git.kernel.org/tip/cb262935a166bdef0ccfe6e2adffa00c0f2d038a
Author: Ahmed S. Darwish <a.darwish@...utronix.de>
AuthorDate: Sun, 06 Dec 2020 17:21:43 +01:00
Committer: Peter Zijlstra <peterz@...radead.org>
CommitterDate: Wed, 09 Dec 2020 17:08:49 +01:00

seqlock: kernel-doc: Specify when preemption is automatically altered

The kernel-doc annotations for the sequence counter write side functions
are incomplete: they do not specify when preemption is automatically
disabled and re-enabled.

This has confused a number of call-site developers. Fix it.

Signed-off-by: Ahmed S. Darwish <a.darwish@...utronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Link: https://lkml.kernel.org/r/CAHk-=wikhGExmprXgaW+MVXG1zsGpztBbVwOb23vetk41EtTBQ@mail.gmail.com
---
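Not part of the patch, an illustrative usage sketch: the hypothetical
"struct example_cfg" below associates its sequence counter with a
(preemptible) mutex. For such a seqcount_mutex_t, write_seqcount_begin()
disables preemption automatically and write_seqcount_end() re-enables it,
which is what the updated kernel-doc below spells out.

#include <linux/mutex.h>
#include <linux/seqlock.h>
#include <linux/types.h>

/* All names below are illustrative only. */
struct example_cfg {
	struct mutex		lock;	/* write serialization lock */
	seqcount_mutex_t	seq;	/* associated with ->lock   */
	u64			a;
	u64			b;
};

static void example_cfg_init(struct example_cfg *cfg)
{
	mutex_init(&cfg->lock);
	seqcount_mutex_init(&cfg->seq, &cfg->lock);
}

static void example_cfg_update(struct example_cfg *cfg, u64 a, u64 b)
{
	mutex_lock(&cfg->lock);

	/*
	 * The associated write serialization lock is a preemptible
	 * mutex, so write_seqcount_begin() disables preemption here ...
	 */
	write_seqcount_begin(&cfg->seq);
	cfg->a = a;
	cfg->b = b;
	/* ... and write_seqcount_end() re-enables it here. */
	write_seqcount_end(&cfg->seq);

	mutex_unlock(&cfg->lock);
}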
 include/linux/seqlock.h | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index 235cbc6..2f7bb92 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -456,6 +456,8 @@ static inline int do_read_seqcount_retry(const seqcount_t *s, unsigned start)
/**
* raw_write_seqcount_begin() - start a seqcount_t write section w/o lockdep
* @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
+ *
+ * Context: check write_seqcount_begin()
*/
#define raw_write_seqcount_begin(s) \
do { \
@@ -475,6 +477,8 @@ static inline void do_raw_write_seqcount_begin(seqcount_t *s)
/**
* raw_write_seqcount_end() - end a seqcount_t write section w/o lockdep
* @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
+ *
+ * Context: check write_seqcount_end()
*/
#define raw_write_seqcount_end(s) \
do { \
@@ -498,6 +502,7 @@ static inline void do_raw_write_seqcount_end(seqcount_t *s)
* @subclass: lockdep nesting level
*
* See Documentation/locking/lockdep-design.rst
+ * Context: check write_seqcount_begin()
*/
#define write_seqcount_begin_nested(s, subclass) \
do { \
@@ -519,11 +524,10 @@ static inline void do_write_seqcount_begin_nested(seqcount_t *s, int subclass)
* write_seqcount_begin() - start a seqcount_t write side critical section
* @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
*
- * write_seqcount_begin opens a write side critical section of the given
- * seqcount_t.
- *
- * Context: seqcount_t write side critical sections must be serialized and
- * non-preemptible. If readers can be invoked from hardirq or softirq
+ * Context: sequence counter write side sections must be serialized and
+ * non-preemptible. Preemption will be automatically disabled if and
+ * only if the seqcount write serialization lock is associated, and
+ * preemptible. If readers can be invoked from hardirq or softirq
* context, interrupts or bottom halves must be respectively disabled.
*/
#define write_seqcount_begin(s) \
@@ -545,7 +549,8 @@ static inline void do_write_seqcount_begin(seqcount_t *s)
* write_seqcount_end() - end a seqcount_t write side critical section
* @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
*
- * The write section must've been opened with write_seqcount_begin().
+ * Context: Preemption will be automatically re-enabled if and only if
+ * the seqcount write serialization lock is associated, and preemptible.
*/
#define write_seqcount_end(s) \
do { \
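For contrast, another illustrative sketch (again not part of the patch),
this time with a seqcount_spinlock_t: on a !PREEMPT_RT kernel the
associated spinlock is not preemptible, so write_seqcount_begin() and
write_seqcount_end() do not touch the preemption count; holding the
spinlock already keeps the write section non-preemptible. The lockless
read side looks the same regardless of the associated lock type.

#include <linux/spinlock.h>
#include <linux/seqlock.h>
#include <linux/types.h>

/* All names below are illustrative only. */
struct example_stats {
	spinlock_t		lock;	/* write serialization lock */
	seqcount_spinlock_t	seq;	/* associated with ->lock   */
	u64			packets;
	u64			bytes;
};

static void example_stats_init(struct example_stats *st)
{
	spin_lock_init(&st->lock);
	seqcount_spinlock_init(&st->seq, &st->lock);
}

static void example_stats_add(struct example_stats *st, u64 bytes)
{
	spin_lock(&st->lock);	/* already non-preemptible (!PREEMPT_RT) */

	/* No extra preempt_disable()/preempt_enable() pair is needed. */
	write_seqcount_begin(&st->seq);
	st->packets++;
	st->bytes += bytes;
	write_seqcount_end(&st->seq);

	spin_unlock(&st->lock);
}

static u64 example_stats_bytes(struct example_stats *st)
{
	unsigned int seq;
	u64 bytes;

	do {
		seq = read_seqcount_begin(&st->seq);
		bytes = st->bytes;
	} while (read_seqcount_retry(&st->seq, seq));

	return bytes;
}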