Message-ID: <20241029114937.GT14555@noisy.programming.kicks-ass.net>
Date: Tue, 29 Oct 2024 12:49:37 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Marco Elver <elver@...gle.com>
Cc: Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Waiman Long <longman@...hat.com>, Boqun Feng <boqun.feng@...il.com>,
"Paul E. McKenney" <paulmck@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Mark Rutland <mark.rutland@....com>,
Dmitry Vyukov <dvyukov@...gle.com>, kasan-dev@...glegroups.com,
linux-kernel@...r.kernel.org,
Alexander Potapenko <glider@...gle.com>
Subject: Re: [PATCH] kcsan, seqlock: Support seqcount_latch_t
On Tue, Oct 29, 2024 at 09:36:29AM +0100, Marco Elver wrote:
> Reviewing current raw_write_seqcount_latch() callers, the most common
> patterns involve only a few memory accesses, either a single plain C
> assignment or a memcpy;
Then I assume you've encountered latch_tree_{insert,erase}() in your
travels, right?
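
For reference, the insert side looks roughly like this (paraphrased from
include/linux/rbtree_latch.h from memory, so take the exact shape with a
grain of salt):

	static __always_inline void
	latch_tree_insert(struct latch_tree_node *node,
			  struct latch_tree_root *root,
			  const struct latch_tree_ops *ops)
	{
		raw_write_seqcount_latch(&root->seq);
		__lt_insert(node, root, 0, ops->less);
		raw_write_seqcount_latch(&root->seq);
		__lt_insert(node, root, 1, ops->less);
	}

Each __lt_insert() is a full RB-tree insertion into one of the two tree
copies, i.e. easily more than 8 memory accesses per latch write.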
Also, I note that update_clock_read_data() seems to do things
'backwards' and will completely elide your proposed annotation.
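
To spell that out (again roughly, from kernel/time/sched_clock.c):

	static void update_clock_read_data(struct clock_read_data *rd)
	{
		/* update the backup (odd) copy with the new data */
		cd.read_data[1] = *rd;

		/* steer readers towards the odd copy */
		raw_write_seqcount_latch(&cd.seq);

		/* now it's safe to update the normal (even) copy */
		cd.read_data[0] = *rd;

		/* switch readers back to the even copy */
		raw_write_seqcount_latch(&cd.seq);
	}

The first copy happens *before* any latch increment, so it falls outside
the kcsan_atomic_next(8) window you add inside raw_write_seqcount_latch().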
> therefore, the value of 8 memory accesses after
> raw_write_seqcount_latch() is chosen to (a) avoid most false positives,
> and (b) avoid an excessive number of false negatives (due to inadvertently
> declaring most accesses in the proximity of update_fast_timekeeper() as
> "atomic").
The above latch'ed RB-trees can certainly exceed this magical number 8.
> Reported-by: Alexander Potapenko <glider@...gle.com>
> Tested-by: Alexander Potapenko <glider@...gle.com>
> Fixes: 88ecd153be95 ("seqlock, kcsan: Add annotations for KCSAN")
> Signed-off-by: Marco Elver <elver@...gle.com>
> ---
> include/linux/seqlock.h | 9 +++++++++
> 1 file changed, 9 insertions(+)
>
> diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
> index fffeb754880f..e24cf144276e 100644
> --- a/include/linux/seqlock.h
> +++ b/include/linux/seqlock.h
> @@ -614,6 +614,7 @@ typedef struct {
> */
> static __always_inline unsigned raw_read_seqcount_latch(const seqcount_latch_t *s)
> {
> + kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX);
> /*
> * Pairs with the first smp_wmb() in raw_write_seqcount_latch().
> * Due to the dependent load, a full smp_rmb() is not needed.
> @@ -631,6 +632,7 @@ static __always_inline unsigned raw_read_seqcount_latch(const seqcount_latch_t *
> static __always_inline int
> raw_read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start)
> {
> + kcsan_atomic_next(0);
> smp_rmb();
> return unlikely(READ_ONCE(s->seqcount.sequence) != start);
> }
> @@ -721,6 +723,13 @@ static inline void raw_write_seqcount_latch(seqcount_latch_t *s)
> smp_wmb(); /* prior stores before incrementing "sequence" */
> s->seqcount.sequence++;
> smp_wmb(); /* increment "sequence" before following stores */
> +
> + /*
> + * Latch writers do not have a well-defined critical section, but to
> + * avoid most false positives, at the cost of false negatives, assume
> + * the next few memory accesses belong to the latch writer.
> + */
> + kcsan_atomic_next(8);
> }
Given there are so very few latch users, would it make sense to
introduce a raw_write_seqcount_latch_end() helper that does
kcsan_atomic_next(0)? -- or something along those lines? Then you won't
have to assume such a small number.
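
Something like this, say (name and placement purely illustrative):

	static __always_inline void raw_write_seqcount_latch_end(seqcount_latch_t *s)
	{
		/* latch writer is done; stop treating subsequent accesses as atomic */
		kcsan_atomic_next(0);
	}

and have the (few) latch write sides call it once they've finished
updating both copies.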