Message-ID: <20241105091342.GA9767@noisy.programming.kicks-ass.net>
Date: Tue, 5 Nov 2024 10:13:42 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Marco Elver <elver@...gle.com>
Cc: Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Waiman Long <longman@...hat.com>, Boqun Feng <boqun.feng@...il.com>,
"Paul E. McKenney" <paulmck@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Mark Rutland <mark.rutland@....com>,
Dmitry Vyukov <dvyukov@...gle.com>, kasan-dev@...glegroups.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 5/5] kcsan, seqlock: Fix incorrect assumption in
read_seqbegin()
On Mon, Nov 04, 2024 at 04:43:09PM +0100, Marco Elver wrote:
> During testing of the preceding changes, I noticed that in some cases,
> current->kcsan_ctx.in_flat_atomic remained true until task exit. This is
> obviously wrong, because _all_ accesses for the given task will be
> treated as atomic, resulting in false negatives, i.e. missed data races.
>
> Debugging led to fs/dcache.c, where we can see this usage of seqlock:
>
> struct dentry *d_lookup(const struct dentry *parent, const struct qstr *name)
> {
> struct dentry *dentry;
> unsigned seq;
>
> do {
> seq = read_seqbegin(&rename_lock);
> dentry = __d_lookup(parent, name);
> if (dentry)
> break;
> } while (read_seqretry(&rename_lock, seq));
> [...]
>
> As can be seen, read_seqretry() is never called if dentry != NULL;
> consequently, current->kcsan_ctx.in_flat_atomic will never be reset to
> false by read_seqretry().
>
> Give up on the wrong assumption of "assume closing read_seqretry()", and
> rely on the already-present annotations in read_seqcount_begin/retry().
>
> Fixes: 88ecd153be95 ("seqlock, kcsan: Add annotations for KCSAN")
> Signed-off-by: Marco Elver <elver@...gle.com>
> ---
> v2:
> * New patch.
> ---
> include/linux/seqlock.h | 12 +-----------
> 1 file changed, 1 insertion(+), 11 deletions(-)
>
> diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
> index 45eee0e5dca0..5298765d6ca4 100644
> --- a/include/linux/seqlock.h
> +++ b/include/linux/seqlock.h
> @@ -810,11 +810,7 @@ static __always_inline void write_seqcount_latch_end(seqcount_latch_t *s)
> */
> static inline unsigned read_seqbegin(const seqlock_t *sl)
> {
> - unsigned ret = read_seqcount_begin(&sl->seqcount);
> -
> - kcsan_atomic_next(0); /* non-raw usage, assume closing read_seqretry() */
> - kcsan_flat_atomic_begin();
> - return ret;
> + return read_seqcount_begin(&sl->seqcount);
> }
>
> /**
> @@ -830,12 +826,6 @@ static inline unsigned read_seqbegin(const seqlock_t *sl)
> */
> static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
> {
> - /*
> - * Assume not nested: read_seqretry() may be called multiple times when
> - * completing read critical section.
> - */
> - kcsan_flat_atomic_end();
> -
> return read_seqcount_retry(&sl->seqcount, start);
> }
OK, so this takes us back to kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX)
and kcsan_atomic_next(0), which I suppose is safe, except that it doesn't
nest properly.
Anyway, these all look really nice, let me go queue them up.
Thanks!