Message-ID: <CANpmjNOyE=ZxyMEyEf6i7TX-jEvhiJN5ASFY0FTWRF3azDAB-Q@mail.gmail.com>
Date: Tue, 5 Nov 2024 10:50:50 +0100
From: Marco Elver <elver@...gle.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>, Waiman Long <longman@...hat.com>,
Boqun Feng <boqun.feng@...il.com>, "Paul E. McKenney" <paulmck@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>, Mark Rutland <mark.rutland@....com>,
Dmitry Vyukov <dvyukov@...gle.com>, kasan-dev@...glegroups.com,
linux-kernel@...r.kernel.org, Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH v2 5/5] kcsan, seqlock: Fix incorrect assumption in read_seqbegin()
On Tue, 5 Nov 2024 at 10:34, Peter Zijlstra <peterz@...radead.org> wrote:
>
> On Mon, Nov 04, 2024 at 04:43:09PM +0100, Marco Elver wrote:
> > During testing of the preceding changes, I noticed that in some cases,
> > current->kcsan_ctx.in_flat_atomic remained true until task exit. This is
> > obviously wrong, because _all_ accesses for the given task will be
> > treated as atomic, resulting in false negatives i.e. missed data races.
> >
> > Debugging led to fs/dcache.c, where we can see this usage of seqlock:
> >
> > struct dentry *d_lookup(const struct dentry *parent, const struct qstr *name)
> > {
> > 	struct dentry *dentry;
> > 	unsigned seq;
> >
> > 	do {
> > 		seq = read_seqbegin(&rename_lock);
> > 		dentry = __d_lookup(parent, name);
> > 		if (dentry)
> > 			break;
> > 	} while (read_seqretry(&rename_lock, seq));
> > [...]
>
>
> How about something like this completely untested hack?
>
>
> 	struct dentry *dentry;
>
> 	read_seqcount_scope (&rename_lock) {
> 		dentry = __d_lookup(parent, name);
> 		if (dentry)
> 			break;
> 	}
>
>
> But perhaps naming isn't right, s/_scope/_loop/ ?
_loop seems straightforward.
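
(For context, the reason the early break in d_lookup() leaves
in_flat_atomic set is that the begin/end annotations are currently
split across read_seqbegin() and read_seqretry(); roughly, from memory:

	static inline unsigned read_seqbegin(const seqlock_t *sl)
	{
		unsigned ret = read_seqcount_begin(&sl->seqcount);

		/* Non-raw usage: assume a closing read_seqretry(). */
		kcsan_atomic_next(0);
		kcsan_flat_atomic_begin();
		return ret;
	}

	static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
	{
		/* "Flat" because read_seqretry() may be called more than once. */
		kcsan_flat_atomic_end();

		return read_seqcount_retry(&sl->seqcount, start);
	}

so a reader that breaks out without ever calling read_seqretry() never
runs kcsan_flat_atomic_end(). Tying the end annotation to the scope, as
below, side-steps that.)
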
> --- a/include/linux/seqlock.h
> +++ b/include/linux/seqlock.h
> @@ -829,6 +829,33 @@ static inline unsigned read_seqretry(con
> return read_seqcount_retry(&sl->seqcount, start);
> }
>
> +
> +static inline unsigned read_seq_scope_begin(const seqlock_t *sl)
> +{
> +	unsigned ret = read_seqcount_begin(&sl->seqcount);
> +	kcsan_atomic_next(0);
> +	kcsan_flat_atomic_begin();
> +	return ret;
> +}
> +
> +static inline void read_seq_scope_end(unsigned *seq)
> +{
> +	kcsan_flat_atomic_end();
If we are guaranteed to always have one _begin paired with a matching
_end, we can s/kcsan_flat_atomic/kcsan_nestable_atomic/ for these.
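I.e. roughly something like this (untested sketch, assuming the macro
always guarantees that pairing):

	static inline unsigned read_seq_scope_begin(const seqlock_t *sl)
	{
		unsigned ret = read_seqcount_begin(&sl->seqcount);

		kcsan_atomic_next(0);
		/* Always paired with read_seq_scope_end() via __cleanup. */
		kcsan_nestable_atomic_begin();
		return ret;
	}

	static inline void read_seq_scope_end(unsigned *seq)
	{
		kcsan_nestable_atomic_end();
	}
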
> +}
> +
> +static inline bool read_seq_scope_retry(const seqlock_t *sl, unsigned *seq)
> +{
> +	bool done = !read_seqcount_retry(&sl->seqcount, *seq);
> +	if (!done)
> +		*seq = read_seqcount_begin(&sl->seqcount);
> +	return done;
> +}
> +
> +#define read_seqcount_scope(sl) \
> +	for (unsigned seq __cleanup(read_seq_scope_end) = \
> +			read_seq_scope_begin(sl), done = 0; \
> +	     !done; done = read_seq_scope_retry(sl, &seq))
> +
That's nice! I recall Mark and I discussed something like that (on
IRC?) before we fully moved over to C11, but we gave up until C11
landed and then forgot. ;-)