Message-ID: <YNlcgryyawTxPz//@gmail.com>
Date: Mon, 28 Jun 2021 07:22:10 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Christian Brauner <christian.brauner@...ntu.com>,
Oleg Nesterov <oleg@...hat.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [GIT PULL] sigqueue cache fix

* Ingo Molnar <mingo@...nel.org> wrote:

> - Producer <-> consumer: this is the most interesting race, and I think
> it's unsafe in theory, because the producer doesn't make sure that any
> previous writes to the actual queue entry (struct sigqueue *q) have
> reached storage before the new 'free' entry is advertised to consumers.
>
> So in principle CPU#0 could see a new sigqueue entry and use it, before
> it's fully freed.
>
> In *practice* it's probably safe by accident (or by undocumented
> intent), because there's an atomic op shortly before putting the
> queue entry into the sigqueue_cache, in __sigqueue_free():
>
>         if (atomic_dec_and_test(&q->user->sigpending))
>                 free_uid(q->user);
>
> And atomic_dec_and_test() implies a full barrier - although I haven't
> found the place where we document it: Documentation/memory-barriers.txt
> is silent on it. We should probably fix that too.
>
> At minimum the patch adding the ->sigqueue_cache should include a race
> analysis clearly documenting the implicit barrier after the
> atomic_dec_and_test().
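
For reference, the producer path in question looks roughly like this -
only a sketch (PREALLOC handling etc. omitted), assuming the
->sigqueue_cache patch keeps the existing __sigqueue_free() flow and just
routes the final free through a sigqueue_cache_or_free() helper:

  static void __sigqueue_free(struct sigqueue *q)
  {
          /* Fully ordered RMW - the implicit full barrier above: */
          if (atomic_dec_and_test(&q->user->sigpending))
                  free_uid(q->user);      /* reads q->user *after* the RMW */

          /* ... and only then is the entry advertised to consumers: */
          sigqueue_cache_or_free(q);
  }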

I just realized that even with that implicit full barrier it's not safe:
the producer uses q->user after the atomic_dec_and_test(). That access is
not ordered against the later write to ->sigqueue_cache - so another CPU
might see the entry in ->sigqueue_cache, reuse it, and corrupt q->user ...

So I think this code might have a real race on LL/SC platforms and we'll
need an smp_mb() in sigqueue_cache_or_free()?
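
Something like the below is what I mean - only a sketch, assuming
sigqueue_cache_or_free() does a READ_ONCE()/WRITE_ONCE() dance on
current->sigqueue_cache; maybe an smp_store_release() on the publishing
store would be enough instead of a full smp_mb(), if the consumer side's
address dependency can be relied upon:

  static void sigqueue_cache_or_free(struct sigqueue *q)
  {
          if (!READ_ONCE(current->sigqueue_cache)) {
                  /*
                   * Order the earlier accesses to *q and q->user
                   * (atomic_dec_and_test(), free_uid()) before the
                   * store that advertises the entry in ->sigqueue_cache.
                   */
                  smp_mb();
                  WRITE_ONCE(current->sigqueue_cache, q);
          } else {
                  kmem_cache_free(sigqueue_cachep, q);
          }
  }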

Thanks,

	Ingo