Message-ID: <20200916140532.GA1362448@hirez.programming.kicks-ass.net>
Date: Wed, 16 Sep 2020 16:05:32 +0200
From: peterz@...radead.org
To: Jiri Olsa <jolsa@...hat.com>
Cc: Namhyung Kim <namhyung@...nel.org>, Jiri Olsa <jolsa@...nel.org>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
lkml <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...nel.org>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Wade Mealing <wmealing@...hat.com>
Subject: Re: [PATCHv2] perf: Fix race in perf_mmap_close function
On Wed, Sep 16, 2020 at 01:53:11PM +0200, Jiri Olsa wrote:
> There's a possible race in perf_mmap_close when checking the ring
> buffer's mmap_count refcount value. The problem is that the mmap_count
> check is not atomic, because we call atomic_dec and atomic_read
> separately.
>
> perf_mmap_close:
> ...
> 	atomic_dec(&rb->mmap_count);
> ...
> 	if (atomic_read(&rb->mmap_count))
> 		goto out_put;
>
> 	<ring buffer detach>
> 	free_uid
>
> out_put:
> 	ring_buffer_put(rb); /* could be last */
>
> The race can happen when we have two (or more) events sharing the same
> ring buffer: both go through atomic_dec, and then both see 0 as the
> refcount value in the later atomic_read. Both will then go on and
> execute code that is meant to be run just once.
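Spelled out as a concrete interleaving (an illustrative sketch, assuming
two events and mmap_count starting at 2; not taken from the patch):

	CPU0				CPU1
	atomic_dec()	/* 2 -> 1 */
					atomic_dec()	/* 1 -> 0 */
	atomic_read() == 0
					atomic_read() == 0
	<detach + free_uid>
					<detach + free_uid>	/* runs twice */
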
The trivial case should be protected by mmap_sem; we call vm_ops->close()
with mmap_sem held for writing IIRC. But yes, I think it's possible to
construct a failure here.
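
For illustration, one way to close that window is to fold the decrement
and the zero-test into a single atomic operation, so that at most one
closer observes the count hitting zero (a sketch of the direction only,
not the actual patch):

	if (!atomic_dec_and_test(&rb->mmap_count))
		goto out_put;

	/* only the thread whose decrement took the count to zero
	 * gets here, so the detach path runs exactly once */
	<ring buffer detach>
	free_uid

out_put:
	ring_buffer_put(rb); /* could be last */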