Message-ID: <20180511105931.yyarmtz2gjkbuq2a@lakrids.cambridge.arm.com>
Date: Fri, 11 May 2018 11:59:32 +0100
From: Mark Rutland <mark.rutland@....com>
To: linux-kernel@...r.kernel.org
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Will Deacon <will.deacon@....com>
Subject: Re: [PATCH] perf/ring_buffer: ensure atomicity and order of updates

On Thu, May 10, 2018 at 02:06:32PM +0100, Mark Rutland wrote:

> -	smp_wmb(); /* B, matches C */
> -	rb->user_page->data_head = head;
> +	smp_store_release(&rb->user_page->data_head, head); /* B, matches C */
>
> -	rb->user_page->aux_head = rb->aux_head;
> +	smp_store_release(&rb->user_page->aux_head, rb->aux_head);
>
> -	rb->user_page->aux_head = rb->aux_head;
> +	smp_store_release(&rb->user_page->aux_head, rb->aux_head);
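
(For context, the pairing this change relies on is roughly the below.
This is only a sketch: the userspace side is paraphrased rather than
quoted from tools/perf, and the `pc' pointer to the mmap()ed
perf_event_mmap_page is illustrative.)

	/* kernel (producer), after this patch */
	smp_store_release(&rb->user_page->data_head, head);	/* B, matches C */

	/* userspace (consumer), sketch of the matching accesses */
	u64 head = smp_load_acquire(&pc->data_head);		/* C, matches B */
	/* ... consume records in [pc->data_tail, head) ... */
	smp_store_release(&pc->data_tail, head);		/* pairs with the kernel's data_tail read */
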
The kbuild test robot has helpfully discovered another latent bug here.
We assume we can make single-copy-atomic accesses to
{aux,data}_{head,tail}, but these are 64-bit fields, so that isn't
necessarily true on 32-bit architectures, and smp_store_release()
rightly complains at build time. READ_ONCE() and WRITE_ONCE()
"helpfully" make a silent fallback to a memcpy in this case, so we're
broken today, regardless of this change.

I suspect that in practice we get single-copy-atomicity for the 32-bit
halves, and sessions likely produce less than 4GiB of ringbuffer data,
so the upper 32 bits rarely change and failures would be rare.

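To make that concrete, here's a userspace-only sketch (not kernel code;
the values are picked purely for illustration) of the torn values a
reader could observe if the two 32-bit halves of data_head were stored
separately just as it crosses a 4GiB boundary:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t old = 0x00000000ffffffffULL;	/* just below 4GiB */
	uint64_t new = old + 1;			/* 0x100000000: both halves change */

	/* A reader racing with the two half-stores can see either mix: */
	uint64_t new_lo_old_hi = (old & 0xffffffff00000000ULL) | (uint32_t)new;
	uint64_t old_lo_new_hi = (new & 0xffffffff00000000ULL) | (uint32_t)old;

	printf("old=%#llx new=%#llx torn=%#llx or %#llx\n",
	       (unsigned long long)old, (unsigned long long)new,
	       (unsigned long long)new_lo_old_hi,
	       (unsigned long long)old_lo_new_hi);
	return 0;
}

Either torn value is wildly wrong (head apparently rewinding to 0, or
jumping ~4GiB ahead), so we only dodge this today because the upper
halves so rarely change.
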
I'm not sure how to fix the ABI here. The same issue applies on the
userspace side, so whatever we do, we need to fix both sides.

Thanks,
Mark.