Message-ID: <CACT4Y+ZNb_eCLVBz6cUyr0jVPdSW_-nCedcBAh0anfds91B2vw@mail.gmail.com>
Date: Wed, 8 Mar 2017 14:42:10 +0100
From: Dmitry Vyukov <dvyukov@...gle.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Mark Rutland <mark.rutland@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Andrey Ryabinin <aryabinin@...tuozzo.com>,
Ingo Molnar <mingo@...hat.com>,
kasan-dev <kasan-dev@...glegroups.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
"x86@...nel.org" <x86@...nel.org>,
Will Deacon <will.deacon@....com>
Subject: Re: [PATCH] x86, kasan: add KASAN checks to atomic operations
On Mon, Mar 6, 2017 at 9:35 PM, Peter Zijlstra <peterz@...radead.org> wrote:
> On Mon, Mar 06, 2017 at 04:20:18PM +0000, Mark Rutland wrote:
>> > >> So the problem is doing load/stores from asm bits, and GCC
>> > >> (traditionally) doesn't try and interpret APP asm bits.
>> > >>
>> > >> However, could we not write a GCC plugin that does exactly that?
>> > >> Something that interprets the APP asm bits and generates these KASAN
>> > >> bits that go with it?
>
>> I don't think there's much you'll be able to do within the compiler,
>> assuming you mean to derive this from the asm block inputs and outputs.
>
> Nah, I was thinking about a full asm interpreter.
>
>> Those can hide address-generation (e.g. with per-cpu stuff), which the
>> compiler may erroneously detect as racing.
>>
>> Those may also take fake inputs (e.g. the sp input to arm64's
>> __my_cpu_offset()) which may confuse matters.
>>
>> Parsing the assembly itself will be *extremely* painful due to the way
>> that's set up for run-time patching.
>
> Argh, yah, completely forgot about all that alternative and similar
> nonsense :/
I think if we scope compiler atomic builtins to KASAN/KTSAN/KMSAN (and
consequently x86/arm64) initially, it becomes more realistic. For the
tools we don't care about absolute efficiency, and this gets rid of
Will's points (2), (4) and (6) here: https://lwn.net/Articles/691295/.
Re (3), I think rmb/wmb can be reasonably replaced with
atomic_thread_fence(acquire/release). Re (5), the situation with
correctness improves quickly as more people use the builtins in
user space. Since KASAN is not intended to be used in production (or
at least such a build is expected to crash), we can afford to shake out
any remaining correctness issues in such a build. Re (1), I don't fully
understand it: what exactly is the problem with seq_cst?
I've sketched a patch that does it, and did some testing with/without
KASAN on x86_64.
In short, it adds include/linux/atomic_compiler.h, which
include/linux/atomic.h includes instead of <asm/atomic.h> when
CONFIG_COMPILER_ATOMIC is defined.
Bitops are handled similarly, except that only parts of asm/bitops.h
are selectively disabled under CONFIG_COMPILER_ATOMIC, because that
header also defines other things.
asm/barriers.h is left intact for now; we don't need it for KASAN. But
for KTSAN we can do a similar thing -- selectively disable some of the
barriers in asm/barriers.h (e.g. leaving dma_rmb/wmb per arch).
Such change would allow us to support atomic ops for multiple arches
for all of KASAN/KTSAN/KMSAN.
Thoughts?
View attachment "atomic_compiler.patch" of type "text/x-patch" (17432 bytes)