Message-ID: <CACT4Y+YQ+3i=gY5M8UjFnw4NqR1x3XcEexUDcMMLfY_mV6TaQg@mail.gmail.com>
Date:   Fri, 24 Mar 2017 09:39:34 +0100
From:   Dmitry Vyukov <dvyukov@...gle.com>
To:     Ingo Molnar <mingo@...nel.org>
Cc:     Mark Rutland <mark.rutland@....com>,
        Peter Zijlstra <peterz@...radead.org>,
        Andrey Ryabinin <aryabinin@...tuozzo.com>,
        Ingo Molnar <mingo@...hat.com>,
        Will Deacon <will.deacon@....com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        kasan-dev <kasan-dev@...glegroups.com>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "x86@...nel.org" <x86@...nel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Peter Zijlstra <a.p.zijlstra@...llo.nl>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        "H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH 2/3] asm-generic, x86: wrap atomic operations

On Fri, Mar 24, 2017 at 8:14 AM, Dmitry Vyukov <dvyukov@...gle.com> wrote:
> On Fri, Mar 24, 2017 at 7:52 AM, Ingo Molnar <mingo@...nel.org> wrote:
>>
>> * Dmitry Vyukov <dvyukov@...gle.com> wrote:
>>
>>> KASAN uses compiler instrumentation to intercept all memory accesses.
>>> But it does not see memory accesses done in assembly code.
>>> One notable user of assembly code is atomic operations. Frequently,
>>> for example, an atomic reference decrement is the last access to an
>>> object and a good candidate for a racy use-after-free.
>>>
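As a concrete illustration of the race described in the previous
paragraph (a minimal sketch, not taken from the patch):

  #include <linux/atomic.h>
  #include <linux/slab.h>

  struct obj {
          atomic_t refcount;
          /* ... payload ... */
  };

  static void put_obj(struct obj *o)
  {
          /*
           * With a refcounting bug (e.g. a double put), this decrement
           * can run on already-freed memory. Compiler-based KASAN cannot
           * flag it, because the access happens inside the atomic op's
           * assembly implementation.
           */
          if (atomic_dec_and_test(&o->refcount))
                  kfree(o);
  }
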
>>> Atomic operations are defined in arch files, but KASAN instrumentation
>>> is required for several archs that support KASAN. Later we will need
>>> similar hooks for KMSAN (uninit use detector) and KTSAN (data race
>>> detector).
>>>
>>> This change introduces wrappers around atomic operations that can be
>>> used to add KASAN/KMSAN/KTSAN instrumentation across several archs.
>>> This patch uses the wrappers only for the x86 arch; arm64 will be
>>> switched over later.
>>>
>>> Signed-off-by: Dmitry Vyukov <dvyukov@...gle.com>
>>> Cc: Mark Rutland <mark.rutland@....com>
>>> Cc: Peter Zijlstra <peterz@...radead.org>
>>> Cc: Will Deacon <will.deacon@....com>
>>> Cc: Andrew Morton <akpm@...ux-foundation.org>
>>> Cc: Andrey Ryabinin <aryabinin@...tuozzo.com>
>>> Cc: Ingo Molnar <mingo@...hat.com>
>>> Cc: kasan-dev@...glegroups.com
>>> Cc: linux-mm@...ck.org
>>> Cc: linux-kernel@...r.kernel.org
>>> Cc: x86@...nel.org
>>> ---
>>>  arch/x86/include/asm/atomic.h             | 100 +++++++-------
>>>  arch/x86/include/asm/atomic64_32.h        |  86 ++++++------
>>>  arch/x86/include/asm/atomic64_64.h        |  90 ++++++-------
>>>  arch/x86/include/asm/cmpxchg.h            |  12 +-
>>>  arch/x86/include/asm/cmpxchg_32.h         |   8 +-
>>>  arch/x86/include/asm/cmpxchg_64.h         |   4 +-
>>>  include/asm-generic/atomic-instrumented.h | 210 ++++++++++++++++++++++++++++++
>>>  7 files changed, 367 insertions(+), 143 deletions(-)
>>
>> Ugh, that's disgusting really...
>>
>>>
>>> diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
>>> index 14635c5ea025..95dd167eb3af 100644
>>> --- a/arch/x86/include/asm/atomic.h
>>> +++ b/arch/x86/include/asm/atomic.h
>>> @@ -16,36 +16,46 @@
>>>  #define ATOMIC_INIT(i)       { (i) }
>>>
>>>  /**
>>> - * atomic_read - read atomic variable
>>> + * arch_atomic_read - read atomic variable
>>>   * @v: pointer of type atomic_t
>>>   *
>>>   * Atomically reads the value of @v.
>>>   */
>>> -static __always_inline int atomic_read(const atomic_t *v)
>>> +static __always_inline int arch_atomic_read(const atomic_t *v)
>>>  {
>>> -     return READ_ONCE((v)->counter);
>>> +     /*
>>> +      * We use READ_ONCE_NOCHECK() because atomic_read() contains KASAN
>>> +      * instrumentation. Double instrumentation is unnecessary.
>>> +      */
>>> +     return READ_ONCE_NOCHECK((v)->counter);
>>>  }
>
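For reference, the wrapper side in include/asm-generic/atomic-instrumented.h
then looks roughly like this. This is a sketch that assumes explicit
kasan_check_read() calls (the shape the file eventually took upstream), not
a quote from this patch:

  #include <linux/kasan-checks.h>

  static __always_inline int atomic_read(const atomic_t *v)
  {
          kasan_check_read(v, sizeof(*v));  /* KASAN checks the access here */
          return arch_atomic_read(v);       /* uninstrumented arch op */
  }
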
> Hello Ingo,
>
>> Firstly, the patch is way too large; please split off the new documentation
>> parts of the patch to reduce the size and to make it easier to read!
>>
>> Secondly, the next patch should do the rename to arch_atomic_*() pattern - and
>> nothing else:
>
> Next after what? Please provide the full list of patches as you see them.
> How do we avoid build breakage if we do only the rename in a separate patch?
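One way to split it without build breakage (an assumption on my side, not
something agreed in this thread) would be for the rename-only patch to alias
the old names back to the arch_ ones until the instrumented wrappers land:

  /* Temporary compatibility aliases; dropped once the wrappers are added. */
  #define atomic_read(v)          arch_atomic_read(v)
  #define atomic_set(v, i)        arch_atomic_set(v, i)
  /* ... one alias per renamed operation ... */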
>
>
>
>>>  /**
>>> - * atomic_set - set atomic variable
>>> + * arch_atomic_set - set atomic variable
>>>   * @v: pointer of type atomic_t
>>>   * @i: required value
>>>   *
>>>   * Atomically sets the value of @v to @i.
>>>   */
>>> -static __always_inline void atomic_set(atomic_t *v, int i)
>>> +static __always_inline void arch_atomic_set(atomic_t *v, int i)
>>
>>
>> Third, the prototype CPP complications:
>>
>>> +#define __INSTR_VOID1(op, sz)                                                \
>>> +static __always_inline void atomic##sz##_##op(atomic##sz##_t *v)     \
>>> +{                                                                    \
>>> +     arch_atomic##sz##_##op(v);                                      \
>>> +}
>>> +
>>> +#define INSTR_VOID1(op)      \
>>> +__INSTR_VOID1(op,);  \
>>> +__INSTR_VOID1(op, 64)
>>> +
>>> +INSTR_VOID1(inc);
>>> +INSTR_VOID1(dec);
>>> +
>>> +#undef __INSTR_VOID1
>>> +#undef INSTR_VOID1
>>> +
>>> +#define __INSTR_VOID2(op, sz, type)                                  \
>>> +static __always_inline void atomic##sz##_##op(type i, atomic##sz##_t *v)\
>>> +{                                                                    \
>>> +     arch_atomic##sz##_##op(i, v);                                   \
>>> +}
>>> +
>>> +#define INSTR_VOID2(op)              \
>>> +__INSTR_VOID2(op, , int);    \
>>> +__INSTR_VOID2(op, 64, long long)
>>> +
>>> +INSTR_VOID2(add);
>>> +INSTR_VOID2(sub);
>>> +INSTR_VOID2(and);
>>> +INSTR_VOID2(or);
>>> +INSTR_VOID2(xor);
>>> +
>>> +#undef __INSTR_VOID2
>>> +#undef INSTR_VOID2
>>> +
>>> +#define __INSTR_RET1(op, sz, type, rtype)                            \
>>> +static __always_inline rtype atomic##sz##_##op(atomic##sz##_t *v)    \
>>> +{                                                                    \
>>> +     return arch_atomic##sz##_##op(v);                               \
>>> +}
>>> +
>>> +#define INSTR_RET1(op)               \
>>> +__INSTR_RET1(op, , int, int);        \
>>> +__INSTR_RET1(op, 64, long long, long long)
>>> +
>>> +INSTR_RET1(inc_return);
>>> +INSTR_RET1(dec_return);
>>> +__INSTR_RET1(inc_not_zero, 64, long long, long long);
>>> +__INSTR_RET1(dec_if_positive, 64, long long, long long);
>>> +
>>> +#define INSTR_RET_BOOL1(op)  \
>>> +__INSTR_RET1(op, , int, bool);       \
>>> +__INSTR_RET1(op, 64, long long, bool)
>>> +
>>> +INSTR_RET_BOOL1(dec_and_test);
>>> +INSTR_RET_BOOL1(inc_and_test);
>>> +
>>> +#undef __INSTR_RET1
>>> +#undef INSTR_RET1
>>> +#undef INSTR_RET_BOOL1
>>> +
>>> +#define __INSTR_RET2(op, sz, type, rtype)                            \
>>> +static __always_inline rtype atomic##sz##_##op(type i, atomic##sz##_t *v) \
>>> +{                                                                    \
>>> +     return arch_atomic##sz##_##op(i, v);                            \
>>> +}
>>> +
>>> +#define INSTR_RET2(op)               \
>>> +__INSTR_RET2(op, , int, int);        \
>>> +__INSTR_RET2(op, 64, long long, long long)
>>> +
>>> +INSTR_RET2(add_return);
>>> +INSTR_RET2(sub_return);
>>> +INSTR_RET2(fetch_add);
>>> +INSTR_RET2(fetch_sub);
>>> +INSTR_RET2(fetch_and);
>>> +INSTR_RET2(fetch_or);
>>> +INSTR_RET2(fetch_xor);
>>> +
>>> +#define INSTR_RET_BOOL2(op)          \
>>> +__INSTR_RET2(op, , int, bool);               \
>>> +__INSTR_RET2(op, 64, long long, bool)
>>> +
>>> +INSTR_RET_BOOL2(sub_and_test);
>>> +INSTR_RET_BOOL2(add_negative);
>>> +
>>> +#undef __INSTR_RET2
>>> +#undef INSTR_RET2
>>> +#undef INSTR_RET_BOOL2
>>
>> These are just utterly disgusting: they turn perfectly readable code into an
>> unreadable, unmaintainable mess.
>>
>> You need to find some better, cleaner solution please, or convince me that no such
>> solution is possible. NAK for the time being.
>
> Well, I can just write all the functions out explicitly. Would that conform
> better to kernel style? I've just looked at the x86 atomic.h and it uses
> macros for a similar purpose (ATOMIC_OP/ATOMIC_FETCH_OP), so I thought that
> must be idiomatic kernel style...
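Concretely, writing the functions out by hand would turn each macro
instantiation into plain code. For example, INSTR_VOID2(add) above expands
to (a sketch of the expansion result):

  static __always_inline void atomic_add(int i, atomic_t *v)
  {
          arch_atomic_add(i, v);
  }

  static __always_inline void atomic64_add(long long i, atomic64_t *v)
  {
          arch_atomic64_add(i, v);
  }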


Stephen Rothwell reported that this patch conflicts with:
  a9ebf306f52c ("locking/atomic: Introduce atomic_try_cmpxchg()")
  e6790e4b5d5e ("locking/atomic/x86: Use atomic_try_cmpxchg()")
Does it make sense to rebase my patch on the tree where these patches
were added and then submit it against that tree?

I've also sent two fixes for this patch; if I resend it, I should also
squash in these fixes, right?
