Message-ID: <330696c0-90c6-27de-5eb3-4da2159fdfbc@virtuozzo.com>
Date: Tue, 15 Jan 2019 20:07:07 +0300
From: Andrey Ryabinin <aryabinin@...tuozzo.com>
To: Dmitry Vyukov <dvyukov@...gle.com>,
Christophe Leroy <christophe.leroy@....fr>
Cc: Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
Michael Ellerman <mpe@...erman.id.au>,
Nicholas Piggin <npiggin@...il.com>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>,
Alexander Potapenko <glider@...gle.com>,
LKML <linux-kernel@...r.kernel.org>,
linuxppc-dev@...ts.ozlabs.org,
kasan-dev <kasan-dev@...glegroups.com>,
Linux-MM <linux-mm@...ck.org>
Subject: Re: [PATCH v3 1/3] powerpc/mm: prepare kernel for KAsan on PPC32
On 1/15/19 2:14 PM, Dmitry Vyukov wrote:
> On Tue, Jan 15, 2019 at 8:27 AM Christophe Leroy
> <christophe.leroy@....fr> wrote:
>> On 01/14/2019 09:34 AM, Dmitry Vyukov wrote:
>>> On Sat, Jan 12, 2019 at 12:16 PM Christophe Leroy
>>> <christophe.leroy@....fr> wrote:
>>> >
>>> > In kernel/cputable.c, explicitly use memcpy() in order
>>> > to allow GCC to replace it with __memcpy() when KASAN is
>>> > selected.
>>> >
>>> > Since commit 400c47d81ca38 ("powerpc32: memset: only use dcbz once cache is
>>> > enabled"), memset() can be used before activation of the cache,
>>> > so no need to use memset_io() for zeroing the BSS.
>>> >
>>> > Signed-off-by: Christophe Leroy <christophe.leroy@....fr>
>>> > ---
>>> > arch/powerpc/kernel/cputable.c | 4 ++--
>>> > arch/powerpc/kernel/setup_32.c | 6 ++----
>>> > 2 files changed, 4 insertions(+), 6 deletions(-)
>>> >
>>> > diff --git a/arch/powerpc/kernel/cputable.c b/arch/powerpc/kernel/cputable.c
>>> > index 1eab54bc6ee9..84814c8d1bcb 100644
>>> > --- a/arch/powerpc/kernel/cputable.c
>>> > +++ b/arch/powerpc/kernel/cputable.c
>>> > @@ -2147,7 +2147,7 @@ void __init set_cur_cpu_spec(struct cpu_spec *s)
>>> > struct cpu_spec *t = &the_cpu_spec;
>>> >
>>> > t = PTRRELOC(t);
>>> > - *t = *s;
>>> > + memcpy(t, s, sizeof(*t));
>>>
>>> Hi Christophe,
>>>
>>> I understand why you are doing this, but this looks a bit fragile and
>>> non-scalable. It may not work with the next compiler version, with a
>>> compiler version different from yours, with clang, etc.
>>
>> My feeling is that this change makes it more solid.
>>
>> My understanding is that when you write *t = *s, the compiler can copy
>> the struct however it wants.
>> When you call memcpy(), you ensure the copy is done that way and not
>> some other way, don't you ?
>
> It makes this single line more deterministic wrt code-gen (though,
> strictly speaking, the compiler can still turn memcpy back into inline
> instructions, since it knows memcpy's semantics anyway).
> But the problem I meant is that the set of places subject to this
> problem is not deterministic. So if we go with this solution, then
> after this change the status is "works on your machine", and we either
> need to commit to never using struct copies and zeroing throughout
> kernel code, or potentially face a long tail of other similar cases.
> And since they can be triggered by another compiler version, we may
> need to backport such changes to previous releases too. Whereas if we
> went with compiler flags, that would prevent the problem in all
> current and future places, and with other past/future compiler
> versions as well.
>
The patch will work with any compiler. The point of this patch is to make
the memcpy() call visible to the preprocessor, which replaces it with
__memcpy(). After preprocessing, the compiler sees only a __memcpy() call here.