Date:   Mon, 30 Sep 2019 13:18:04 +0100
From:   Will Deacon <will@...nel.org>
To:     Masahiro Yamada <yamada.masahiro@...ionext.com>
Cc:     Linus Torvalds <torvalds@...ux-foundation.org>,
        Nick Desaulniers <ndesaulniers@...gle.com>,
        Nicolas Saenz Julienne <nsaenzjulienne@...e.de>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        Miguel Ojeda <miguel.ojeda.sandonis@...il.com>,
        linux-arch <linux-arch@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Catalin Marinas <catalin.marinas@....com>,
        Russell King <rmk+kernel@....linux.org.uk>,
        Stefan Wahren <wahrenst@....net>,
        Kees Cook <keescook@...gle.com>
Subject: Re: [PATCH] compiler: enable CONFIG_OPTIMIZE_INLINING forcibly

On Mon, Sep 30, 2019 at 09:05:11PM +0900, Masahiro Yamada wrote:
> On Mon, Sep 30, 2019 at 8:26 PM Will Deacon <will@...nel.org> wrote:
> > On Fri, Sep 27, 2019 at 03:38:44PM -0700, Linus Torvalds wrote:
> > > Some of that code is pretty subtle. They have fixed register usage
> > > (but the asm macros actually check them). And the inline asms clobber
> > > the link register, but they do seem to clearly _state_ that they
> > > clobber it, so who knows.
> > >
> > > Just based on the EFAULT, I'd _guess_ that it's some interaction with
> > > the domain access control register (so that get/set_domain() thing).
> > > But I'm not even sure that code is enabled for the Rpi2, so who
> > > knows..
> >
> > FWIW, we've run into issues with CONFIG_OPTIMIZE_INLINING and local
> > variables marked as 'register' where GCC would do crazy things and end
> > up corrupting data, so I suspect the use of fixed registers in the arm
> > uaccess functions is hitting something similar:
> >
> > https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91111
> 
> No. Not similar at all.

They're similar in that enabling CONFIG_OPTIMIZE_INLINING causes explicit
register variables to be miscompiled. I agree that the ARM code looks
dodgy with
that call to uaccess_save_and_enable(), but there are __asmeq macros
in there to try to catch that, so it's still very fishy.
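
To make the failure mode concrete, here's a minimal sketch of the
pattern in question -- hypothetical code, not the actual arm uaccess
routines: operands pinned to fixed registers via register-asm
variables, consumed by an inline asm, with an __asmeq-style
assembler-time check that the compiler honoured the binding (the real
__asmeq definition differs in detail):

/* Sketch only: the shape of the fixed-register pattern, not real
 * kernel code. */

/* Fail at assembly time if operand x was not allocated register y
 * (same idea as __asmeq; the exact spelling here is illustrative). */
#define reg_check(x, y)	".ifnc " x "," y "\n\t.err\n\t.endif\n\t"

static inline unsigned long pinned_reg_op(unsigned long addr)
{
	register unsigned long r0 asm("r0") = addr;	/* input must be in r0 */
	register unsigned long ret asm("r0");		/* result expected in r0 */

	asm volatile(
		reg_check("%0", "r0")
		reg_check("%1", "r0")
	"	add	%0, %1, #1\n"		/* stand-in for the real op */
	: "=r" (ret)
	: "r" (r0)
	: "lr", "cc");				/* lr clobbered, as in the real code */

	return ret;
}

The reg_check lines catch the compiler binding an operand to the wrong
register, but they can't catch it putting the wrong value into the
right register, which is the sort of failure being described here --
hence "fishy".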

> I fixed it already. See
> https://lore.kernel.org/patchwork/patch/1132459/

You fixed the specific case above for 32-bit ARM, but the arm64 case
is due to a compiler bug. As it happens, we've reworked our atomics
in 5.4 so that particular issue no longer triggers, but the fact remains
that GCC has been shown to screw up explicit register allocation for
perfectly legitimate code when given the flexibility to move code out
of line.

> The problems are fixable by writing correct code.

Right, in the compiler ;)

> I think we discussed this already.

We did?

> - There is nothing arch-specific in CONFIG_OPTIMIZE_INLINING

Apart from the bugs... and even then, that's just based on reports.

> - 'inline' is just a hint. It does not guarantee function inlining.
>   This is standard.
> 
> - The kernel macrofies 'inline' to add __attribute__((__always_inline__))
>   This terrible hack must end.

I'm all for getting rid of hacks, but not at the cost of correctness.
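
For reference, the macro hack in question looks roughly like this -- a
paraphrased sketch rather than the verbatim definition in
include/linux/compiler_types.h, which carries additional attributes
(__gnu_inline, notrace, and friends):

#if !defined(CONFIG_OPTIMIZE_INLINING)
/* Every 'inline' becomes a demand rather than a hint. */
#define inline	inline __attribute__((__always_inline__))
#else
/* Leave inlining decisions to the compiler. */
#define inline	inline
#endif

With CONFIG_OPTIMIZE_INLINING=y the compiler is free to emit an
'inline' function out of line, and that extra freedom is what exposes
the register-allocation problem above.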

> -  __attribute__((__always_inline__)) takes away the compiler's freedom,
>    and prevents it from optimizing the code for -O2, -Os, or whatever.

s/whatever/miscompiling the code/

If it helps, here is more information about the arm64 failure which
triggered the GCC bugzilla:

https://www.spinics.net/lists/arm-kernel/msg730329.html

Will
