Date:   Sat, 11 May 2019 15:45:19 -0700
From:   Andy Lutomirski <luto@...nel.org>
To:     Ingo Molnar <mingo@...nel.org>
Cc:     "Reshetova, Elena" <elena.reshetova@...el.com>,
        David Laight <David.Laight@...lab.com>,
        Andy Lutomirski <luto@...nel.org>,
        "Theodore Ts'o" <tytso@....edu>,
        Eric Biggers <ebiggers3@...il.com>,
        "ebiggers@...gle.com" <ebiggers@...gle.com>,
        "herbert@...dor.apana.org.au" <herbert@...dor.apana.org.au>,
        Peter Zijlstra <peterz@...radead.org>,
        "keescook@...omium.org" <keescook@...omium.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "jpoimboe@...hat.com" <jpoimboe@...hat.com>,
        "jannh@...gle.com" <jannh@...gle.com>,
        "Perla, Enrico" <enrico.perla@...el.com>,
        "mingo@...hat.com" <mingo@...hat.com>,
        "bp@...en8.de" <bp@...en8.de>,
        "tglx@...utronix.de" <tglx@...utronix.de>,
        "gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
        "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: [PATCH] x86/entry/64: randomize kernel stack offset upon syscall

On Thu, May 9, 2019 at 1:43 AM Ingo Molnar <mingo@...nel.org> wrote:
>
>
> * Reshetova, Elena <elena.reshetova@...el.com> wrote:
>
> > > I find it ridiculous that even with a 4K-blocked get_random_bytes(),
> > > which gives us 32k bits and, at 5 bits consumed per syscall, should
> > > amortize the RNG call to something like "once per 6553 calls", we
> > > still see 17% overhead? It's either a measurement artifact, or
> > > something doesn't compute.
> >
> > If you check what happens underneath get_random_bytes(), there is a
> > fair amount of work going on, including reseeding the CRNG if the
> > reseeding interval has passed (see _extract_crng()). It even attempts
> > to stir in more entropy from rdrand if available.
> >
> > I will look into this whole construction carefully now to investigate.
> > I also didn't optimize anything yet (I take 8 bits at a time for the
> > offset), but such small optimizations won't bring the performance impact
> > from 17% down to 2%, so they are pointless for now; we need a more
> > radical shift.
>
> So assuming that the 17% overhead primarily comes from get_random_bytes()
> (does it? I don't know), that's incredibly slow for something like the
> system call entry path, even if it's batched.
>
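[For reference, a minimal userspace sketch of the batching arithmetic quoted
above: one bulk RNG call fills a 4 KiB buffer, and each "syscall" then only
consumes a few bits of it. getrandom(2) stands in for the kernel's
get_random_bytes(), and all names here (rand_buf, get_stack_offset(), ...)
are invented for the sketch; it models the amortization only, not the actual
entry-path code.]

/*
 * 4096 bytes = 32768 bits.  Packing 5 bits per call would amortize the
 * bulk RNG call to once per ~6553 calls; taking a whole byte per call
 * (the 8-bits-at-a-time variant) still amortizes it to once per 4096.
 */
#include <stdio.h>
#include <stdint.h>
#include <sys/random.h>

#define RAND_BUF_BYTES 4096
#define OFFSET_MASK    0x1f    /* keep 5 bits of offset per call */

static uint8_t  rand_buf[RAND_BUF_BYTES];
static uint32_t rand_pos = RAND_BUF_BYTES;    /* start "empty" */
static uint32_t refills;

static uint8_t get_stack_offset(void)
{
    if (rand_pos >= RAND_BUF_BYTES) {
        /* Slow path: one bulk RNG call per 4096 fast-path calls. */
        if (getrandom(rand_buf, sizeof(rand_buf), 0) !=
            (ssize_t)sizeof(rand_buf))
            return 0;
        rand_pos = 0;
        refills++;
    }
    /* Fast path: one byte load, keep the low 5 bits. */
    return rand_buf[rand_pos++] & OFFSET_MASK;
}

int main(void)
{
    for (int i = 0; i < 1000000; i++)
        (void)get_stack_offset();

    printf("1000000 calls, %u bulk refills (one per %u calls)\n",
           refills, RAND_BUF_BYTES);
    return 0;
}

[With 1,000,000 calls this reports ~245 refills, i.e. the expensive path runs
roughly once per 4096 calls, which is why the quoted 17% figure looks
surprising if the bulk call were the only cost.]
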

ISTM maybe a better first step would be to make get_random_bytes() be
much faster? :)

--Andy
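
[For completeness, a rough userspace way to sanity-check Ingo's "does it?"
question, i.e. whether the bulk RNG call can plausibly account for the
reported overhead once batched. getrandom(2) again stands in for
get_random_bytes(); the absolute numbers say nothing about the kernel entry
path, only about the relative cost of the refill versus the buffered fast
path in the sketch above.]

#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <sys/random.h>

#define BUF_BYTES   4096
#define BULK_ITERS  (1 << 14)    /* 16384 bulk refills, 64 MiB total */
#define FAST_ITERS  (1 << 22)    /* ~4M buffered reads */

static uint8_t buf[BUF_BYTES];

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

int main(void)
{
    volatile uint8_t sink = 0;
    uint64_t t0, t1;

    /* Cost of one bulk refill, and that cost amortized per 5-bit offset. */
    t0 = now_ns();
    for (int i = 0; i < BULK_ITERS; i++)
        getrandom(buf, sizeof(buf), 0);
    t1 = now_ns();
    printf("bulk refill: %.0f ns per 4 KiB, %.3f ns per 5-bit offset\n",
           (double)(t1 - t0) / BULK_ITERS,
           (double)(t1 - t0) / BULK_ITERS / (BUF_BYTES * 8 / 5));

    /* Cost of the buffered fast path: a byte load plus a mask. */
    t0 = now_ns();
    for (int i = 0; i < FAST_ITERS; i++)
        sink = buf[i % BUF_BYTES] & 0x1f;
    t1 = now_ns();
    printf("fast path:   %.3f ns per call\n",
           (double)(t1 - t0) / FAST_ITERS);

    (void)sink;
    return 0;
}

[If the amortized per-offset cost here comes out in the low nanoseconds, a
17% syscall-path hit would point at something other than the bulk RNG call
itself.]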
