Message-ID: <CAGXu5j+PmPUg9-gx3769X+kjXJo+markuPA3FwredM7jU8CdXg@mail.gmail.com>
Date:   Wed, 18 Apr 2018 09:38:07 -0700
From:   Kees Cook <keescook@...omium.org>
To:     Andrew Morton <akpm@...ux-foundation.org>
Cc:     Michal Hocko <mhocko@...nel.org>,
        Andy Lutomirski <luto@...nel.org>,
        Laura Abbott <labbott@...hat.com>,
        Rasmus Villemoes <rasmus.villemoes@...vas.dk>,
        LKML <linux-kernel@...r.kernel.org>,
        Kernel Hardening <kernel-hardening@...ts.openwall.com>
Subject: Re: [PATCH v2] fork: Unconditionally clear stack on fork

On Wed, Feb 21, 2018 at 6:15 PM, Kees Cook <keescook@...omium.org> wrote:
> On Wed, Feb 21, 2018 at 12:59 PM, Andrew Morton
> <akpm@...ux-foundation.org> wrote:
>> On Wed, 21 Feb 2018 11:29:33 +0100 Michal Hocko <mhocko@...nel.org> wrote:
>>
>>> On Tue 20-02-18 18:16:59, Kees Cook wrote:
>>> > One of the classes of kernel stack content leaks[1] is the exposure
>>> > of prior heap or stack contents when a new process stack is
>>> > allocated. Normally, those stacks are not zeroed, and the old
>>> > contents remain in place. In the face of stack content exposure
>>> > flaws, those contents can leak to userspace.
>>> >
>>> > Fixing this will make the kernel no longer vulnerable to these flaws,
>>> > as the stack will be wiped each time a stack is assigned to a new
>>> > process. There's not a meaningful change in runtime performance; it
>>> > almost looks like it provides a benefit.
>>> >
>>> > Performing back-to-back kernel builds before:
>>> >     Run times: 157.86 157.09 158.90 160.94 160.80
>>> >     Mean: 159.12
>>> >     Std Dev: 1.54
>>> >
>>> > and after:
>>> >     Run times: 159.31 157.34 156.71 158.15 160.81
>>> >     Mean: 158.46
>>> >     Std Dev: 1.46
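
For anyone reading without the patch handy: the change amounts to
wiping the stack memory at allocation time. Below is a simplified
sketch of the idea, loosely modeled on kernel/fork.c's vmap-stack
reuse cache; it is illustrative only, not the actual diff.

/*
 * Sketch only: assumes CONFIG_VMAP_STACK and the per-cpu
 * cached_stacks[] reuse cache as in kernel/fork.c.
 */
static unsigned long *alloc_thread_stack_node(struct task_struct *tsk,
					      int node)
{
	int i;

	for (i = 0; i < NR_CACHED_STACKS; i++) {
		struct vm_struct *s = this_cpu_xchg(cached_stacks[i], NULL);

		if (!s)
			continue;

		/* Wipe the reused stack so prior contents cannot leak. */
		memset(s->addr, 0, THREAD_SIZE);

		tsk->stack_vm_area = s;
		return s->addr;
	}

	/*
	 * Fallback allocation elided; a fresh stack can simply be
	 * requested pre-zeroed (e.g. with __GFP_ZERO in the
	 * allocation flags), so new stacks start out clear too.
	 */
	return NULL;
}
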
>>>
>>> /bin/true or similar would be more representative of the worst case,
>>> but it is good to see that this doesn't have any visible effect on
>>> a more realistic use case.
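
A minimal harness for that worst case could look like the following;
this is a hypothetical sketch, not necessarily the exact loop behind
the cycle counts further down, and it is meant to be run under perf,
e.g. "perf stat -e cycles ./exec-bench 100000":

/* exec-bench.c: fork+exec /bin/true repeatedly.
 * Build: gcc -O2 -o exec-bench exec-bench.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
	long i, count = argc > 1 ? atol(argv[1]) : 100000;

	for (i = 0; i < count; i++) {
		pid_t pid = fork();

		if (pid == 0) {
			execl("/bin/true", "true", (char *)NULL);
			_exit(127);	/* only reached if exec failed */
		}
		if (pid < 0) {
			perror("fork");
			return 1;
		}
		/* Each exec sets up a fresh kernel stack for the child. */
		waitpid(pid, NULL, 0);
	}
	return 0;
}
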
>>
>> Yes, that's a pretty large memset.  And while it will populate the CPU
>> cache with the stack contents, doing so will evict other things.
>>
>> So some quite careful quantitative testing is needed here, methinks.
>
> Well, I did some more with perf and cycle counts on running 100,000
> execs of /bin/true.
>
> before:
> Cycles: 218858861551 218853036130 214727610969 227656844122 224980542841
> Mean:  221015379122.60
> Std Dev: 4662486552.47
>
> after:
> Cycles: 213868945060 213119275204 211820169456 224426673259 225489986348
> Mean:  217745009865.40
> Std Dev: 5935559279.99
>
> It continues to look like it's faster, though the deviation is rather
> wide. I'm not sure what I could do that would be less noisy; I'm
> open to ideas!
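
For reference, the Std Dev figures in this thread match the population
standard deviation (dividing by N rather than N-1). A quick
self-contained check against the "after" cycle counts above:

/* stats.c: reproduce the mean/std-dev summary above.
 * Build: gcc -o stats stats.c -lm
 */
#include <math.h>
#include <stdio.h>

int main(void)
{
	static const double cycles[] = {
		213868945060, 213119275204, 211820169456,
		224426673259, 225489986348,
	};
	const int n = sizeof(cycles) / sizeof(cycles[0]);
	double sum = 0.0, sq = 0.0, mean;
	int i;

	for (i = 0; i < n; i++)
		sum += cycles[i];
	mean = sum / n;

	for (i = 0; i < n; i++)
		sq += (cycles[i] - mean) * (cycles[i] - mean);

	/* Should print the Mean and Std Dev quoted above. */
	printf("Mean: %.2f\nStd Dev: %.2f\n", mean, sqrt(sq / n));
	return 0;
}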

Friendly ping. Andrew, can you add this to -mm?

-Kees

-- 
Kees Cook
Pixel Security
