Date:   Thu, 4 Oct 2018 09:44:35 -0700
From:   Dan Williams <dan.j.williams@...el.com>
To:     Michal Hocko <mhocko@...nel.org>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Kees Cook <keescook@...omium.org>,
        Linux MM <linux-mm@...ck.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 0/3] Randomize free memory

Hi Michal,

On Thu, Oct 4, 2018 at 12:53 AM Michal Hocko <mhocko@...nel.org> wrote:
>
> On Wed 03-10-18 19:15:18, Dan Williams wrote:
> > Changes since v1:
> > * Add support for shuffling hot-added memory (Andrew)
> > * Update cover letter and commit message to clarify the performance impact
> >   and relevance to future platforms
>
> I believe this hasn't addressed my questions in
> http://lkml.kernel.org/r/20181002143015.GX18290@dhcp22.suse.cz. Namely
> "
> It is the more general idea that I am not really sure about. First of
> all, does it make _any_ sense to randomize 4MB blocks by default? Why
> can't we simply have it disabled?

I'm not aware of any CVE that this would directly preclude, but that
said, the entropy injected at 4MB boundaries raises the bar on heap
attacks. Environments that want more can adjust that with the boot
parameter. Given the potential benefits, I think it would only make
sense to disable it by default if there were a significant runtime
impact, and from what I have seen there isn't.
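
To make the 4MB-granularity point concrete, here is a rough userspace
sketch (not the patch code; shuffle_blocks(), NR_BLOCKS and the use of
rand() are made up for illustration) of the kind of per-zone shuffle
being discussed: the order in which the MAX_ORDER-sized blocks are
handed out gets permuted, so a freed block no longer comes back in a
predictable position.

/*
 * Illustrative userspace sketch only -- not kernel code. Permutes the
 * order of pretend 4MB blocks with a Fisher-Yates shuffle.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NR_BLOCKS 16	/* pretend the zone has 16 such blocks */

static void shuffle_blocks(unsigned long *blocks, size_t n)
{
	for (size_t i = n - 1; i > 0; i--) {
		size_t j = (size_t)rand() % (i + 1);
		unsigned long tmp = blocks[i];

		blocks[i] = blocks[j];
		blocks[j] = tmp;
	}
}

int main(void)
{
	unsigned long blocks[NR_BLOCKS];

	srand((unsigned int)time(NULL));
	for (size_t i = 0; i < NR_BLOCKS; i++)
		blocks[i] = i;

	shuffle_blocks(blocks, NR_BLOCKS);

	for (size_t i = 0; i < NR_BLOCKS; i++)
		printf("slot %2zu -> block %2lu\n", i, blocks[i]);
	return 0;
}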

> Then a more concerning question is:
> does it even make sense to have this randomization applied to orders
> higher than 0? An attacker might fragment the memory, keep recycling
> the lowest order, and get the predictable behavior that we have right now.

Certainly I expect there are attacks that can operate within a 4MB
window, as I expect there are attacks that could operate within a 4K
window that would need sub-page randomization to deter. In fact I
believe that is the motivation for CONFIG_SLAB_FREELIST_RANDOM.
Combining that with page allocator randomization makes the kernel less
predictable.
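
As a back-of-the-envelope illustration (userspace sketch, made-up
sizes, assuming each layer yields an independent, uniformly random
permutation), here is roughly how much ordering entropy each layer
contributes on its own, and why stacking them helps:

/* Compile with: cc entropy.c -lm (illustrative only) */
#include <math.h>
#include <stdio.h>

static double perm_bits(unsigned int n)
{
	/* bits of entropy in a uniform permutation of n items: log2(n!) */
	return lgamma((double)n + 1.0) / log(2.0);
}

int main(void)
{
	unsigned int blocks = 16;	 /* pretend 4MB blocks in a zone */
	unsigned int objs_per_slab = 64; /* pretend objects per slab page */

	printf("page allocator shuffle: %6.1f bits\n", perm_bits(blocks));
	printf("slab freelist shuffle:  %6.1f bits\n",
	       perm_bits(objs_per_slab));
	printf("combined:               %6.1f bits\n",
	       perm_bits(blocks) + perm_bits(objs_per_slab));
	return 0;
}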

Is that enough justification for this patch on its own? It's
debatable. Combine it, though, with the wider availability of
platforms with a memory-side cache, and I think it's a reasonable
default behavior for the kernel to deploy.
