Message-ID: <871rg6yf1i.fsf@oldenburg2.str.redhat.com>
Date:   Thu, 03 Dec 2020 18:28:57 +0100
From:   Florian Weimer <fweimer@...hat.com>
To:     Andy Lutomirski <luto@...capital.net>
Cc:     Topi Miettinen <toiwoton@...il.com>,
        linux-hardening@...r.kernel.org, akpm@...ux-foundation.org,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Jann Horn <jannh@...gle.com>,
        Kees Cook <keescook@...omium.org>,
        Matthew Wilcox <willy@...radead.org>,
        Mike Rapoport <rppt@...nel.org>,
        Linux API <linux-api@...r.kernel.org>
Subject: Re: [PATCH v5] mm: Optional full ASLR for mmap(), mremap(), vdso
 and stack

* Andy Lutomirski:

> If you want a 4GB allocation to succeed, you can only divide the
> address space into 32k fragments.  Or, a little more precisely, if you
> want a randomly selected 4GB region to be empty, any other allocation
> has a 1/32k chance of being in the way.  (Rough numbers — I’m ignoring
> effects of the beginning and end of the address space, and I’m
> ignoring the size of a potential conflicting allocation.).

I think the probability distribution is way more advantageous than that
because it is unlikely that 32K allocations are all exactly spaced 4 GB
apart.  (And with 32K allocations, you are close to the VMA limit anyway.)

My knowledge of probability theory is quite limited, so I have to rely
on simulations.  But I think you would see a 40 GiB gap somewhere for a
47-bit address space with 32K allocations, most of the time.  Which is
not too bad.
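
Something along these lines, as a minimal Python sketch (treating each
allocation as a single point, i.e. ignoring mapping sizes, is a
simplifying assumption on my part):

import random

ADDR_SPACE = 1 << 47                    # 47-bit user address space

def median_largest_gap(n_allocs, trials=100):
    maxima = []
    for _ in range(trials):
        # Fully randomized placement: each allocation lands at a
        # uniformly random address.
        points = sorted(random.randrange(ADDR_SPACE)
                        for _ in range(n_allocs))
        gaps = [b - a for a, b in zip(points, points[1:])]
        gaps.append(points[0])                  # gap below the lowest
        gaps.append(ADDR_SPACE - points[-1])    # gap above the highest
        maxima.append(max(gaps))
    maxima.sort()
    return maxima[len(maxima) // 2]             # median over the trials

GiB = 1 << 30
print(median_largest_gap(32 * 1024) / GiB)      # the 32K-allocation case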

But even with a 47-bit address space and just 500 threads (each with at
least a stack and a local heap, randomized independently), simulations
suggest that the largest gap is often just 850 GB.  At that point, you
can't expect to map your NVDIMM (or whatever) in a single mapping
anymore, and you have to code around that.
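
(The sketch above models this case as median_largest_gap(1000), i.e.
500 stacks plus 500 heaps.)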

Not randomizing large allocations and sacrificing one bit of randomness
for small allocations would avoid this issue, though.
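
A sketch of that placement policy (the 1 GiB cutoff for "large" is an
assumption of mine, not something from the patch):

import random

ADDR_SPACE = 1 << 47
LARGE = 1 << 30                         # assumed cutoff for "large"

def pick_base(size):
    # Large requests keep the default, non-randomized placement so
    # that huge contiguous regions remain available.
    if size >= LARGE:
        return None
    # Small requests are randomized over the low half only, i.e. one
    # bit of randomness is sacrificed.
    return random.randrange(ADDR_SPACE >> 1)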

(I still expect page walking performance to suffer drastically, with or
without this tweak.  I assume page walking uses the CPU cache hierarchy
today, and with full randomization, accessing the page-table entry at
each level after a TLB miss would result in a data cache miss.  But
then, I'm
firmly a software person.)

Thanks,
Florian
-- 
Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill
