Message-ID: <20201203180009.GJ11935@casper.infradead.org>
Date: Thu, 3 Dec 2020 18:00:09 +0000
From: Matthew Wilcox <willy@...radead.org>
To: Andy Lutomirski <luto@...capital.net>
Cc: Florian Weimer <fweimer@...hat.com>,
	Topi Miettinen <toiwoton@...il.com>,
	linux-hardening@...r.kernel.org,
	akpm@...ux-foundation.org,
	linux-mm@...ck.org,
	linux-kernel@...r.kernel.org,
	Jann Horn <jannh@...gle.com>,
	Kees Cook <keescook@...omium.org>,
	Mike Rapoport <rppt@...nel.org>,
	Linux API <linux-api@...r.kernel.org>
Subject: Re: [PATCH v5] mm: Optional full ASLR for mmap(), mremap(), vdso and stack

On Thu, Dec 03, 2020 at 09:42:54AM -0800, Andy Lutomirski wrote:
> I suspect that something much more clever could be done, in which the
> heap is divided up into a few independently randomized sections and
> heap pages are randomized within those sections; that might do much
> better. There should certainly be a lot of room for something between
> what we have now and a fully randomized scheme.
>
> It might also be worth looking at what other OSes do.

How about dividing the address space up into 1GB sections (or, rather,
PUD_SIZE sections), allocating from each one until it's 50% full, then
choosing another one?  Sufficiently large allocations would ignore this
division and just look for any space.

I'm thinking of something like the slab allocator (so the 1GB chunk
would go back onto the allocatable list when >50% of it was empty).
That might strike a happy medium between full randomisation and
efficient use of page tables / leaving large chunks of address space
free for large mmaps.
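
[Editorial note: to make the proposed scheme concrete, here is a minimal
user-space sketch, not kernel code. The toy 64GB address space, the
bitmap representation, and names like alloc_page_in() are invented for
illustration only; the slab-like free path (returning a chunk to the
allocatable list once it drops below 50% occupancy) is omitted.]

/*
 * Sketch: carve the address space into PUD_SIZE (1GB) sections, hand
 * out randomized page-sized allocations from the current section until
 * it crosses 50% occupancy, then pick a fresh random section.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define PAGE_SHIFT	12
#define PUD_SHIFT	30			/* 1GB sections */
#define PAGES_PER_SEC	(1UL << (PUD_SHIFT - PAGE_SHIFT))
#define NSECTIONS	64			/* toy address space: 64GB */

struct section {
	unsigned long used;			/* pages allocated */
	unsigned char map[PAGES_PER_SEC / 8];	/* per-page bitmap */
};

static struct section secs[NSECTIONS];
static int cur = -1;				/* current section, -1 = none */

static int page_busy(struct section *s, unsigned long p)
{
	return s->map[p / 8] & (1 << (p % 8));
}

/* Pick a random free page in section @i; linear-probe on collision. */
static uintptr_t alloc_page_in(int i)
{
	struct section *s = &secs[i];
	unsigned long p = (unsigned long)rand() % PAGES_PER_SEC;

	while (page_busy(s, p))
		p = (p + 1) % PAGES_PER_SEC;
	s->map[p / 8] |= 1 << (p % 8);
	s->used++;
	return ((uintptr_t)i << PUD_SHIFT) | (p << PAGE_SHIFT);
}

static uintptr_t alloc_page(void)
{
	/*
	 * Switch sections once the current one is more than 50% full;
	 * the 50% cap guarantees alloc_page_in() always finds a page.
	 */
	if (cur < 0 || secs[cur].used > PAGES_PER_SEC / 2)
		cur = rand() % NSECTIONS;
	return alloc_page_in(cur);
}

int main(void)
{
	srand((unsigned)time(NULL));
	for (int i = 0; i < 8; i++)
		printf("allocated page at %#lx\n",
		       (unsigned long)alloc_page());
	return 0;
}

[Successive allocations share one section, so page tables stay dense,
while the section itself (and each page within it) is randomized, which
is the middle ground between full randomisation and the current scheme
that the message argues for.]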