Message-ID: <20201008170728.GK20115@casper.infradead.org>
Date: Thu, 8 Oct 2020 18:07:28 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Topi Miettinen <toiwoton@...il.com>
Cc: linux-hardening@...r.kernel.org, akpm@...ux-foundation.org,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH RESEND v2] mm: Optional full ASLR for mmap() and mremap()

On Thu, Oct 08, 2020 at 07:54:08PM +0300, Topi Miettinen wrote:
> +3 Additionally enable full randomization of memory mappings created
> +  with mmap(NULL, ...). With 2, the base of the VMA used for such
> +  mappings is random, but the mappings are created in predictable
> +  places within the VMA and in sequential order. With 3, new VMAs
> +  are created to fully randomize the mappings. Also mremap(...,
> +  MREMAP_MAYMOVE) will move the mappings even if not necessary.
> +
> +  On 32 bit systems this may cause problems due to increased VM
> +  fragmentation if the address space gets crowded.

On all systems, it will reduce performance and increase memory usage
due to less efficient use of page tables and inability to merge
adjacent VMAs with compatible attributes.

> +	if ((flags & MREMAP_MAYMOVE) && randomize_va_space >= 3) {
> +		/*
> +		 * Caller is happy with a different address, so let's
> +		 * move even if not necessary!
> +		 */
> +		new_addr = arch_mmap_rnd();
> +
> +		ret = mremap_to(addr, old_len, new_addr, new_len,
> +				&locked, flags, &uf, &uf_unmap_early,
> +				&uf_unmap);
> +		goto out;
> +	}
> +
> +

Overly enthusiastic newline
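For illustration only (not part of the patch): a minimal userspace sketch
that maps a few anonymous pages and prints their addresses. Under today's
randomize_va_space=2 the returned addresses are typically adjacent within
the mmap base; under the proposed value 3 they would be expected to land
at unrelated, fully randomized locations.

	/* sketch: observe placement of consecutive mmap(NULL, ...) calls */
	#include <stdio.h>
	#include <stdlib.h>
	#include <sys/mman.h>

	int main(void)
	{
		for (int i = 0; i < 4; i++) {
			void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
				       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
			if (p == MAP_FAILED) {
				perror("mmap");
				return EXIT_FAILURE;
			}
			/* adjacent addresses suggest sequential placement;
			 * scattered addresses suggest full randomization */
			printf("mapping %d at %p\n", i, p);
		}
		return 0;
	}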