Date: Sun, 25 Jun 2017 12:56:21 -0700
From: Kees Cook <keescook@...omium.org>
To: Christoph Lameter <cl@...ux.com>, Andrew Morton <akpm@...ux-foundation.org>
Cc: Laura Abbott <labbott@...hat.com>, Daniel Micay <danielmicay@...il.com>,
	Pekka Enberg <penberg@...nel.org>, David Rientjes <rientjes@...gle.com>,
	Joonsoo Kim <iamjoonsoo.kim@....com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Ingo Molnar <mingo@...nel.org>, Josh Triplett <josh@...htriplett.org>,
	Andy Lutomirski <luto@...nel.org>,
	Nicolas Pitre <nicolas.pitre@...aro.org>, Tejun Heo <tj@...nel.org>,
	Daniel Mack <daniel@...que.org>,
	Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
	Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
	Helge Deller <deller@....de>, Rik van Riel <riel@...hat.com>,
	LKML <linux-kernel@...r.kernel.org>, Linux-MM <linux-mm@...ck.org>,
	"kernel-hardening@...ts.openwall.com" <kernel-hardening@...ts.openwall.com>
Subject: Re: [PATCH v2] mm: Add SLUB free list pointer obfuscation

On Thu, Jun 22, 2017 at 6:50 PM, Kees Cook <keescook@...omium.org> wrote:
> This SLUB free list pointer obfuscation code is modified from Brad
> Spengler/PaX Team's code in the last public patch of grsecurity/PaX based
> on my understanding of the code. Changes or omissions from the original
> code are mine and don't reflect the original grsecurity/PaX code.
>
> This adds a per-cache random value to SLUB caches that is XORed with
> their freelist pointers. This adds nearly zero overhead and frustrates the
> very common heap overflow exploitation method of overwriting freelist
> pointers. A recent example of the attack is written up here:
> http://cyseclabs.com/blog/cve-2016-6187-heap-off-by-one-exploit

BTW, to quantify "nearly zero overhead", I ran multiple 200-run cycles
of "hackbench -g 20 -l 1000", and saw:

before:
	mean     10.11882499999999999995
	variance  .03320378329145728642
	stdev     .18221905304181911048

after:
	mean     10.12654000000000000014
	variance  .04700556623115577889
	stdev     .21680767106160192064

The difference gets lost in the noise, but if the above is sensible,
it's 0.07% slower. ;)

-Kees

-- 
Kees Cook
Pixel Security
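On the overhead figure: from the numbers above, the relative slowdown works
out to (10.12654 - 10.118825) / 10.118825 ~= 0.00076, i.e. roughly 0.07%,
which is far smaller than either run's ~0.2 s standard deviation (about 2%
of the mean); hence the difference being "lost in the noise".

For reference, below is a minimal userspace sketch of the XOR scheme the
patch describes: each cache holds a random secret chosen at creation time,
and freelist pointers are XORed with that secret before being stored inside
free objects. The struct and function names (toy_cache, freelist_encode,
freelist_decode) are illustrative only, not the patch's actual identifiers,
and the toy secret comes from rand() rather than the kernel's RNG:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Toy stand-in for SLUB's per-cache state; only the secret matters here. */
struct toy_cache {
	uintptr_t random;	/* per-cache secret, picked at cache creation */
};

/* Obfuscate a freelist pointer before storing it in a free object. */
static void *freelist_encode(const struct toy_cache *s, void *ptr)
{
	return (void *)((uintptr_t)ptr ^ s->random);
}

/* XOR is its own inverse, so the same operation recovers the pointer. */
static void *freelist_decode(const struct toy_cache *s, void *stored)
{
	return (void *)((uintptr_t)stored ^ s->random);
}

int main(void)
{
	struct toy_cache s;
	char object[64];
	void *next_free = object;	/* pretend this is the next free object */
	void *stored;

	/* Toy secret only; the real patch would use the kernel's RNG. */
	srand((unsigned)time(NULL));
	s.random = ((uintptr_t)rand() << 16) ^ (uintptr_t)rand();

	stored = freelist_encode(&s, next_free);

	printf("real next-free pointer: %p\n", next_free);
	printf("value stored in object: %p\n", stored);
	printf("decoded on allocation:  %p\n", freelist_decode(&s, stored));
	return 0;
}

The point of the scheme: an attacker who overwrites the stored value with a
heap overflow no longer steers the next allocation, because without knowing
the per-cache secret the XOR turns the forged value into a garbage pointer,
typically crashing rather than handing out an attacker-chosen address.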