Date:	Tue, 6 Oct 2015 09:55:33 +0100
From:	Linus Torvalds <>
To:	"Eric W. Biederman" <>
Cc:	Michal Hocko <>,
	Tetsuo Handa <>,
	David Rientjes <>,
	Oleg Nesterov <>,
	Kyle Walker <>,
	Christoph Lameter <>,
	Andrew Morton <>,
	Johannes Weiner <>,
	Vladimir Davydov <>,
	linux-mm <>,
	Linux Kernel Mailing List <>,
	Stanislav Kozina <>
Subject: Re: can't oom-kill zap the victim's memory?

On Tue, Oct 6, 2015 at 9:49 AM, Linus Torvalds
<> wrote:
> The basic fact remains: kernel allocations are so important that
> rather than fail, you should kill user space. Only kernel allocations
> that *explicitly* know that they have fallback code should fail, and
> they should just do the __GFP_NORETRY.
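
[ For concreteness, a minimal sketch of that "explicit fallback" pattern;
illustrative only, the helper name is made up:

	#include <linux/slab.h>
	#include <linux/vmalloc.h>

	/*
	 * The caller has an explicit fallback, so the physically
	 * contiguous kmalloc() attempt is allowed to fail fast
	 * instead of looping or invoking the OOM killer.
	 */
	static void *alloc_with_fallback(size_t size)
	{
		void *p = kmalloc(size, GFP_KERNEL | __GFP_NORETRY);

		if (!p)
			p = vmalloc(size);	/* virtually contiguous fallback */
		return p;
	}

The matching free then has to check is_vmalloc_addr() to decide between
vfree() and kfree(). ]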

To be clear: "big" orders (I forget if the limit is at order-3 or
order-4) do fail much more aggressively. But no, we do not limit
retries to just order-0, because even small kmalloc sizes often end up
doing order-1 or order-2 allocations simply because of memory packing
issues (i.e. trying to pack into a single page wastes too much memory
if the allocation sizes don't come out right).

So no, order-0 isn't special. Orders 1 and 2 are rather important too.
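
[ For instance, take a hypothetical 1100-byte object: an order-0 page
(4096 bytes) fits 3 of them and wastes 796 bytes, roughly 19%, while an
order-1 allocation (8192 bytes) fits 7 and wastes only 492 bytes, about
6%. So the slab allocator will quite reasonably pick order-1 there. ]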

[ Checking /proc/slabinfo: it looks like several slabs are order-3,
for things like files_cache, signal_cache and sighand_cache, at least
on my machine. So I think that up to and including order-3 we
basically have to assume "we'll need to shrink user space aggressively
unless we have an explicit fallback for the allocation". ]
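
[ If you want to check your own machine, here is a small user-space
sketch; it assumes the usual /proc/slabinfo v2.x column layout
("name active_objs num_objs objsize objperslab pagesperslab : ...")
and usually needs root to read the file:

	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		char line[512], name[64];
		unsigned long active, num, objsize;
		unsigned int objperslab, pagesperslab;
		FILE *f = fopen("/proc/slabinfo", "r");

		if (!f) {
			perror("fopen /proc/slabinfo");
			return 1;
		}
		while (fgets(line, sizeof(line), f)) {
			/* skip the version line and the column header */
			if (line[0] == '#' || !strncmp(line, "slabinfo", 8))
				continue;
			if (sscanf(line, "%63s %lu %lu %lu %u %u",
				   name, &active, &num, &objsize,
				   &objperslab, &pagesperslab) != 6)
				continue;
			if (pagesperslab >= 8)	/* 8 pages == order-3 */
				printf("%-24s objsize=%lu pages/slab=%u\n",
				       name, objsize, pagesperslab);
		}
		fclose(f);
		return 0;
	}
]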
