Date:   Fri, 28 Sep 2018 16:41:42 +0100
From:   Alan Cox <gnomes@...rguk.ukuu.org.uk>
To:     Marcus Linsner <constantoverride@...il.com>
Cc:     linux-kernel@...r.kernel.org
Subject: Re: Howto prevent kernel from evicting code pages ever? (to avoid
 disk thrashing when about to run out of RAM)

On Wed, 22 Aug 2018 11:25:35 +0200
Marcus Linsner <constantoverride@...il.com> wrote:

> Hi. How can I make the kernel keep (lock?) all code pages in RAM so
> that kswapd0 won't evict them when the system is under low-memory
> conditions?
> 
> The purpose of this is to prevent the kernel from causing lots of
> disk reads (effectively freezing the whole system) when it is about
> to run out of RAM, even when there is no swap enabled, but well
> before (minutes of real time) the OOM killer triggers to kill the
> offending process (e.g. ld)!

Having no swap is not helping you at all.

In Linux you can do several things. Firstly, add some swap - even a
swap file. If you have no swap, you fill memory with pages that are
not backed by disk, and the kernel has to pick less and less optimal
things to evict, so it begins to thrash. Even slowish swap is better
than no swap, as it gives the kernel somewhere to dump little-used
data pages.
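
As a rough illustration of the syscall behind swapon(8) - not a
substitute for the usual mkswap/swapon commands - here is a minimal C
sketch that enables an already-prepared swap file. The /swapfile path
is made up; the file must already exist and have been formatted with
mkswap(8), and the caller needs CAP_SYS_ADMIN:

/* Minimal sketch: enable a swap file that has already been created
 * and formatted with mkswap(8).  The path is only an example and
 * this needs CAP_SYS_ADMIN. */
#include <stdio.h>
#include <sys/swap.h>

int main(void)
{
        if (swapon("/swapfile", 0) != 0) {
                perror("swapon");
                return 1;
        }
        return 0;
}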

You can tune the OOM killer to taste and you can even guide it on what to
shoot first.
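
The per-process knob for that is /proc/<pid>/oom_score_adj, which
runs from -1000 (never kill this task) to +1000 (kill it first). A
rough sketch, with the helper name and the +1000 value purely
illustrative, that marks the calling process as the preferred victim:

/* Sketch: raise a process's oom_score_adj so the OOM killer shoots
 * it first.  -1000 exempts a task, +1000 puts it top of the list. */
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

static int set_oom_score_adj(pid_t pid, int adj)
{
        char path[64];
        FILE *f;

        snprintf(path, sizeof(path), "/proc/%d/oom_score_adj", (int)pid);
        f = fopen(path, "w");
        if (!f)
                return -1;
        fprintf(f, "%d\n", adj);
        return fclose(f);
}

int main(void)
{
        /* sacrifice ourselves first if the machine runs out of memory */
        return set_oom_score_adj(getpid(), 1000) ? 1 : 0;
}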

You can use cgroups to constrain the resources that a group of
processes is allowed to use.
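
For example, with a cgroup v2 hierarchy mounted at /sys/fs/cgroup and
the memory controller enabled, something along these lines puts the
current process (and whatever it execs) under a hard memory cap, so a
runaway build gets killed inside its own group instead of dragging
the whole machine into thrashing. The group name and the 8 GiB figure
are made up:

/* Sketch: create a cgroup with a hard memory limit and move the
 * calling process into it.  Assumes cgroup v2 at /sys/fs/cgroup with
 * the memory controller enabled; needs root. */
#include <errno.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

static int write_str(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");

        if (!f)
                return -1;
        fputs(val, f);
        return fclose(f);
}

int main(void)
{
        char buf[32];

        if (mkdir("/sys/fs/cgroup/build", 0755) && errno != EEXIST) {
                perror("mkdir");
                return 1;
        }
        /* hard cap: 8 GiB */
        if (write_str("/sys/fs/cgroup/build/memory.max", "8589934592\n"))
                return 1;
        /* move ourselves (and future children) into the group */
        snprintf(buf, sizeof(buf), "%d\n", (int)getpid());
        if (write_str("/sys/fs/cgroup/build/cgroup.procs", buf))
                return 1;
        /* exec the memory-hungry job from here, now under the cap */
        return 0;
}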

You can play with no-overcommit mode, although that is usually more
about 'cannot fail' embedded applications. In that mode the kernel
tightly constrains how much memory it is willing to promise beyond
what it can actually back. It's very conservative, and you end up
setting aside a lot of 'just in case' wasted resource, although you
can tune how far you let it stretch the real resources.
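
Concretely that is vm.overcommit_memory=2 (strict accounting), with
vm.overcommit_ratio deciding how much of RAM, on top of swap, the
kernel will promise out. A sketch - system-wide, needs root, and the
80% below is only an example of tuning how far you stretch the real
resources:

/* Sketch: turn on strict (no-)overcommit accounting.  In mode 2 the
 * commit limit is swap + overcommit_ratio% of RAM, so allocations
 * beyond that fail instead of being overcommitted. */
#include <stdio.h>

static int write_str(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");

        if (!f)
                return -1;
        fputs(val, f);
        return fclose(f);
}

int main(void)
{
        if (write_str("/proc/sys/vm/overcommit_memory", "2\n"))
                return 1;
        if (write_str("/proc/sys/vm/overcommit_ratio", "80\n"))
                return 1;
        return 0;
}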

To be fair, Linux *is* really bad at handling this case. What other
systems did (being older and from the days when RAM couldn't
reasonably be assumed infinite) was twofold. The first was, under
high swap load, to switch to swapping out entire processes, which
with all the shared resources and fast I/O of today isn't quite so
relevant. The second was to ensure a process got a certain amount of
real CPU time before its pages could be booted out again (and it
would then boot out lots of them). That turns the thrashing into
forward progress, but it still feels unpleasant. However, you still
need swap, or you don't have anywhere to boot out all the dirty
non-code pages in order to make progress.

There is a reason swap exists. If you don't have enough RAM to run
smoothly without swap, add swap (or RAM). Even then some things usually
need swap - I've got things that make the compiler consume over 16GB
building one file. With swap it's fine even on a 4GB machine.

Alan
