Date:	Mon, 10 May 2010 14:18:44 -0700
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	"Alexander Stohr" <Alexander.Stohr@....de>
Cc:	linux-kernel@...r.kernel.org, trond.myklebust@....uio.no,
	riel@...hat.com, major@...x.net
Subject: Re: [BUG?] vfs_cache_pressure=0 does not free inode caches

On Mon, 10 May 2010 19:26:21 +0200
"Alexander Stohr" <Alexander.Stohr@....de> wrote:

> this is a follow up to:
> http://lkml.indiana.edu/hypermail/linux/kernel/0904.1/03026.html
> 
> > The server is going to die a slow death,
> > all user space memory is swapped out,
> > then all processes are OOM killed 
> > until it dies from complete memory exhaustion."
> 
> > a cache is supposed to be a cache and not a memory hog
> 
> i'm running an embedded system with NFS as my working area.
> the system has only a little RAM left over; every MiB counts.
> 
> my current best guess for resolving low-memory situations
> is a manual one (no, i could not see any smart kernel reaction
> with that relatively old but patched 2.6.18 kernel):
> 
> echo 100000 >/proc/sys/vm/vfs_cache_pressure
> sync
> echo 1 >/proc/sys/vm/drop_caches
> echo 2 >/proc/sys/vm/drop_caches
> 
> any hints on that?
> is this still an issue in current kernels
> or is this already addressed in some way?
> 

I'm not sure what to say, really.

If you tell the kernel not to reclaim inode/dentry caches then it will
do what you asked.  It _sounds_ like you're looking for more aggressive
reclaim of the VFS caches when the system is getting low on memory. 
Perhaps this can be done by _increasing_ vfs_cache_pressure.  But the
kernel should wring the last drop out of the VFS caches before
declaring OOM anyway - if it isn't doing that, we should fix it.
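For reference, the knob can be raised like this; the value below is purely
illustrative, not a recommendation from this thread (a sketch assuming a
kernel that exposes /proc/sys/vm/vfs_cache_pressure, run as root):

```shell
# Default is 100; values above 100 make the kernel reclaim dentries and
# inodes more aggressively relative to pagecache. 10000 is illustrative.
cat /proc/sys/vm/vfs_cache_pressure            # show the current setting
echo 10000 > /proc/sys/vm/vfs_cache_pressure   # needs root
```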

Perhaps you could tell us exactly what behaviour you're observing, and
how it differs from what you'd like to see.

> 
> 
> here is the link to the initial patch set applied to 2.6.8:
> http://git.kernel.org/?p=linux/kernel/git/torvalds/old-2.6-bkcvs.git;a=commit;h=95afb3658a8217ff2c262e202601340323ef2803
> 
> some other people spotting similar effects:
> http://rackerhacker.com/2008/12/03/reducing-inode-and-dentry-caches-to-keep-oom-killer-at-bay/

That page says "If you are writing data at the time you run these
commands, you'll actually be dumping the data out of the filesystem
cache before it reaches the disk, which could lead to very bad things".
That had better not be true!  That would be a bad bug.  drop_caches
only drops stuff which has been written back.
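In other words, the safe (and maximally effective) pattern is the one
Alexander already uses: flush dirty data first, then drop. A sketch,
assuming a kernel new enough to expose /proc/sys/vm/drop_caches
(2.6.16+), run as root:

```shell
sync                                # write dirty pages back to disk first
echo 1 > /proc/sys/vm/drop_caches   # 1 = free clean pagecache
echo 2 > /proc/sys/vm/drop_caches   # 2 = free reclaimable dentries/inodes
# (echo 3 frees both; dirty or in-use objects are never discarded)
```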