Message-Id: <200903111439.57510.thomas.schoebel-theuer@1und1.de>
Date:	Wed, 11 Mar 2009 14:39:56 +0100
From:	"Thomas Schoebel-Theuer" <thomas.schoebel-theuer@...d1.de>
To:	jack marrow <jackmarrow2@...il.com>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: Memory usage per memory zone

On Wednesday, 11 March 2009 11:41:43, jack marrow wrote:
> I have a box where the oom-killer is killing processes due to running
> out of memory in zone_normal. I can see using slabtop that the inode
> caches are using up lots of memory and guess this is the problem, so I
> have cleared them by echoing to drop_caches.

Hi Jack,

my experience with plain old 2.6.24 on 32-bit _production_ boxes was that
under heavy load and after >30 days of uptime, some of them saw a sudden
surge of oom-killer invocations until those boxes died. The standard kernel
memory statistics looked much the same as yours (and I suspect they may have
been wrong, or at least misleading, but I have neither checked that nor
tried to fix it).
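
For reference, the cache drop you mention is usually done like this
(assuming a kernel new enough to have drop_caches, i.e. >= 2.6.16; the
value 2 targets the reclaimable slab objects such as dentries and inodes):

  sync                               # write back dirty data first
  echo 2 > /proc/sys/vm/drop_caches  # 2 = reclaimable slab (dentries/inodes)

Note that this only frees clean, reclaimable objects; it is a diagnostic
aid, not a fix.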

> is it possible to use slabtop
> (or any other way) to view ram usage per zone so I can pick out the
> culprit?

Try the attached experimental hack, which can give you some insight into
what is really going on in the _physical_ memory. Since it does not
allocate any memory itself for the purpose of displaying the memory
patterns it examines, you have to allocate a large enough buffer in
userspace. Don't use cat; use something like dd with parameters such as
bs=4M (as mentioned in the comment), as shown below. You will probably
have to adjust the patch for newer kernel versions, and/or fix some sysctl
table checks if you want to get it upstream (I will not). And, of course,
you can visualize more or other flags as well.
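
For example, something along these lines (assuming the patch applied
cleanly and the file shows up under the name used further down):

  # one large read per pass; cat's small read buffer is not enough
  dd if=/proc/sys/vm/mempattern bs=4M | less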

After gaining some insight with /proc/sys/vm/mempattern, and after
developing further experimental patches which successfully reduced
fragmentation (but ultimately only _delayed_ the oom problems without
_fundamentally_ resolving them), the final solution was simply to use
CONFIG_VMSPLIT_2G or even CONFIG_VMSPLIT_1G in order to overcome the
artificial shortage of zone_normal (with the default 3G/1G split on i386,
zone_normal is capped at roughly 896 MB of lowmem).
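
On 32-bit x86 this amounts to changing the memory split in the kernel
config, roughly like this:

  # Processor type and features -> Memory split
  # CONFIG_VMSPLIT_3G is not set
  CONFIG_VMSPLIT_2G=y
  CONFIG_PAGE_OFFSET=0x80000000

The trade-off: each process loses user address space (2 GB instead of 3 GB
with VMSPLIT_2G, only 1 GB with VMSPLIT_1G), while the kernel gains lowmem,
i.e. a larger zone_normal.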

This supports the old wisdom that an OS cannot give you resources it simply
does not possess... Just make sure you have enough resources for your
working set. That's all.

In the hope of being helpful,

Thomas

[Attachment: "fragmentation-visualisation-v2.patch" (text/x-diff, 5441 bytes)]
