Message-ID: <D2E35753.2BB9D%khalidm@cisco.com>
Date:	Fri, 12 Feb 2016 18:01:53 +0000
From:	"Khalid Mughal (khalidm)" <khalidm@...co.com>
To:	Rik van Riel <riel@...hat.com>,
	"Daniel Walker (danielwa)" <danielwa@...co.com>,
	Dave Hansen <dave.hansen@...el.com>,
	Johannes Weiner <hannes@...xchg.org>
CC:	Alexander Viro <viro@...iv.linux.org.uk>,
	Michal Hocko <mhocko@...e.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	"xe-kernel@...ernal.cisco.com" <xe-kernel@...ernal.cisco.com>
Subject: Re: computing drop-able caches

I did an experiment on our system. I added a small kernel patch, as
mentioned by Daniel in the first email of this thread, to compute the
droppable page cache without actually dropping it. Using this value I
computed the available memory by adding the droppable-page count to the
MemFree count. I then used a test application to analyze the difference
between MemAvailable and the dropcacheinfo output.
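
For reference, the "Available memory" number in the attached data is
computed essentially as follows (a sketch, not the actual patch code;
the names are illustrative, the droppable-page count is the counter
added by the patch, and the system uses 4 kB pages):

        /* droppable_pages: page-cache pages the patch counts as
         * droppable, without actually dropping them */
        available_kb = memfree_kb + droppable_pages * (4096 / 1024);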
The application creates 32 threads. Each allocates a 64 MB block using
malloc(), with a 5-second interval between allocations, which allows me
to gather data. After all allocations are done, the threads write data
to these blocks using memset(). This is also done incrementally,
allowing me to track meminfo and the dropcacheinfo output.
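
For context, here is a rough equivalent of the test program (a sketch of
what it does, not the exact code I ran; the staggered sleep(), the
barrier, and the 0xaa fill pattern are illustrative details):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NTHREADS  32
#define BLOCKSIZE (64UL * 1024 * 1024)  /* 64 MB per thread */

static void *blocks[NTHREADS];
static pthread_barrier_t barrier;

static void *worker(void *arg)
{
        long id = (long)arg;

        /* stagger allocations ~5 seconds apart so meminfo and the
         * droppable-cache counter can be sampled between them */
        sleep(id * 5);
        blocks[id] = malloc(BLOCKSIZE);
        if (!blocks[id]) {
                fprintf(stderr, "thread %ld: malloc failed\n", id);
                exit(1);
        }

        /* wait until every thread has allocated before dirtying pages */
        pthread_barrier_wait(&barrier);

        /* write phase, also paced: memset() faults in and dirties one
         * 64 MB block roughly every 5 seconds */
        sleep(id * 5);
        memset(blocks[id], 0xaa, BLOCKSIZE);
        return NULL;
}

int main(void)
{
        pthread_t tids[NTHREADS];
        long i;

        pthread_barrier_init(&barrier, NULL, NTHREADS);
        for (i = 0; i < NTHREADS; i++)
                pthread_create(&tids[i], NULL, worker, (void *)i);
        for (i = 0; i < NTHREADS; i++)
                pthread_join(tids[i], NULL);
        return 0;
}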
If you look at the attached PDF, you will notice that OOM messages start
to appear when MemAvailable shows 253 MB (259228 kB) free, MemFree shows
13.5 MB (14008 kB) free, and the dropcache-based "Available memory"
calculation shows 21 MB (21720 kB) free.

So it appears that MemAvailable is not as accurate, especially if the
value is used to warn the user that the system is running low on memory.

-KM



On 2/11/16, 2:11 PM, "Rik van Riel" <riel@...hat.com> wrote:

>On Wed, 2016-02-10 at 11:11 -0800, Daniel Walker wrote:
>> On 02/10/2016 10:13 AM, Dave Hansen wrote:
>> > On 02/10/2016 10:04 AM, Daniel Walker wrote:
>> > > > [Linux_0:/]$ echo 3 > /proc/sys/vm/drop_caches
>> > > > [Linux_0:/]$ cat /proc/meminfo
>> > > > MemTotal:        3977836 kB
>> > > > MemFree:         1095012 kB
>> > > > MemAvailable:    1434148 kB
>> > > I suspect MemAvailable takes into account more than just the
>> > > droppable
>> > > caches. For instance, reclaimable slab is included, but I don't
>> > > think
>> > > drop_caches drops that part.
>> > There's a bit for page cache and a bit for slab, see:
>> > 
>> > 	https://kernel.org/doc/Documentation/sysctl/vm.txt
>> > 
>> > 
>> 
>> Ok, then this looks like a defect. I would think MemAvailable would
>> always be smaller than MemFree (after echo 3 >
>> /proc/sys/vm/drop_caches), unless there is something else being
>> accounted for that we aren't aware of.
>
>echo 3 > /proc/sys/vm/drop_caches will only
>drop unmapped page cache, IIRC
>
>The system may still have a number of page
>cache pages left that are mapped in processes,
>but will be reclaimable if the VM needs the
>memory for something else.
>
>-- 
>All rights reversed


Download attachment "dropcacheinfo_data.pdf" of type "application/pdf" (48742 bytes)
