Date:	Mon, 07 Dec 2009 13:17:36 -0500
From:	Rik van Riel <riel@...hat.com>
To:	Christian Ehrhardt <ehrhardt@...ux.vnet.ibm.com>
CC:	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Elladan <elladan@...imo.com>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Lee Schermerhorn <lee.schermerhorn@...com>,
	Johannes Weiner <hannes@...xchg.org>,
	Andrew Morton <akpm@...ux-foundation.org>, epasch@...ibm.com,
	Martin Schwidefsky <schwidefsky@...ibm.com>,
	Heiko Carstens <heiko.carstens@...ibm.com>
Subject: Re: Increased Buffers due to patch 56e49d (vmscan: evict use-once
 pages first), but why exactly?

On 12/07/2009 09:36 AM, Christian Ehrhardt wrote:
> Hi,
> commit 56e49d - "vmscan: evict use-once pages first" changed the behavior
> of memory management quite a bit, which should be fine.
> But while tracking down a performance regression I was on the wrong path
> for a while, suspecting this patch was causing the regression.
> Fortunately that was not the case, but I got some interesting data which
> I couldn't completely explain, and I thought it might be worth getting it
> clarified publicly in case someone else looks at similar data again :-)
>
> It is all about the increased amount of "Buffers" accounted as active
> while losing the same portion from "Cache" accounted as inactive in
> /proc/meminfo.
> I understand that with the patch applied there will be some more
> pressure on file pages until the balance of active/inactive file pages
> is reached.
> But I don't see how this favors buffers over cache pages (I assume
> dropping inactive pages before active ones was the case all along, so
> that can't be the only difference between buffers and cache).

Well, "Buffers" is the same kind of memory as "Cached", with
the only difference being that "Cached" is associated with
files, while "Buffers" is associated with a block device.

This means that "Buffers" is more likely to contain filesystem
metadata, while "Cached" is more likely to contain file data.
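 
For anyone who wants to watch this in practice, the split is visible
directly in /proc/meminfo. A minimal user-space sketch (only the field
names come from the kernel; the program itself is just illustrative):

/* meminfo_watch.c - print the page cache split from /proc/meminfo. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f) {
		perror("/proc/meminfo");
		return 1;
	}

	while (fgets(line, sizeof(line), f)) {
		/* Buffers: block-device pages (mostly fs metadata).
		 * Cached:  file-backed pages (mostly file data).
		 * Active(file)/Inactive(file): the two file LRU lists
		 * that the patch balances against each other.
		 */
		if (!strncmp(line, "Buffers:", 8) ||
		    !strncmp(line, "Cached:", 7) ||
		    !strncmp(line, "Active(file):", 13) ||
		    !strncmp(line, "Inactive(file):", 15))
			fputs(line, stdout);
	}
	fclose(f);
	return 0;
}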

Not putting pressure on the active file list while there is a
large number of inactive file pages means that pages which were
accessed more than once are better protected from being pushed
out by pages that were only accessed once.

My guess is that "Buffers" is larger because the VM now caches
more (frequently used) filesystem metadata, at the expense of
caching less (used once) file data.
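 
The mechanism is the inactive_file_is_low() check the commit
introduced in mm/vmscan.c. Paraphrased from memory (roughly the
2.6.30 shape of the code, not a verbatim quote):

/* Scan the active file list only while it is larger than the
 * inactive file list, so frequently used pages sit out reclaim
 * as long as there are enough use-once pages left to evict.
 */
static int inactive_file_is_low_global(struct zone *zone)
{
	unsigned long active, inactive;

	active   = zone_page_state(zone, NR_ACTIVE_FILE);
	inactive = zone_page_state(zone, NR_INACTIVE_FILE);

	return active > inactive;
}

and, in shrink_list(), active file pages are only deactivated once
the inactive file list has become the smaller of the two:

	if (lru == LRU_ACTIVE_FILE && inactive_file_is_low(zone, sc)) {
		shrink_active_list(nr_to_scan, zone, sc, priority, file);
		return 0;
	}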

> The scenario I'm running is a low memory system (256M total), that does
> sequential I/O with parallel iozone processes.

This indeed sounds like the kind of workload that would only
access the file data very infrequently, while accessing the
filesystem metadata all the time.
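 
To make "use once" concrete: a sequential reader like iozone touches
each data page exactly once, while the filesystem consults the same
metadata (inode, bitmaps, indirect blocks) on every request. A trivial
sketch of the data-side pattern (the path is hypothetical):

/* seqread.c - touch every page of a file exactly once, the
 * "use-once" pattern that the patch lets the VM evict early.
 */
#include <stdio.h>

int main(void)
{
	char buf[4096];		/* one page per read on most systems */
	FILE *f = fopen("/tmp/testfile", "rb");

	if (!f) {
		perror("open");
		return 1;
	}

	/* Each page enters the inactive file list, is never touched
	 * again, and is therefore a prime eviction candidate.
	 */
	while (fread(buf, 1, sizeof(buf), f) == sizeof(buf))
		;
	fclose(f);
	return 0;
}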

> But I can't really see in the code where buffers are favored over
> cached pages (it very probably makes sense to do so, as they might
> contain e.g. the inode data for the files in the cache).

You are right that the code does not favor either Buffers or
Cached over the other, but treats both kinds of pages the same.

I believe that you are just seeing the effect of code that
better protects the frequently accessed metadata from the
infrequently accessed data.

-- 
All rights reversed.