Date:	Thu, 30 Apr 2009 20:09:16 -0700
From:	Elladan <elladan@...imo.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Elladan <elladan@...imo.com>, riel@...hat.com,
	peterz@...radead.org, linux-kernel@...r.kernel.org, tytso@....edu,
	kosaki.motohiro@...fujitsu.com, linux-mm@...ck.org
Subject: Re: [PATCH] vmscan: evict use-once pages first (v2)

On Thu, Apr 30, 2009 at 05:45:36PM -0700, Andrew Morton wrote:
> On Thu, 30 Apr 2009 00:20:58 -0700
> Elladan <elladan@...imo.com> wrote:
> 
> > > Elladan, does this smaller patch still work as expected?
> > 
> > Rik, since the third patch doesn't work on 2.6.28 (without disabling a lot of
> > code), I went ahead and tested this patch.
> > 
> > The system does seem relatively responsive with this patch for the most part,
> > with occasional lag.  I don't see much evidence at least over the course of a
> > few minutes that it pages out applications significantly.  It seems about
> > equivalent to the first patch.
> > 
> > Given Andrew Morton's request that I track the Mapped: field in /proc/meminfo,
> > I went ahead and did that with this patch built into a kernel.  Compared to the
> > standard Ubuntu kernel, this patch keeps significantly more Mapped memory
> > around, and it shrinks at a slower rate after the test runs for a while.
> > Eventually, it seems to reach a steady state.
> > 
> > For example, with your patch, Mapped will often go for 30 seconds without
> > changing significantly.  Without your patch, it continuously lost about
> > 500-1000K every 5 seconds, and then jumped up again significantly when I
> > touched Firefox or other applications.  I do see some of that behavior with
> > your patch too, but it's much less significant.
> 
> Were you able to tell whether altering /proc/sys/vm/swappiness appropriately
> regulated the rate at which the mapped page count decreased?

I don't believe so.  I tested with swappiness=0 and =60, and in each case the
mapped pages continued to decrease.  I don't know at what rate though.  If
you'd like more precise data, I can rerun the test with appropriate logging.  I
admit my "Hey, latency is terrible and mapped pages are decreasing" testing is
somewhat unscientific.
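
Roughly, the logging I have in mind is something like this (untested sketch;
the 5-second interval, the -1 fallback, and the output format are just what
I'd reach for first):

#!/usr/bin/env python
# Untested sketch: periodically log the Mapped: line from /proc/meminfo
# together with the current vm.swappiness value, so the rate at which mapped
# memory shrinks can be compared across swappiness settings.
import time

def read_mapped_kb():
    # /proc/meminfo lines look like "Mapped:   123456 kB"
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("Mapped:"):
                return int(line.split()[1])
    return -1

def read_swappiness():
    with open("/proc/sys/vm/swappiness") as f:
        return int(f.read().strip())

if __name__ == "__main__":
    while True:
        print("%.0f swappiness=%d mapped_kb=%d"
              % (time.time(), read_swappiness(), read_mapped_kb()))
        time.sleep(5)  # arbitrary sampling interval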

I get the impression that VM regressions happen fairly regularly.  Does anyone
have good unit tests for this?  It seems like a difficult problem, since it's
partly based on access patterns and partly on timing.
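
For what it's worth, the kind of thing I would try (untested sketch; the
file path, sizes, and intervals below are only placeholders) is to stream a
file larger than RAM with use-once reads while repeatedly touching a mapped
region, and watch whether the touch latency climbs as Mapped: shrinks:

#!/usr/bin/env python
# Untested sketch of a use-once-IO-vs-mapped-pages reproducer.  STREAM_FILE
# and MAPPED_SIZE are placeholders; the file should be larger than RAM.
import mmap
import threading
import time

STREAM_FILE = "/tmp/bigfile"        # placeholder: pre-created file > RAM
MAPPED_SIZE = 256 * 1024 * 1024     # placeholder: 256 MB anonymous mapping
PAGE = 4096

def stream_reader():
    # Sequential, use-once reads, like a backup or a large copy.
    while True:
        with open(STREAM_FILE, "rb") as f:
            while f.read(1024 * 1024):
                pass

def main():
    buf = mmap.mmap(-1, MAPPED_SIZE)            # anonymous mapping
    buf[:] = b"\0" * MAPPED_SIZE                # fault every page in
    threading.Thread(target=stream_reader, daemon=True).start()
    while True:
        t0 = time.time()
        for off in range(0, MAPPED_SIZE, PAGE):
            buf[off]                            # touch one byte per page
        print("touch pass took %.2fs" % (time.time() - t0))
        time.sleep(5)

if __name__ == "__main__":
    main()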

-J
