Message-Id: <20090420135303.75471bc1.akpm@linux-foundation.org>
Date:	Mon, 20 Apr 2009 13:53:03 -0700
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	Johannes Weiner <hannes@...xchg.org>
Cc:	linux-mm@...ck.org, linux-kernel@...r.kernel.org, riel@...hat.com,
	hugh@...itas.com
Subject: Re: [patch 3/3][rfc] vmscan: batched swap slot allocation

On Mon, 20 Apr 2009 22:31:19 +0200
Johannes Weiner <hannes@...xchg.org> wrote:

> A test program creates an anonymous memory mapping the size of the
> system's RAM (2G).  It faults in all of its pages linearly, then kicks
> off 128 reclaimers (on 4 cores) that map, fault and unmap a total of
> 2G in parallel, thereby evicting the first mapping onto swap.
> 
> The time it then takes to fault the initial mapping back in from swap,
> again linearly, is measured; this shows how badly the 128 reclaimers
> scattered its pages across the swap space.
> 
>   Average over 5 runs, standard deviation in parens:
> 
>       swap-in          user            system            total
> 
> old:  74.97s (0.38s)   0.52s (0.02s)   291.07s (3.28s)   2m52.66s (0m1.32s)
> new:  45.26s (0.68s)   0.53s (0.01s)   250.47s (5.17s)   2m45.93s (0m2.63s)
> 
> where old is the current mmotm snapshot 2009-04-17-15-19 and new is
> the same snapshot with these three patches applied.
> 
> Test program attached.  Kernbench didn't show any differences on my
> single-core x86 laptop with 256MB RAM (poor thing).
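
The attached test program itself is not reproduced in this archive.  A
minimal sketch of a program along the lines described above might look
like the following; the 2G victim size, the 128 reclaimer processes and
all helper names here are assumptions, not the actual attachment:

/*
 * Minimal sketch of a swap-locality test along the lines described
 * above.  Sizes, counts and helpers are assumptions; this is not the
 * original attachment.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

#define VICTIM_SIZE	(2UL << 30)	/* size of RAM in the test setup (2G) */
#define NR_RECLAIMERS	128
#define CHUNK_SIZE	(VICTIM_SIZE / NR_RECLAIMERS)

static long page_size;

/* Fault in every page of a mapping linearly, one write per page. */
static void touch(char *mem, unsigned long size)
{
	unsigned long off;

	for (off = 0; off < size; off += page_size)
		mem[off] = 1;
}

static double now(void)
{
	struct timeval tv;

	gettimeofday(&tv, NULL);
	return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
	char *victim;
	double t0;
	int i;

	page_size = sysconf(_SC_PAGESIZE);

	/* 1. Create the victim mapping and fault it in linearly. */
	victim = mmap(NULL, VICTIM_SIZE, PROT_READ | PROT_WRITE,
		      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (victim == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	touch(victim, VICTIM_SIZE);

	/*
	 * 2. Reclaimers map, fault and unmap a total of 2G in parallel,
	 *    pushing the victim mapping out to swap.
	 */
	for (i = 0; i < NR_RECLAIMERS; i++) {
		if (fork() == 0) {
			char *mem = mmap(NULL, CHUNK_SIZE,
					 PROT_READ | PROT_WRITE,
					 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
			if (mem != MAP_FAILED) {
				touch(mem, CHUNK_SIZE);
				munmap(mem, CHUNK_SIZE);
			}
			_exit(0);
		}
	}
	while (wait(NULL) > 0)
		;

	/*
	 * 3. Time the linear swap-in of the victim; the more scattered
	 *    its swap slots, the longer this takes.
	 */
	t0 = now();
	touch(victim, VICTIM_SIZE);
	printf("swap-in: %.2fs\n", now() - t0);

	munmap(victim, VICTIM_SIZE);
	return 0;
}

The swap-in column in the table above corresponds to the time measured
in step 3.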

qsbench is pretty good at fragmenting swapspace.  It would be vaguely
interesting to see what effect you've had on its runtime.

I've found that qsbench's runtimes are fairly chaotic when it's
operating at the transition point between all-in-core and
madly-swapping, so a bit of thought and caution is needed.

I used to run it with

	./qsbench -p 4 -m 96

on a 256MB machine and it had sufficiently repeatable runtimes to be
useful.

There's a copy of qsbench in
http://userweb.kernel.org/~akpm/stuff/ext3-tools.tar.gz


I wonder what effect this patch has upon hibernate/resume performance.

