Message-ID: <880965bb-90af-4a0f-9971-6bb8eb9ba2b7@default>
Date: Thu, 10 Jan 2013 14:16:58 -0800 (PST)
From: Dan Magenheimer <dan.magenheimer@...cle.com>
To: Seth Jennings <sjenning@...ux.vnet.ibm.com>
Cc: Nitin Gupta <ngupta@...are.org>, Minchan Kim <minchan@...nel.org>,
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
Dan Magenheimer <dan.magenheimer@...cle.com>,
Robert Jennings <rcj@...ux.vnet.ibm.com>,
Jenifer Hopper <jhopper@...ibm.com>,
Mel Gorman <mgorman@...e.de>,
Johannes Weiner <jweiner@...hat.com>,
Rik van Riel <riel@...hat.com>,
Larry Woodman <lwoodman@...hat.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, devel@...verdev.osuosl.org,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: RE: [PATCHv2 8/9] zswap: add to mm/
> From: Seth Jennings [mailto:sjenning@...ux.vnet.ibm.com]
> Subject: [PATCHv2 8/9] zswap: add to mm/
>
> zswap is a thin compression backend for frontswap. It receives
> pages from frontswap and attempts to store them in a compressed
> memory pool, resulting in an effective partial memory reclaim and
> dramatically reduced swap device I/O.
>
> Additionally, in most cases, pages can be retrieved from this
> compressed store much more quickly than reading from traditional
> swap devices, resulting in faster performance for many workloads.
>
> This patch adds the zswap driver to mm/
>
> Signed-off-by: Seth Jennings <sjenning@...ux.vnet.ibm.com>
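(An aside for anyone following along who hasn't dug into
frontswap: a backend like zswap boils down to a handful of
store/load/invalidate hooks registered with frontswap.  Very
roughly, and from memory rather than from Seth's patch, so the
names here are invented and the signatures may not match his
code exactly:

  #include <linux/frontswap.h>
  #include <linux/mm_types.h>

  /* store: compress the page into the pool; a nonzero return
   * tells frontswap to fall back to the real swap device */
  static int example_store(unsigned type, pgoff_t offset,
                           struct page *page)
  {
          return -1;
  }

  /* load: decompress the stored copy back into @page;
   * nonzero means we don't have it */
  static int example_load(unsigned type, pgoff_t offset,
                          struct page *page)
  {
          return -1;
  }

  static void example_invalidate_page(unsigned type, pgoff_t offset) { }
  static void example_invalidate_area(unsigned type) { }
  static void example_init(unsigned type) { }

  static struct frontswap_ops example_ops = {
          .init = example_init,
          .store = example_store,
          .load = example_load,
          .invalidate_page = example_invalidate_page,
          .invalidate_area = example_invalidate_area,
  };

  /* at module init: frontswap_register_ops(&example_ops); */

The interesting part, and the part this thread is about, is of
course the policy inside those hooks.)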
I've implemented the equivalent of zswap_flush_*
in zcache. It looks much better than my earlier
attempt at similar code to move zpages to swap.
Nice work and thanks!
But... (isn't there always a "but";-)...
> +/*
> + * This limit is arbitrary for now until a better
> + * policy can be implemented. This is so we don't
> + * eat all of RAM decompressing pages for writeback.
> + */
> +#define ZSWAP_MAX_OUTSTANDING_FLUSHES 64
> + if (atomic_read(&zswap_outstanding_flushes) >
> + ZSWAP_MAX_OUTSTANDING_FLUSHES)
> + return;
From what I can see, zcache is in some circumstances more
aggressive in "flushing" (zcache calls it "unuse") and in
others less aggressive.  But with significant exercise, I can
always cause the kernel to OOM when it is under heavy memory
pressure and the flush/unuse code is being used.
Have you given any further thought to "a better policy"
(see the comment in the snippet above)? I'm going
to try a smaller number than 64 to see if the OOMs go
away, but choosing an arbitrary number for this throttling
doesn't seem like a good plan going forward.
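One incremental improvement might be to at least make the
limit a runtime tunable so different values can be tried
without rebuilding.  A minimal sketch of what I mean (the
names echo your snippet but are otherwise invented):

  #include <linux/module.h>
  #include <linux/atomic.h>

  /* throttle knob, settable at boot or via sysfs */
  static unsigned int zswap_max_outstanding_flushes = 64;
  module_param_named(max_outstanding_flushes,
                     zswap_max_outstanding_flushes, uint, 0644);

  /* flushes (decompress-and-writeback) currently in flight */
  static atomic_t zswap_outstanding_flushes = ATOMIC_INIT(0);

  /* true if we should back off instead of starting another flush */
  static bool zswap_flush_throttled(void)
  {
          return atomic_read(&zswap_outstanding_flushes) >
                 (int)zswap_max_outstanding_flushes;
  }

That still leaves the real policy question open, of course:
a number you can tune is better than a number you can't,
but it's still just a number.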
Thanks,
Dan
P.S. I know you, like me, often use something kernbench-ish to
exercise your code.  I've found that compiling a kernel,
then switching to another kernel directory, doing a git pull,
and compiling that kernel causes a lot of flushes/unuses
and the OOMs.  (This is with 1GB RAM, booting RHEL6 with a full GUI.)