Message-ID: <20131125180030.GA23396@cerebellum.variantweb.net>
Date: Mon, 25 Nov 2013 12:00:30 -0600
From: Seth Jennings <sjennings@...iantweb.net>
To: Dan Streetman <ddstreet@...e.org>
Cc: linux-mm@...ck.org, linux-kernel <linux-kernel@...r.kernel.org>,
Bob Liu <bob.liu@...cle.com>, Minchan Kim <minchan@...nel.org>,
Weijie Yang <weijie.yang@...sung.com>
Subject: Re: [PATCH v2] mm/zswap: change zswap to writethrough cache
On Fri, Nov 22, 2013 at 11:29:16AM -0600, Seth Jennings wrote:
> On Wed, Nov 20, 2013 at 02:49:33PM -0500, Dan Streetman wrote:
> > Currently, zswap is a writeback cache: stored pages are not sent
> > to the swap disk, and when zswap wants to evict old pages it must
> > first write them back to the swap cache/disk manually. This avoids
> > swap-out disk I/O up front, but only defers that disk I/O to the
> > writeback case (for pages that are evicted), adds the overhead of
> > having to decompress the evicted pages, and requires an additional
> > free page (to hold the decompressed page) at a time of likely high
> > memory pressure. Additionally, being writeback adds complexity to
> > zswap, which must perform the writeback on page eviction.
> >
> > This changes zswap to a writethrough cache by enabling
> > frontswap_writethrough() before registering, so that any
> > successful page store is also written to the swap disk. All the
> > writeback code is removed since it is no longer needed; the only
> > operation during a page eviction is now to remove the entry
> > from the tree and free it.
>
> I like it. It gets rid of a lot of nasty writeback code in zswap.
>
> I'll have to test before I ack, hopefully by the end of the day.
>
> Yes, this will increase writes to the swap device over the delayed
> writeback approach. I think it is a good thing though. I think it
> makes the difference between zswap and zram, both in operation and in
> application, more apparent. Zram is the better choice for embedded
> systems where write wear is a concern, while zswap is better if you
> need more flexibility to dynamically manage the compressed pool.
One thing I realized while doing my testing was that making zswap
writethrough also impacts synchronous reclaim. Zswap, as it is now,
makes the swapcache page clean during swap_writepage(), which allows
shrink_page_list() to reclaim it immediately. Making zswap writethrough
eliminates this advantage: swapcache pages must be scanned again
before they can be reclaimed, as with normal swapping.
Just something I am thinking about.
Seth