Message-ID: <20161111060644.GA24342@bbox>
Date: Fri, 11 Nov 2016 15:06:44 +0900
From: Minchan Kim <minchan@...nel.org>
To: Andrew Morton <akpm@...ux-foundation.org>
CC: Hyeoncheol Lee <cheol.lee@....com>, <yjay.kim@....com>,
<linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
Hugh Dickins <hughd@...gle.com>,
"Darrick J . Wong" <darrick.wong@...cle.com>
Subject: Re: [PATCH] mm: support anonymous stable page
Sorry for sending a wrong version. Here is the new one.
From 2d42ead9335cde51fd58d6348439ca03cf359ba2 Mon Sep 17 00:00:00 2001
From: Minchan Kim <minchan@...nel.org>
Date: Fri, 11 Nov 2016 15:02:57 +0900
Subject: [PATCH] mm: support anonymous stable page
During development of zram-swap asynchronous writeback, I found
strange corruption of compressed pages. Investigation revealed
that stable pages currently do not cover anonymous pages.
IOW, reuse_swap_page can reuse the page without waiting for
writeback completion, so it can corrupt data during
zram compression. It can affect every swap device which supports
asynchronous writeback and CRC checking, as well as zRAM.
Unfortunately, reuse_swap_page must be atomic, so we cannot
wait on writeback there; the approach in this patch is to
simply return false if we find the page needs a stable page.
Although it increases the memory footprint temporarily, that
happens rarely and the page should be reclaimed easily even
when it does. Also, it would be better than waiting for IO
completion, which is a critical path for application latency.
Cc: Hugh Dickins <hughd@...gle.com>
Cc: Darrick J. Wong <darrick.wong@...cle.com>
Signed-off-by: Minchan Kim <minchan@...nel.org>
---
mm/swapfile.c | 16 +++++++++++++++-
1 file changed, 15 insertions(+), 1 deletion(-)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 2210de290b54..ea591435d8e0 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -943,11 +943,21 @@ bool reuse_swap_page(struct page *page, int *total_mapcount)
count = page_trans_huge_mapcount(page, total_mapcount);
if (count <= 1 && PageSwapCache(page)) {
count += page_swapcount(page);
- if (count == 1 && !PageWriteback(page)) {
+ if (count != 1)
+ goto out;
+ if (!PageWriteback(page)) {
delete_from_swap_cache(page);
SetPageDirty(page);
+ } else {
+ struct address_space *mapping;
+
+ mapping = page_mapping(page);
+ if (bdi_cap_stable_pages_required(
+ inode_to_bdi(mapping->host)))
+ return false;
}
}
+out:
return count <= 1;
}
@@ -2180,6 +2190,7 @@ static struct swap_info_struct *alloc_swap_info(void)
static int claim_swapfile(struct swap_info_struct *p, struct inode *inode)
{
int error;
+ struct address_space *swapper_space;
if (S_ISBLK(inode->i_mode)) {
p->bdev = bdgrab(I_BDEV(inode));
@@ -2202,6 +2213,9 @@ static int claim_swapfile(struct swap_info_struct *p, struct inode *inode)
} else
return -EINVAL;
+ swapper_space = &swapper_spaces[p->type];
+ swapper_space->host = inode;
+
return 0;
}
--
2.7.4