Message-ID: <20190928113456.152742cf@bigdell>
Date: Sat, 28 Sep 2019 11:34:56 +0200
From: Vitaly Wool <vitalywool@...il.com>
To: Linux-MM <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-kernel@...r.kernel.org,
Markus Linnala <markus.linnala@...il.com>,
Dan Streetman <ddstreet@...e.org>,
Vlastimil Babka <vbabka@...e.cz>,
Stable <stable@...r.kernel.org>
Subject: [PATCH v2] z3fold: claim page in the beginning of free

There's a really hard-to-reproduce race in z3fold between
z3fold_free() and z3fold_reclaim_page(): z3fold_reclaim_page()
can claim the page after z3fold_free() has checked whether the
page was claimed, and z3fold_free() will then schedule that page
for compaction, which may in turn lead to random page faults
(since the page will have been reclaimed by then). Fix that by
claiming the page at the very beginning of z3fold_free() and by
not forgetting to clear the claim bit at the end.
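
For illustration (not part of the patch): below is a minimal
userspace sketch of the same claim-early/clear-late pattern, using
C11 atomics and pthreads. The fake_page structure and the
claim_page()/release_claim() helpers are made up for this sketch;
in z3fold the claim bit is PAGE_CLAIMED in page->private and the
test-and-set helper is the kernel's test_and_set_bit().

/* build: cc -std=c11 -pthread sketch.c */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <pthread.h>

struct fake_page {
	atomic_flag claimed;	/* stands in for the PAGE_CLAIMED bit */
	int payload;
};

/* Atomically claim the page; returns true if it was already claimed. */
static bool claim_page(struct fake_page *p)
{
	return atomic_flag_test_and_set(&p->claimed);
}

static void release_claim(struct fake_page *p)
{
	atomic_flag_clear(&p->claimed);
}

/* Models the free path: claim first, then decide what to do. */
static void *free_path(void *arg)
{
	struct fake_page *p = arg;
	bool page_claimed = claim_page(p);

	if (page_claimed)
		return NULL;	/* reclaim won the race; back off */

	p->payload = 0;		/* safe: no reclaimer can own the page now */
	release_claim(p);	/* clear the claim once we are done */
	return NULL;
}

/* Models the reclaim path: it too must win the claim bit first. */
static void *reclaim_path(void *arg)
{
	struct fake_page *p = arg;

	if (claim_page(p))
		return NULL;	/* the free path owns the page */

	p->payload = -1;	/* "reclaim" the page */
	return NULL;
}

int main(void)
{
	struct fake_page page = { .claimed = ATOMIC_FLAG_INIT, .payload = 42 };
	pthread_t a, b;

	pthread_create(&a, NULL, free_path, &page);
	pthread_create(&b, NULL, reclaim_path, &page);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	printf("payload: %d\n", page.payload);	/* 0 or -1, never both paths */
	return 0;
}

Whichever path wins the test-and-set owns the page and the loser
backs off immediately, which is what the hunks below enforce.
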
Reported-by: Markus Linnala <markus.linnala@...il.com>
Signed-off-by: Vitaly Wool <vitalywool@...il.com>
Cc: <stable@...r.kernel.org>
---
mm/z3fold.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 05bdf90646e7..6d3d3f698ebb 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -998,9 +998,11 @@ static void z3fold_free(struct z3fold_pool *pool, unsigned long handle)
 	struct z3fold_header *zhdr;
 	struct page *page;
 	enum buddy bud;
+	bool page_claimed;
 
 	zhdr = handle_to_z3fold_header(handle);
 	page = virt_to_page(zhdr);
+	page_claimed = test_and_set_bit(PAGE_CLAIMED, &page->private);
 
 	if (test_bit(PAGE_HEADLESS, &page->private)) {
 		/* if a headless page is under reclaim, just leave.
@@ -1008,7 +1010,7 @@ static void z3fold_free(struct z3fold_pool *pool, unsigned long handle)
 		 * has not been set before, we release this page
 		 * immediately so we don't care about its value any
 		 * more. */
-		if (!test_and_set_bit(PAGE_CLAIMED, &page->private)) {
+		if (!page_claimed) {
 			spin_lock(&pool->lock);
 			list_del(&page->lru);
 			spin_unlock(&pool->lock);
@@ -1044,13 +1046,15 @@ static void z3fold_free(struct z3fold_pool *pool, unsigned long handle)
 		atomic64_dec(&pool->pages_nr);
 		return;
 	}
-	if (test_bit(PAGE_CLAIMED, &page->private)) {
+	if (page_claimed) {
+		/* the page has not been claimed by us */
 		z3fold_page_unlock(zhdr);
 		return;
 	}
 	if (unlikely(PageIsolated(page)) ||
 	    test_and_set_bit(NEEDS_COMPACTING, &page->private)) {
 		z3fold_page_unlock(zhdr);
+		clear_bit(PAGE_CLAIMED, &page->private);
 		return;
 	}
 	if (zhdr->cpu < 0 || !cpu_online(zhdr->cpu)) {
@@ -1060,10 +1064,12 @@ static void z3fold_free(struct z3fold_pool *pool, unsigned long handle)
 		zhdr->cpu = -1;
 		kref_get(&zhdr->refcount);
 		do_compact_page(zhdr, true);
+		clear_bit(PAGE_CLAIMED, &page->private);
 		return;
 	}
 	kref_get(&zhdr->refcount);
 	queue_work_on(zhdr->cpu, pool->compact_wq, &zhdr->work);
+	clear_bit(PAGE_CLAIMED, &page->private);
 	z3fold_page_unlock(zhdr);
 }
 
--
2.17.1