Message-ID: <Y1lZ9Rm87GpFRM/Q@casper.infradead.org>
Date: Wed, 26 Oct 2022 17:01:57 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Dave Chinner <david@...morbit.com>
Cc: Zhaoyang Huang <huangzhaoyang@...il.com>,
"zhaoyang.huang" <zhaoyang.huang@...soc.com>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, ke.wang@...soc.com,
steve.kang@...soc.com, baocong.liu@...soc.com,
linux-fsdevel@...r.kernel.org
Subject: Re: [RFC PATCH] mm: move xa forward when run across zombie page

On Thu, Oct 20, 2022 at 10:52:14PM +0100, Matthew Wilcox wrote:
> But I think the tests you've done refute that theory. I'm all out of
> ideas at the moment.

I have a new idea. In page_cache_delete_batch(), we don't set the
order of the entry before calling xas_store(). That means we can end
up in a situation where we have an order-2 folio in the page cache,
delete it and end up with a NULL pointer at (say) index 20 and sibling
entries at indices 21-23. We can come along (potentially much later)
and put an order-0 folio back at index 20. Now all of indices 20-23
point to the index-20, order-0 folio. Worse, the xarray node can be
freed with the sibling entries still intact and then be reallocated by
an entirely different xarray.
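
To make that sequence concrete, here is a rough reproducer sketch in
the style of lib/test_xarray.c (assumes CONFIG_XARRAY_MULTI; the
function name, the value entries and the final check are mine, for
illustration only, and not part of the patch):

	static void check_zombie_siblings(struct xarray *xa)
	{
		XA_STATE(xas, xa, 20);

		/* Store an order-2 entry covering indices 20-23. */
		xas_set_order(&xas, 20, 2);
		do {
			xas_lock(&xas);
			xas_store(&xas, xa_mk_value(42));
			xas_unlock(&xas);
		} while (xas_nomem(&xas, GFP_KERNEL));

		/*
		 * Delete it the way page_cache_delete_batch() does: an
		 * order-0 store of NULL.  Only the slot at index 20 is
		 * cleared; the sibling entries at 21-23 survive.
		 */
		xas_set_order(&xas, 20, 0);
		xas_lock(&xas);
		xas_store(&xas, NULL);
		xas_unlock(&xas);

		/*
		 * An order-0 store at index 20 is now aliased by the
		 * stale siblings, so a load at 21 returns the index-20
		 * entry and this warns.
		 */
		xa_store(xa, 20, xa_mk_value(99), GFP_KERNEL);
		WARN_ON(xa_load(xa, 21) == xa_mk_value(99));
	}
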
I don't know if this is going to fix the problem you're seeing. I can't
quite draw a line from this situation to your symptoms. I came across
it while auditing all the places which set folio->mapping to NULL.
I did notice a mis-ordering; all the other places first remove the folio
from the xarray before setting folio->mapping to NULL, but I have a
hard time connecting that to your symptoms either.
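
For comparison, here is the ordering in page_cache_delete(), the
single-folio path in mm/filemap.c (condensed by me): it sets the order,
stores into the xarray, and only then clears ->mapping:

	if (!folio_test_hugetlb(folio)) {
		xas_set_order(&xas, folio->index, folio_order(folio));
		nr = folio_nr_pages(folio);
	}
	...
	xas_store(&xas, shadow);
	xas_init_marks(&xas);

	folio->mapping = NULL;
	/* Leave page->index set: truncation lookup relies upon it */
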
diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index 44dd6d6e01bc..cc1fd1f849a7 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -1617,6 +1617,12 @@ static inline void xas_advance(struct xa_state *xas, unsigned long index)
 	xas->xa_offset = (index >> shift) & XA_CHUNK_MASK;
 }
 
+static inline void xas_adjust_order(struct xa_state *xas, unsigned int order)
+{
+	xas->xa_shift = order - (order % XA_CHUNK_SHIFT);
+	xas->xa_sibs = (1 << (order % XA_CHUNK_SHIFT)) - 1;
+}
+
 /**
  * xas_set_order() - Set up XArray operation state for a multislot entry.
  * @xas: XArray operation state.
@@ -1628,8 +1634,7 @@ static inline void xas_set_order(struct xa_state *xas, unsigned long index,
 {
 #ifdef CONFIG_XARRAY_MULTI
 	xas->xa_index = order < BITS_PER_LONG ? (index >> order) << order : 0;
-	xas->xa_shift = order - (order % XA_CHUNK_SHIFT);
-	xas->xa_sibs = (1 << (order % XA_CHUNK_SHIFT)) - 1;
+	xas_adjust_order(xas, order);
 	xas->xa_node = XAS_RESTART;
 #else
 	BUG_ON(order > 0);
diff --git a/mm/filemap.c b/mm/filemap.c
index 08341616ae7a..6e3f486131e4 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -305,11 +305,13 @@ static void page_cache_delete_batch(struct address_space *mapping,
 
 		WARN_ON_ONCE(!folio_test_locked(folio));
 
+		if (!folio_test_hugetlb(folio))
+			xas_adjust_order(&xas, folio_order(folio));
+		xas_store(&xas, NULL);
 		folio->mapping = NULL;
 		/* Leave folio->index set: truncation lookup relies on it */
 
 		i++;
-		xas_store(&xas, NULL);
 		total_pages += folio_nr_pages(folio);
 	}
 	mapping->nrpages -= total_pages;
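
To spell out the arithmetic in xas_adjust_order(), taking the usual
XA_CHUNK_SHIFT of 6: for an order-2 folio, xa_shift = 2 - (2 % 6) = 0
and xa_sibs = (1 << 2) - 1 = 3, so the NULL store in the new code
clears the head slot plus the three sibling slots rather than the head
alone.  For an order-7 folio, xa_shift = 7 - (7 % 6) = 6 and
xa_sibs = (1 << 1) - 1 = 1: the store happens one level up and covers
the head plus one sibling, i.e. 2 * 64 = 128 = 1 << 7 indices.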