Message-ID: <20220628221757.GJ227878@dread.disaster.area>
Date: Wed, 29 Jun 2022 08:17:57 +1000
From: Dave Chinner <david@...morbit.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: "Darrick J. Wong" <djwong@...nel.org>, linux-xfs@...r.kernel.org,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
Christoph Hellwig <hch@....de>, linux-mm@...ck.org
Subject: Re: Multi-page folio issues in 5.19-rc4 (was [PATCH v3 25/25] xfs:
Support large folios)
On Tue, Jun 28, 2022 at 02:18:24PM +0100, Matthew Wilcox wrote:
> On Tue, Jun 28, 2022 at 12:31:55PM +0100, Matthew Wilcox wrote:
> > On Tue, Jun 28, 2022 at 12:27:40PM +0100, Matthew Wilcox wrote:
> > > On Tue, Jun 28, 2022 at 05:31:20PM +1000, Dave Chinner wrote:
> > > > So using this technique, I've discovered that there's a dirty page
> > > > accounting leak that eventually results in fsx hanging in
> > > > balance_dirty_pages().
> > >
> > > Alas, I think this is only an accounting error, and not related to
> > > the problem(s) that Darrick & Zorro are seeing. I think what you're
> > > seeing is dirty pages being dropped at truncation without the
> > > appropriate accounting. ie this should be the fix:
> >
> > Argh, try one that actually compiles.
>
> ... that one's going to underflow the accounting. Maybe I shouldn't
> be writing code at 6am?
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index f7248002dad9..4eec6ee83e44 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -18,6 +18,7 @@
>  #include <linux/shrinker.h>
>  #include <linux/mm_inline.h>
>  #include <linux/swapops.h>
> +#include <linux/backing-dev.h>
>  #include <linux/dax.h>
>  #include <linux/khugepaged.h>
>  #include <linux/freezer.h>
> @@ -2439,11 +2440,15 @@ static void __split_huge_page(struct page *page, struct list_head *list,
>  		__split_huge_page_tail(head, i, lruvec, list);
>  		/* Some pages can be beyond EOF: drop them from page cache */
>  		if (head[i].index >= end) {
> -			ClearPageDirty(head + i);
> -			__delete_from_page_cache(head + i, NULL);
> +			struct folio *tail = page_folio(head + i);
> +
>  			if (shmem_mapping(head->mapping))
>  				shmem_uncharge(head->mapping->host, 1);
> -			put_page(head + i);
> +			else if (folio_test_clear_dirty(tail))
> +				folio_account_cleaned(tail,
> +					inode_to_wb(folio->mapping->host));
> +			__filemap_remove_folio(tail, NULL);
> +			folio_put(tail);
>  		} else if (!PageAnon(page)) {
>  			__xa_store(&head->mapping->i_pages, head[i].index,
>  					head + i, 0);
>
Yup, that fixes the leak.
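
For anyone who wants to check for this class of leak without kernel
instrumentation, sampling nr_dirty from /proc/vmstat around the workload
and seeing whether it settles back near its baseline after a sync is
usually enough to spot it. A minimal userspace sketch along those lines
(illustrative only, not the instrumentation referred to earlier in the
thread):

/*
 * Sample /proc/vmstat's nr_dirty before and after a test run.  If the
 * counter does not return close to its baseline once writeback has
 * settled, dirty page accounting has probably leaked somewhere.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Read a single counter (e.g. "nr_dirty") out of /proc/vmstat. */
static long vmstat_read(const char *name)
{
	char line[256];
	long val = -1;
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f)) {
		char key[128];
		long v;

		if (sscanf(line, "%127s %ld", key, &v) == 2 &&
		    !strcmp(key, name)) {
			val = v;
			break;
		}
	}
	fclose(f);
	return val;
}

int main(int argc, char **argv)
{
	long before, after;

	before = vmstat_read("nr_dirty");

	/* Run the workload (e.g. an fsx invocation) given on the command line. */
	if (argc > 1 && system(argv[1]) != 0)
		fprintf(stderr, "workload exited non-zero\n");

	sync();		/* flush everything that is legitimately dirty */
	sleep(5);	/* give the per-cpu vmstat counters time to fold */
	after = vmstat_read("nr_dirty");

	printf("nr_dirty before %ld after %ld (delta %ld)\n",
	       before, after, after - before);
	return 0;
}

On a kernel with the leak the delta would be expected to stay positive
even after writeback has finished, which is what eventually wedges the
test in balance_dirty_pages().
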
Tested-by: Dave Chinner <dchinner@...hat.com>
Cheers,
Dave.
--
Dave Chinner
david@...morbit.com