Date:   Tue, 28 Jun 2022 13:57:09 -0700
From:   "Darrick J. Wong" <djwong@...nel.org>
To:     Matthew Wilcox <willy@...radead.org>
Cc:     Dave Chinner <david@...morbit.com>, linux-xfs@...r.kernel.org,
        linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
        Christoph Hellwig <hch@....de>, linux-mm@...ck.org
Subject: Re: Multi-page folio issues in 5.19-rc4 (was [PATCH v3 25/25] xfs:
 Support large folios)

On Tue, Jun 28, 2022 at 02:18:24PM +0100, Matthew Wilcox wrote:
> On Tue, Jun 28, 2022 at 12:31:55PM +0100, Matthew Wilcox wrote:
> > On Tue, Jun 28, 2022 at 12:27:40PM +0100, Matthew Wilcox wrote:
> > > On Tue, Jun 28, 2022 at 05:31:20PM +1000, Dave Chinner wrote:
> > > > So using this technique, I've discovered that there's a dirty page
> > > > accounting leak that eventually results in fsx hanging in
> > > > balance_dirty_pages().
> > > 
> > > Alas, I think this is only an accounting error, and not related to
> > > the problem(s) that Darrick & Zorro are seeing.  I think what you're
> > > seeing is dirty pages being dropped at truncation without the
> > > appropriate accounting.  ie this should be the fix:
> > 
> > Argh, try one that actually compiles.
> 
> ... that one's going to underflow the accounting.  Maybe I shouldn't
> be writing code at 6am?

I dunno; it's been running on my test VMs for 160 minutes (same debug
setup as yesterday) and 100 minutes (regular g/522, no -C/-I flags to
fsx, no debugging junk), and neither has reported corruption.

$ grep Dirty /proc/meminfo
Dirty:               100 kB

So, pretty good for writing code at 6am while on holiday.  I'll let this
run overnight, but I think you've fixed the problem, at least for me...
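
If you want to keep an eye on the accounting while a long soak runs,
polling the counter is enough.  A trivial sketch (the interval is
arbitrary, pick whatever you like):

$ while sleep 30; do date +%T; grep Dirty /proc/meminfo; done

If Dirty stays pinned high after everything has synced out, that's the
leak Dave was describing.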

--D

> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index f7248002dad9..4eec6ee83e44 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -18,6 +18,7 @@
>  #include <linux/shrinker.h>
>  #include <linux/mm_inline.h>
>  #include <linux/swapops.h>
> +#include <linux/backing-dev.h>
>  #include <linux/dax.h>
>  #include <linux/khugepaged.h>
>  #include <linux/freezer.h>
> @@ -2439,11 +2440,15 @@ static void __split_huge_page(struct page *page, struct list_head *list,
>  		__split_huge_page_tail(head, i, lruvec, list);
>  		/* Some pages can be beyond EOF: drop them from page cache */
>  		if (head[i].index >= end) {
> -			ClearPageDirty(head + i);
> -			__delete_from_page_cache(head + i, NULL);
> +			struct folio *tail = page_folio(head + i);
> +
>  			if (shmem_mapping(head->mapping))
>  				shmem_uncharge(head->mapping->host, 1);
> -			put_page(head + i);
> +			else if (folio_test_clear_dirty(tail))
> +				folio_account_cleaned(tail,
> +					inode_to_wb(folio->mapping->host));
> +			__filemap_remove_folio(tail, NULL);
> +			folio_put(tail);
>  		} else if (!PageAnon(page)) {
>  			__xa_store(&head->mapping->i_pages, head[i].index,
>  					head + i, 0);
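
FWIW, reading the hunk back: if I'm following it, folio on the
inode_to_wb() line is the head folio declared at the top of
__split_huge_page(), so folio->mapping and tail->mapping point at the
same mapping here.  A commented sketch of the new path (illustration
only, not the literal patch):

	/* Tail pages past EOF must not survive the split in the page cache. */
	if (head[i].index >= end) {
		struct folio *tail = page_folio(head + i);

		if (shmem_mapping(head->mapping)) {
			/* shmem charges per page; hand this one back. */
			shmem_uncharge(head->mapping->host, 1);
		} else if (folio_test_clear_dirty(tail)) {
			/*
			 * The old ClearPageDirty() dropped the dirty bit
			 * without touching NR_FILE_DIRTY or the per-wb
			 * counters, so every dirty tail discarded here
			 * leaked accounting and eventually wedged fsx in
			 * balance_dirty_pages().
			 */
			folio_account_cleaned(tail,
					inode_to_wb(folio->mapping->host));
		}
		/* The i_pages lock is already held, hence the __ variant. */
		__filemap_remove_folio(tail, NULL);
		folio_put(tail);
	}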
