Message-ID: <YqO08Dsq8ZcAcWDQ@casper.infradead.org>
Date: Fri, 10 Jun 2022 22:17:36 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Sumanth Korikkar <sumanthk@...ux.ibm.com>
Cc: linux-ext4@...r.kernel.org, gerald.schaefer@...ux.ibm.com,
gor@...ux.ibm.com, agordeev@...ux.ibm.com,
linux-f2fs-devel@...ts.sourceforge.net,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, linux-nilfs@...r.kernel.org
Subject: Re: [PATCH 06/10] hugetlbfs: Convert remove_inode_hugepages() to use
filemap_get_folios()
On Fri, Jun 10, 2022 at 05:52:05PM +0200, Sumanth Korikkar wrote:
> To reproduce:
> * clone libhugetlbfs:
> * Execute, PATH=$PATH:"obj64/" LD_LIBRARY_PATH=../obj64/ alloc-instantiate-race shared
... it's a lot harder to set up hugetlb than that ...
Anyway, I figured it out without being able to run the reproducer.
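For context, a worked illustration of the batch bookkeeping (the numbers are
only an assumption for illustration: 2MB hugetlb folios, i.e. 512 base pages).
hugetlbfs indexes its page cache in huge-page-sized units, so consecutive
hugetlb folios sit at consecutive indices:

    hugetlb folio at index 3 fills the batch
    old:  *start = 3 + folio_nr_pages() = 3 + 512 = 515
          the next call skips everything at indices 4..514
    new:  *start = 3 + 1 = 4
          the next call resumes at the very next folio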
Can you try this?
diff --git a/mm/filemap.c b/mm/filemap.c
index a30587f2e598..8ef861297ffb 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2160,7 +2160,11 @@ unsigned filemap_get_folios(struct address_space *mapping, pgoff_t *start,
 		if (xa_is_value(folio))
 			continue;
 		if (!folio_batch_add(fbatch, folio)) {
-			*start = folio->index + folio_nr_pages(folio);
+			unsigned long nr = folio_nr_pages(folio);
+
+			if (folio_test_hugetlb(folio))
+				nr = 1;
+			*start = folio->index + nr;
 			goto out;
 		}
 	}
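For reference, a minimal sketch of the caller pattern this affects. This is an
assumed example, not the actual remove_inode_hugepages() code; it just shows
why *start has to land exactly one index past the last folio returned when the
batch fills:

/* Assumed example of a filemap_get_folios() caller: walk every folio
 * of a mapping in batches.  If *start overshoots when the batch is
 * full, the folios in the gap are never visited by the next call.
 */
#include <linux/pagemap.h>
#include <linux/pagevec.h>
#include <linux/sched.h>

static void walk_mapping(struct address_space *mapping)
{
	struct folio_batch fbatch;
	pgoff_t index = 0;	/* advanced by filemap_get_folios() */
	unsigned int i;

	folio_batch_init(&fbatch);
	while (filemap_get_folios(mapping, &index, (pgoff_t)-1, &fbatch)) {
		for (i = 0; i < folio_batch_count(&fbatch); i++) {
			struct folio *folio = fbatch.folios[i];

			folio_lock(folio);
			/* per-folio work goes here */
			folio_unlock(folio);
		}
		folio_batch_release(&fbatch);
		cond_resched();
	}
}

With the hunk above, a hugetlb folio advances *start by 1, matching the
huge-page-sized page cache indexing, so a loop like this resumes at the next
folio instead of skipping past it.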