Message-ID: <Ym++SI1ftbRg+9zK@casper.infradead.org>
Date: Mon, 2 May 2022 12:19:36 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Stephen Rothwell <sfr@...b.auug.org.au>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linux Next Mailing List <linux-next@...r.kernel.org>
Subject: Re: linux-next: build failure after merge of the mm tree
On Mon, May 02, 2022 at 08:49:03PM +1000, Stephen Rothwell wrote:
> Hi all,
>
> After merging the mm tree, today's linux-next build (arm
> multi_v7_defconfig) failed like this:
[... I wish our BUILD_BUGs produced nicer output from the compiler ...]
> Reverting the following commits makes the problem go away:
>
> 2b58b3f33ba2 ("mm/shmem: convert shmem_swapin_page() to shmem_swapin_folio()")
> 94cdf3e8c0bf ("mm/shmem: convert shmem_getpage_gfp to use a folio")
> 3674fd6cadf5 ("mm/shmem: convert shmem_alloc_and_acct_page to use a folio")
> b0bb08b2d5f3 ("mm/shmem: turn shmem_alloc_page() into shmem_alloc_folio()")
> 8d657a77c6fe ("mm/shmem: turn shmem_should_replace_page into shmem_should_replace_folio")
> 9a44f3462edc ("mm/shmem: convert shmem_add_to_page_cache to take a folio")
> 561fd8bee1dc ("mm/swap: add folio_throttle_swaprate")
> cb4e56ee240d ("mm/shmem: use a folio in shmem_unused_huge_shrink")
> 22bf1b68e572 ("vmscan: remove remaining uses of page in shrink_page_list")
> 7d15d41b7c4a ("vmscan: convert the activate_locked portion of shrink_page_list to folios")
> 8a6aff9c51c7 ("vmscan: move initialisation of mapping down")
> b79338b3d217 ("vmscan: convert lazy freeing to folios")
> 719426e40146 ("vmscan: convert page buffer handling to use folios")
> 339ba7502e13 ("vmscan: convert dirty page handling to folios")
Oops.  allnoconfig on x86 reproduces the problem.  The diff below fixes
it; I'm happy to go back and produce a new set of patches for Andrew so
the series stays bisectable.
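For anyone wondering about the mechanics: BUILD_BUG() in a !THP stub
like can_split_folio()'s only works if the optimizer can prove every
caller is dead code.  folio_test_large() reads the page flags at run
time, so the compiler can't fold it away when THP is disabled and the
guarded call survives; folio_test_pmd_mappable()'s !THP stub is a
constant false, so it can.  Here's a standalone userspace sketch of the
idea (made-up names, simplified BUILD_BUG(); build with -O2, since like
the kernel's version it relies on the optimizer):

/*
 * Userspace stand-in for the kernel's BUILD_BUG(): any call that
 * survives dead-code elimination leaves a reference to an undefined
 * symbol and the link fails; calls that are eliminated cost nothing.
 */
extern void __sketch_build_bug(void);	/* deliberately never defined */
#define BUILD_BUG()	__sketch_build_bug()

#define CONFIG_THP	0		/* think allnoconfig */

static inline int can_split(void)
{
#if CONFIG_THP
	return 1;
#else
	BUILD_BUG();	/* stub: assumed unreachable when THP is off */
	return 0;
#endif
}

/* folio_test_large() analogue: depends on runtime data, not foldable */
static inline int test_large(int order)
{
	return order > 0;
}

/*
 * folio_test_pmd_mappable() analogue: folds to constant 0 when THP is
 * off, so anything it guards is provably dead and gets eliminated.
 */
static inline int test_pmd_mappable(int order)
{
	return CONFIG_THP && order >= 9;
}

int main(int argc, char **argv)
{
	int order = argc - 1;		/* unknowable at compile time */

	(void)argv;
	if (test_large(order))		/* link fails: call survives */
		return can_split();
	if (test_pmd_mappable(order))	/* fine: branch folded away */
		return can_split();
	return 0;
}

Delete the first "if" and it links cleanly, which is the shape of the
fix below: either make the guard a compile-time constant or take the
BUILD_BUG() out of the stub.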
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 2999190adc22..e9e0d591061d 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -347,7 +347,6 @@ static inline void prep_transhuge_page(struct page *page) {}
 static inline bool
 can_split_folio(struct folio *folio, int *pextra_pins)
 {
-	BUILD_BUG();
 	return false;
 }
 static inline int
diff --git a/mm/shmem.c b/mm/shmem.c
index 673a0e783496..d62936ffe74d 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -738,7 +738,7 @@ static int shmem_add_to_page_cache(struct folio *folio,
 		xas_store(&xas, folio);
 		if (xas_error(&xas))
 			goto unlock;
-		if (folio_test_large(folio)) {
+		if (folio_test_pmd_mappable(folio)) {
 			count_vm_event(THP_FILE_ALLOC);
 			__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, nr);
 		}
@@ -1887,10 +1887,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 		goto unlock;
 	}
 
-	if (folio_test_large(folio))
-		hindex = round_down(index, HPAGE_PMD_NR);
-	else
-		hindex = index;
+	hindex = round_down(index, folio_nr_pages(folio));
 
 	if (sgp == SGP_WRITE)
 		__folio_set_referenced(folio);
@@ -1909,7 +1906,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	spin_unlock_irq(&info->lock);
 	alloced = true;
 
-	if (folio_test_large(folio) &&
+	if (folio_test_pmd_mappable(folio) &&
 	    DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE) <
 			hindex + HPAGE_PMD_NR - 1) {
 		/*
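The hindex change deserves a note: besides reading better, it takes the
HPAGE_PMD_NR reference (which expands to a hidden BUILD_BUG() in !THP
builds) out of a branch the compiler couldn't prove dead.  It relies on
round_down(index, folio_nr_pages(folio)) degenerating to index for an
order-0 folio and to the old round_down(index, HPAGE_PMD_NR) for a
PMD-sized one, plus large folios being naturally aligned in the file.
Quick userspace check with made-up numbers (simplified round_down(),
power-of-two sizes assumed):

#include <assert.h>
#include <stdio.h>

/* simplified round_down(); folio sizes are always powers of two */
#define round_down(x, n)	((x) & ~((n) - 1UL))

#define HPAGE_PMD_NR	512UL		/* 2MB / 4KB, e.g. x86-64 */

int main(void)
{
	unsigned long index = 1234;	/* made-up page offset in the file */

	/* order-0 folio: folio_nr_pages() == 1, rounding is a no-op */
	assert(round_down(index, 1UL) == index);

	/* PMD-sized folio: folio_nr_pages() == HPAGE_PMD_NR */
	printf("index %lu -> hindex %lu\n",
	       index, round_down(index, HPAGE_PMD_NR));	/* 1234 -> 1024 */
	return 0;
}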