Message-Id: <20230130130739.563628-1-arnd@kernel.org>
Date: Mon, 30 Jan 2023 14:07:26 +0100
From: Arnd Bergmann <arnd@...nel.org>
To: Andrew Morton <akpm@...ux-foundation.org>,
Alexander Duyck <alexander.h.duyck@...ux.intel.com>,
Michal Hocko <mhocko@...e.com>,
Pavel Tatashin <pavel.tatashin@...rosoft.com>,
Alexander Potapenko <glider@...gle.com>
Cc: Arnd Bergmann <arnd@...db.de>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
David Hildenbrand <david@...hat.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
John Hubbard <jhubbard@...dia.com>,
Naoya Horiguchi <naoya.horiguchi@....com>,
Hugh Dickins <hughd@...gle.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Alex Sierra <alex.sierra@....com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: [PATCH] mm: extend max struct page size for kmsan
From: Arnd Bergmann <arnd@...db.de>
After x86 enabled support for KMSAN, it has become possible for
'struct page' to be larger than was expected when commit
5470dea49f53 ("mm: use mm_zero_struct_page from SPARC on all 64b
architectures") was merged:
include/linux/mm.h:156:10: warning: no case matching constant switch condition '96'
switch (sizeof(struct page)) {
Extend the maximum accordingly.
Fixes: 5470dea49f53 ("mm: use mm_zero_struct_page from SPARC on all 64b architectures")
Fixes: 4ca8cc8d1bbe ("x86: kmsan: enable KMSAN builds for x86")
Signed-off-by: Arnd Bergmann <arnd@...db.de>
---
This seems to show up extremely rarely in randconfig builds, but
often enough that my build machine hit it.
I saw a related discussion at [1] about raising MAX_STRUCT_PAGE_SIZE,
but as I understand it, that needs to be addressed separately.
[1] https://lore.kernel.org/lkml/20220701142310.2188015-11-glider@google.com/
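For context, my understanding of why struct page grows here: with
CONFIG_KMSAN=y, the KMSAN runtime core adds two metadata pointers,
kmsan_shadow and kmsan_origin, to struct page, which on 64-bit takes
the affected configs from 80 to 96 bytes. A stand-alone sketch of the
size math (struct page_sketch and the 'existing' member are made up
for illustration; only the kmsan_* names match the real fields in
mm_types.h, as I read them):

  #include <stdio.h>

  /*
   * Stand-in for struct page, only to illustrate the size math;
   * the real layout lives in include/linux/mm_types.h.
   */
  struct page_sketch {
  	unsigned long existing[10];		/* the previous 80 bytes */
  #ifdef CONFIG_KMSAN
  	struct page_sketch *kmsan_shadow;	/* "is this bit initialized?" metadata */
  	struct page_sketch *kmsan_origin;	/* stack trace ids for uninit data */
  #endif
  };

  int main(void)
  {
  	/* Prints 80 without -DCONFIG_KMSAN, 96 with it, on 64-bit. */
  	printf("sizeof(struct page_sketch) = %zu\n",
  	       sizeof(struct page_sketch));
  	return 0;
  }

Compiling this once plainly and once with -DCONFIG_KMSAN shows the
80 -> 96 jump that trips the switch() below.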
---
include/linux/mm.h | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index b73ba2e5cfd2..aa39d5ddace1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -137,7 +137,7 @@ extern int mmap_rnd_compat_bits __read_mostly;
* define their own version of this macro in <asm/pgtable.h>
*/
#if BITS_PER_LONG == 64
-/* This function must be updated when the size of struct page grows above 80
+/* This function must be updated when the size of struct page grows above 96
* or reduces below 56. The idea that compiler optimizes out switch()
* statement, and only leaves move/store instructions. Also the compiler can
* combine write statements if they are both assignments and can be reordered,
@@ -148,12 +148,18 @@ static inline void __mm_zero_struct_page(struct page *page)
{
unsigned long *_pp = (void *)page;
- /* Check that struct page is either 56, 64, 72, or 80 bytes */
+ /* Check that struct page is either 56, 64, 72, 80, 88 or 96 bytes */
BUILD_BUG_ON(sizeof(struct page) & 7);
BUILD_BUG_ON(sizeof(struct page) < 56);
- BUILD_BUG_ON(sizeof(struct page) > 80);
+ BUILD_BUG_ON(sizeof(struct page) > 96);
switch (sizeof(struct page)) {
+ case 96:
+ _pp[11] = 0;
+ fallthrough;
+ case 88:
+ _pp[10] = 0;
+ fallthrough;
case 80:
_pp[9] = 0;
fallthrough;
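To spell out the trick the comment above describes: sizeof() is a
compile-time constant, so the compiler resolves the switch at build
time and emits only the stores, falling through from the matching
size down to the common 56-byte tail. A stand-alone sketch (struct
demo and zero_demo are illustrative names, not kernel code; the
kernel's fallthrough macro is replaced by plain comments):

  /* 96 bytes on 64-bit, the new maximum handled by this patch. */
  struct demo { unsigned long w[12]; };

  static inline void zero_demo(struct demo *d)
  {
  	unsigned long *_pp = (void *)d;

  	switch (sizeof(struct demo)) {	/* resolved at compile time */
  	case 96:
  		_pp[11] = 0;
  		/* fall through */
  	case 88:
  		_pp[10] = 0;
  		/* fall through */
  	case 80:
  		_pp[9] = 0;
  		/* fall through */
  	case 72:
  		_pp[8] = 0;
  		/* fall through */
  	case 64:
  		_pp[7] = 0;
  		/* fall through */
  	case 56:
  		_pp[6] = 0;
  		_pp[5] = 0;
  		_pp[4] = 0;
  		_pp[3] = 0;
  		_pp[2] = 0;
  		_pp[1] = 0;
  		_pp[0] = 0;
  	}
  }

With optimizations on, this becomes straight-line stores (possibly
combined into wider ones) with no branches; if sizeof() matches no
case, nothing is zeroed at all, which is what clang's "no case
matching constant switch condition" warning quoted above flags.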
--
2.39.0