Message-Id: <20220913140355.826614561@linuxfoundation.org>
Date: Tue, 13 Sep 2022 16:06:22 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Alexander Gordeev <agordeev@...ux.ibm.com>,
Gerald Schaefer <gerald.schaefer@...ux.ibm.com>,
Vasily Gorbik <gor@...ux.ibm.com>
Subject: [PATCH 5.4 051/108] s390/hugetlb: fix prepare_hugepage_range() check for 2 GB hugepages

From: Gerald Schaefer <gerald.schaefer@...ux.ibm.com>

commit 7c8d42fdf1a84b1a0dd60d6528309c8ec127e87c upstream.

The alignment check in prepare_hugepage_range() is wrong for 2 GB
hugepages; it only checks for 1 MB hugepage alignment.

This can result in a kernel crash in __unmap_hugepage_range() at the
BUG_ON(start & ~huge_page_mask(h)) alignment check, for mappings
created with MAP_FIXED at an unaligned address.
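
As a rough illustration of how such a mapping could be requested from
user space (this reproducer is not part of the original report; the
fixed address is made up, and it assumes a 2 GB hugepage has been
reserved, e.g. with hugepagesz=2G hugepages=1 on the kernel command
line):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#ifndef MAP_HUGE_SHIFT
#define MAP_HUGE_SHIFT	26
#endif
#ifndef MAP_HUGE_2GB
#define MAP_HUGE_2GB	(31 << MAP_HUGE_SHIFT)	/* log2(2 GB) = 31 */
#endif

int main(void)
{
	size_t len = 2UL * 1024 * 1024 * 1024;	/* one 2 GB hugepage */
	/* 1 MB aligned, but not 2 GB aligned (made-up address) */
	void *hint = (void *)0x80100000UL;
	void *p;

	p = mmap(hint, len, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED |
		 MAP_HUGETLB | MAP_HUGE_2GB, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");		/* with the fix: EINVAL */
		return 1;
	}
	/*
	 * Without the fix, the unaligned mapping may be created, and
	 * tearing it down can hit the BUG_ON() in
	 * __unmap_hugepage_range().
	 */
	munmap(p, len);
	return 0;
}
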
Fix this by correctly handling multiple hugepage sizes, similar to the
generic version of prepare_hugepage_range().
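
For comparison, the generic prepare_hugepage_range() in
include/asm-generic/hugetlb.h looks roughly like this (paraphrased
sketch, not quoted verbatim from the tree):

static inline int prepare_hugepage_range(struct file *file,
		unsigned long addr, unsigned long len)
{
	struct hstate *h = hstate_file(file);

	if (len & ~huge_page_mask(h))
		return -EINVAL;
	if (addr & ~huge_page_mask(h))
		return -EINVAL;

	return 0;
}
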
Fixes: d08de8e2d867 ("s390/mm: add support for 2GB hugepages")
Cc: <stable@...r.kernel.org> # 4.8+
Acked-by: Alexander Gordeev <agordeev@...ux.ibm.com>
Signed-off-by: Gerald Schaefer <gerald.schaefer@...ux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@...ux.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
arch/s390/include/asm/hugetlb.h | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)

--- a/arch/s390/include/asm/hugetlb.h
+++ b/arch/s390/include/asm/hugetlb.h
@@ -35,9 +35,11 @@ static inline bool is_hugepage_only_rang
 static inline int prepare_hugepage_range(struct file *file,
 			unsigned long addr, unsigned long len)
 {
-	if (len & ~HPAGE_MASK)
+	struct hstate *h = hstate_file(file);
+
+	if (len & ~huge_page_mask(h))
 		return -EINVAL;
-	if (addr & ~HPAGE_MASK)
+	if (addr & ~huge_page_mask(h))
 		return -EINVAL;
 	return 0;
 }
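
On s390, HPAGE_MASK describes only the 1 MB (segment-size) hugepage,
so the old checks accepted any 1 MB-aligned address and length even
for 2 GB (region-size) hugepage mappings. huge_page_mask(hstate_file(file))
derives the mask from the hugepage size the mapping actually uses,
matching the behaviour of the generic helper.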