Message-Id: <20220906132834.852956422@linuxfoundation.org>
Date: Tue, 6 Sep 2022 15:31:16 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Alexander Gordeev <agordeev@...ux.ibm.com>,
Gerald Schaefer <gerald.schaefer@...ux.ibm.com>,
Vasily Gorbik <gor@...ux.ibm.com>
Subject: [PATCH 5.19 128/155] s390/hugetlb: fix prepare_hugepage_range() check for 2 GB hugepages

From: Gerald Schaefer <gerald.schaefer@...ux.ibm.com>

commit 7c8d42fdf1a84b1a0dd60d6528309c8ec127e87c upstream.

The alignment check in prepare_hugepage_range() is wrong for 2 GB
hugepages; it only checks for 1 MB hugepage alignment.

This can result in a kernel crash in __unmap_hugepage_range() at the
BUG_ON(start & ~huge_page_mask(h)) alignment check, for mappings
created with MAP_FIXED at an unaligned address.

Fix this by correctly handling multiple hugepage sizes, similar to the
generic version of prepare_hugepage_range().

Fixes: d08de8e2d867 ("s390/mm: add support for 2GB hugepages")
Cc: <stable@...r.kernel.org> # 4.8+
Acked-by: Alexander Gordeev <agordeev@...ux.ibm.com>
Signed-off-by: Gerald Schaefer <gerald.schaefer@...ux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@...ux.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
arch/s390/include/asm/hugetlb.h | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
--- a/arch/s390/include/asm/hugetlb.h
+++ b/arch/s390/include/asm/hugetlb.h
@@ -28,9 +28,11 @@ pte_t huge_ptep_get_and_clear(struct mm_
static inline int prepare_hugepage_range(struct file *file,
unsigned long addr, unsigned long len)
{
- if (len & ~HPAGE_MASK)
+ struct hstate *h = hstate_file(file);
+
+ if (len & ~huge_page_mask(h))
return -EINVAL;
- if (addr & ~HPAGE_MASK)
+ if (addr & ~huge_page_mask(h))
return -EINVAL;
return 0;
}
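
For reference, a minimal userspace sketch of the scenario described in the
commit message (not part of the patch, and only an assumption about how one
might exercise the check): it maps a 2 GB hugetlb page with MAP_FIXED at an
address that is 1 MB aligned but not 2 GB aligned. It assumes an s390 kernel
with a 2 GB hugepage pool configured (e.g. booting with
"hugepagesz=2G hugepages=1").

/*
 * Illustrative sketch only, not part of the patch: map a 2 GB hugetlb
 * page with MAP_FIXED at an address that is 1 MB aligned but not
 * 2 GB aligned.
 */
#define _GNU_SOURCE
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MAP_HUGETLB
#define MAP_HUGETLB	0x40000
#endif
#ifndef MAP_HUGE_SHIFT
#define MAP_HUGE_SHIFT	26
#endif
#ifndef MAP_HUGE_2GB
#define MAP_HUGE_2GB	(31U << MAP_HUGE_SHIFT)	/* log2(2G) encoded in the mmap flags */
#endif

#define SZ_1M	(1UL << 20)
#define SZ_2G	(1UL << 31)

int main(void)
{
	/* 1 MB aligned, deliberately not 2 GB aligned. */
	void *addr = (void *)(4 * SZ_2G + SZ_1M);
	void *p;

	p = mmap(addr, SZ_2G, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED |
		 MAP_HUGETLB | MAP_HUGE_2GB, -1, 0);
	if (p == MAP_FAILED) {
		/* Expected with this fix: EINVAL from prepare_hugepage_range(). */
		printf("mmap: %s\n", strerror(errno));
		return 0;
	}

	/*
	 * On an unfixed kernel, the 1 MB-aligned address passes the old
	 * HPAGE_MASK check, and tearing down the misaligned mapping can
	 * hit the BUG_ON(start & ~huge_page_mask(h)) in
	 * __unmap_hugepage_range().
	 */
	munmap(p, SZ_2G);
	return 0;
}

With the fix applied, the mmap() call is expected to fail with -EINVAL from
prepare_hugepage_range(), since huge_page_mask(h) for the 2 GB hstate rejects
the 1 MB-aligned address that the old HPAGE_MASK check would have accepted.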