Date: Thu, 9 Sep 2021 11:20:07 +0800
From: Liu Yuntao <liuyuntao10@...wei.com>
To: <kirill@...temov.name>
CC: <akpm@...ux-foundation.org>, <hughd@...gle.com>,
	<kirill.shutemov@...ux.intel.com>, <linux-kernel@...r.kernel.org>,
	<linux-mm@...ck.org>, <liusirui@...wei.com>, <liuyuntao10@...wei.com>,
	<windspectator@...il.com>, <wuxu.wu@...wei.com>
Subject: [PATCH v2] fix judgment error in shmem_is_huge()

In the case of SHMEM_HUGE_WITHIN_SIZE, the page index is not rounded
up correctly. When the page index points to the first page of a huge
page, round_up() does not bring it to the end of that huge page, but
to the end of the previous one.

An example: HPAGE_PMD_NR on my machine is 512 (2 MB huge page size).
After allocating a 3000 KB buffer, I access it at location 2050 KB.
In shmem_is_huge(), the corresponding index happens to be 512. After
being rounded up by HPAGE_PMD_NR, it will still be 512, which is
smaller than i_size, and shmem_is_huge() will return true. As a
result, my buffer takes an additional huge page, and that shouldn't
happen when shmem_enabled is set to within_size.

Fixes: f3f0e1d2150b2b ("khugepaged: add support of collapse for tmpfs/shmem pages")
Signed-off-by: Liu Yuntao <liuyuntao10@...wei.com>
---
V1->V2:
add simplification of the condition after round_up()
---
 mm/shmem.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 88742953532c..b5860f4a2738 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -490,9 +490,9 @@ bool shmem_is_huge(struct vm_area_struct *vma,
 	case SHMEM_HUGE_ALWAYS:
 		return true;
 	case SHMEM_HUGE_WITHIN_SIZE:
-		index = round_up(index, HPAGE_PMD_NR);
+		index = round_up(index + 1, HPAGE_PMD_NR);
 		i_size = round_up(i_size_read(inode), PAGE_SIZE);
-		if (i_size >= HPAGE_PMD_SIZE && (i_size >> PAGE_SHIFT) >= index)
+		if (i_size >> PAGE_SHIFT >= index)
 			return true;
 		fallthrough;
 	case SHMEM_HUGE_ADVISE:
-- 
2.23.0
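
For readers outside the kernel tree, a minimal userspace sketch of the
off-by-one described above. It reimplements round_up() standalone (the
kernel provides it as a macro) and hardwires PAGE_SHIFT and HPAGE_PMD_NR
to a hypothetical 4 KB page / 2 MB huge page configuration; the old-check
line here also drops the i_size >= HPAGE_PMD_SIZE guard for brevity.

#include <stdio.h>

#define PAGE_SHIFT   12    /* assumed: 4 KB pages */
#define HPAGE_PMD_NR 512   /* assumed: 2 MB huge page / 4 KB = 512 pages */

/* Standalone stand-in for the kernel's round_up() macro. */
static unsigned long round_up(unsigned long x, unsigned long mult)
{
	return ((x + mult - 1) / mult) * mult;
}

int main(void)
{
	unsigned long i_size = 3000UL * 1024;                   /* 3000 KB file */
	unsigned long index  = 2050UL * 1024 >> PAGE_SHIFT;     /* 2050 KB -> page 512 */

	/* Old check: an index already on a huge-page boundary stays put. */
	unsigned long old_end = round_up(index, HPAGE_PMD_NR);      /* 512 */
	/* Fixed check: round past the end of the huge page holding index. */
	unsigned long new_end = round_up(index + 1, HPAGE_PMD_NR);  /* 1024 */

	printf("pages within i_size: %lu\n", i_size >> PAGE_SHIFT);             /* 750 */
	printf("old check says huge: %d\n", (i_size >> PAGE_SHIFT) >= old_end); /* 1 (wrong) */
	printf("new check says huge: %d\n", (i_size >> PAGE_SHIFT) >= new_end); /* 0 */
	return 0;
}

Compiled with any C compiler, it prints that the old comparison treats
page index 512 of the 3000 KB (750-page) file as eligible for a huge
page, while the fixed comparison correctly rejects it, since the huge
page covering index 512 would span pages 512..1023, past i_size.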