Message-ID: <20240724181916.31776-1-hailong.liu@oppo.com>
Date: Thu, 25 Jul 2024 02:19:12 +0800
From: <hailong.liu@...o.com>
To: Andrew Morton <akpm@...ux-foundation.org>, Uladzislau Rezki
	<urezki@...il.com>, Christoph Hellwig <hch@...radead.org>, Lorenzo Stoakes
	<lstoakes@...il.com>, Vlastimil Babka <vbabka@...e.cz>, Michal Hocko
	<mhocko@...e.com>
CC: Hailong.Liu <hailong.liu@...o.com>, Barry Song <21cnbao@...il.com>,
	"Tangquan . Zheng" <zhengtangquan@...o.com>, <linux-mm@...ck.org>,
	<linux-kernel@...r.kernel.org>
Subject: [RFC PATCH v1] mm/vmalloc: fix incorrect __vmap_pages_range_noflush() when vm_area_alloc_pages() falls back from high order to order 0

From: "Hailong.Liu" <hailong.liu@...o.com>

The scenario where the issue occurs is as follows:
CONFIG: vmap_allow_huge == true and PMD_SIZE == 2M
kvmalloc(2M)
    __vmalloc_node_range(vm_flags=VM_ALLOW_HUGE_VMAP)
        vm_area_alloc_pages(order=9) ---> the order-9 allocation fails and falls back to order-0,
                                          and the returned phys_addr happens to be PMD_SIZE-aligned
            vmap_pages_range
                vmap_pages_range_noflush
                    __vmap_pages_range_noflush(page_shift = 21) ----> incorrectly installs a *huge* (PMD) mapping here
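
To make this concrete, the sketch below (a simplified paraphrase, not
the literal kernel source) shows the huge-mapping loop taken by
__vmap_pages_range_noflush() when page_shift > PAGE_SHIFT.  It assumes
that each group of 1 << (page_shift - PAGE_SHIFT) entries in pages[]
is physically contiguous, which no longer holds once
vm_area_alloc_pages() has fallen back to order-0 pages:

	/*
	 * Simplified sketch of the pre-patch huge-mapping path.  With
	 * page_shift == PMD_SHIFT the loop walks pages[0], pages[512], ...
	 * and maps a full 2M block starting at each page's physical
	 * address, assuming the next 511 entries are physically
	 * contiguous.  After the order-0 fallback they are not, so the
	 * PMD mapping covers physical memory that was never allocated.
	 */
	for (i = 0; i < nr; i += 1U << (page_shift - PAGE_SHIFT)) {
		int err;

		err = vmap_range_noflush(addr, addr + (1UL << page_shift),
					 page_to_phys(pages[i]), prot,
					 page_shift);
		if (err)
			return err;

		addr += 1UL << page_shift;
	}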

Fix it by setting VM_AREA_ALLOC_PAGES_FALLBACK in page->private of the
first page when vm_area_alloc_pages() falls back to order-0 allocations,
and by mapping with small pages in __vmap_pages_range_noflush() when that
flag is set.
Fixes: e9c3cda4d86e ("mm, vmalloc: fix high order __GFP_NOFAIL allocations")

CC: Barry Song <21cnbao@...il.com>
Reported-by: Tangquan.Zheng <zhengtangquan@...o.com>
Signed-off-by: Hailong.Liu <hailong.liu@...o.com>
---
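Illustrative note (a simplified paraphrase of mm/vmalloc.c, not the
exact source): the underlying issue is that __vmalloc_area_node()
chooses page_shift up front and hands the same value to both the
allocator and the mapping code, so a fallback inside
vm_area_alloc_pages() is invisible to vmap_pages_range():

	static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
					 pgprot_t prot, unsigned int page_shift,
					 int node)
	{
		/* page_order is derived from page_shift before any allocation */
		unsigned int page_order = vm_area_page_order(area);
		...
		/* may silently fall back to order-0 pages under memory pressure */
		area->nr_pages = vm_area_alloc_pages(gfp_mask | __GFP_NOWARN, node,
						     page_order, nr_small_pages,
						     area->pages);
		...
		/* still maps with the original page_shift after a fallback */
		ret = vmap_pages_range(addr, addr + size, prot, area->pages,
				       page_shift);
		...
	}
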
 mm/vmalloc.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 03c78fae06f3..b35dfd3eeee3 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -75,6 +75,8 @@ early_param("nohugevmalloc", set_nohugevmalloc);
 static const bool vmap_allow_huge = false;
 #endif	/* CONFIG_HAVE_ARCH_HUGE_VMALLOC */
 
+#define VM_AREA_ALLOC_PAGES_FALLBACK 0x1
+
 bool is_vmalloc_addr(const void *x)
 {
 	unsigned long addr = (unsigned long)kasan_reset_tag(x);
@@ -604,8 +606,13 @@ int __vmap_pages_range_noflush(unsigned long addr, unsigned long end,
 	WARN_ON(page_shift < PAGE_SHIFT);
 
 	if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMALLOC) ||
-			page_shift == PAGE_SHIFT)
-		return vmap_small_pages_range_noflush(addr, end, prot, pages);
+			page_shift == PAGE_SHIFT ||
+			page_private(pages[0]) == VM_AREA_ALLOC_PAGES_FALLBACK) {
+		int ret = vmap_small_pages_range_noflush(addr, end, prot, pages);
+
+		set_page_private(pages[0], 0);
+		return ret;
+	}
 
 	for (i = 0; i < nr; i += 1U << (page_shift - PAGE_SHIFT)) {
 		int err;
@@ -3583,6 +3590,7 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 
 			/* fall back to the zero order allocations */
 			alloc_gfp |= __GFP_NOFAIL;
+			fallback = true;
 			order = 0;
 			continue;
 		}
@@ -3608,6 +3616,8 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 		cond_resched();
 		nr_allocated += 1U << order;
 	}
+	if (nr_allocated && fallback)
+		set_page_private(pages[0], VM_AREA_ALLOC_PAGES_FALLBACK);
 
 	return nr_allocated;
 }
-- 
2.34.1

