Message-Id: <20210908132727.16165-1-david@redhat.com>
Date: Wed, 8 Sep 2021 15:27:27 +0200
From: David Hildenbrand <david@...hat.com>
To: linux-kernel@...r.kernel.org
Cc: David Hildenbrand <david@...hat.com>,
Ping Fang <pifang@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Uladzislau Rezki <urezki@...il.com>,
Roman Gushchin <guro@...com>, Michal Hocko <mhocko@...e.com>,
Oscar Salvador <osalvador@...e.de>, linux-mm@...ck.org
Subject: [PATCH v1] mm/vmalloc: fix exact allocations with an alignment > 1
find_vmap_lowest_match() is imprecise: it won't always find "the first free
block ... that will accomplish the request" if an alignment > 1 was
specified, notably also when the alignment is PAGE_SIZE. Unfortunately, the
vmalloc data structures were designed to propagate only the maximum free
size, without any alignment information, through the tree, so it's hard to
make the search precise again once an alignment > 1 is specified.
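For reference, the free vmap_area rbtree is augmented only with the maximum
free-area size per subtree; roughly (a simplified sketch along the lines of
the helpers in mm/vmalloc.c, not the verbatim upstream code):

/*
 * Each node of the free vmap_area rbtree caches the largest free-area
 * size found in its subtree. No alignment information is propagated,
 * so the search can only reason about raw sizes.
 */
static __always_inline unsigned long
va_size(struct vmap_area *va)
{
	return (va->va_end - va->va_start);
}

static __always_inline unsigned long
get_subtree_max_size(struct rb_node *node)
{
	struct vmap_area *va;

	va = rb_entry_safe(node, struct vmap_area, rb_node);
	return va ? va->subtree_max_size : 0;
}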
The issue is that, in order to be able to align the result later,
find_vmap_lowest_match() has to search for a slightly bigger area and might
skip applicable subtrees just by looking at the result of
get_subtree_max_size(). While this usually doesn't matter, it matters for
exact allocations where the free block exactly matches the request, as
performed by KASAN when onlining a memory block
(mm/kasan/shadow.c:kasan_mem_notifier()).
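To illustrate, the lookup inflates the requested size by the alignment
overhead before comparing against the subtree maxima; roughly (a simplified
sketch of the relevant part of find_vmap_lowest_match(), not the verbatim
upstream code):

	/*
	 * To be able to align the result later, search for an enlarged
	 * length. A subtree whose largest free block is exactly "size"
	 * bytes is then never descended into once align > 1, even though
	 * it could satisfy an exact, already-aligned request.
	 */
	length = size + align - 1;

	while (node) {
		va = rb_entry(node, struct vmap_area, rb_node);

		if (get_subtree_max_size(node->rb_left) >= length &&
				vstart < va->va_start) {
			node = node->rb_left;
		} else {
			if (is_within_this_va(va, size, align, vstart))
				return va;
			/*
			 * ... otherwise only consider the right subtree if
			 * its max size is >= length, again skipping exact
			 * fits ...
			 */
		}
	}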
In case we online memory blocks out of order (not from lowest to highest
address), find_vmap_lowest_match() with PAGE_SIZE alignment will reject an
exact allocation whose size corresponds exactly to a free block. (There are
some corner cases where it would still work: if we're lucky and hit the
first is_within_this_va() inside the while loop.)
[root@...0 fedora]# echo online > /sys/devices/system/memory/memory82/state
[root@...0 fedora]# echo online > /sys/devices/system/memory/memory83/state
[root@...0 fedora]# echo online > /sys/devices/system/memory/memory85/state
[root@...0 fedora]# echo online > /sys/devices/system/memory/memory84/state
[ 223.858115] vmap allocation for size 16777216 failed: use vmalloc=<size> to increase size
[ 223.859415] bash: vmalloc: allocation failure: 16777216 bytes, mode:0x6000c0(GFP_KERNEL), nodemask=(null),cpuset=/,mems_allowed=0
[ 223.860992] CPU: 4 PID: 1644 Comm: bash Kdump: loaded Not tainted 4.18.0-339.el8.x86_64+debug #1
[ 223.862149] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
[ 223.863580] Call Trace:
[ 223.863946] dump_stack+0x8e/0xd0
[ 223.864420] warn_alloc.cold.90+0x8a/0x1b2
[ 223.864990] ? zone_watermark_ok_safe+0x300/0x300
[ 223.865626] ? slab_free_freelist_hook+0x85/0x1a0
[ 223.866264] ? __get_vm_area_node+0x240/0x2c0
[ 223.866858] ? kfree+0xdd/0x570
[ 223.867309] ? kmem_cache_alloc_node_trace+0x157/0x230
[ 223.868028] ? notifier_call_chain+0x90/0x160
[ 223.868625] __vmalloc_node_range+0x465/0x840
[ 223.869230] ? mark_held_locks+0xb7/0x120
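For context, the exact allocation in question is the KASAN shadow for the
memory block being onlined; roughly (paraphrased from
mm/kasan/shadow.c:kasan_mem_notifier(), with local variable names shortened,
assuming 128 MiB memory blocks on x86-64 and a shadow scale of 1/8):

	/*
	 * The shadow of a 128 MiB memory block is 128 MiB / 8 = 16 MiB
	 * (the 16777216 bytes from the failure above), and
	 * [shadow_start, shadow_end) spans exactly shadow_size bytes:
	 * an exact allocation with PAGE_SIZE alignment.
	 */
	shadow_start = (unsigned long)kasan_mem_to_shadow(pfn_to_kaddr(start_pfn));
	shadow_size = (nr_pages << PAGE_SHIFT) >> KASAN_SHADOW_SCALE_SHIFT;
	shadow_end = shadow_start + shadow_size;

	ret = __vmalloc_node_range(shadow_size, PAGE_SIZE, shadow_start,
				   shadow_end, GFP_KERNEL, PAGE_KERNEL,
				   VM_NO_GUARD, pfn_to_nid(start_pfn),
				   __builtin_return_address(0));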
While we could fix this in kasan_mem_notifier() by passing an alignment
of "1", this is actually an implementation detail of vmalloc and should be
handled internally.
So use an alignment of 1 when calling find_vmap_lowest_match() for exact
allocations that are expected to succeed, provided the given range can
satisfy the alignment requirements.
Fixes: 68ad4a330433 ("mm/vmalloc.c: keep track of free blocks for vmap allocation")
Reported-by: Ping Fang <pifang@...hat.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Uladzislau Rezki (Sony) <urezki@...il.com>
Cc: Roman Gushchin <guro@...com>
Cc: Michal Hocko <mhocko@...e.com>
Cc: Oscar Salvador <osalvador@...e.de>
Cc: linux-mm@...ck.org
Signed-off-by: David Hildenbrand <david@...hat.com>
---
mm/vmalloc.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d5cd52805149..c6071f5f8de3 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1153,7 +1153,8 @@ is_within_this_va(struct vmap_area *va, unsigned long size,
 /*
  * Find the first free block(lowest start address) in the tree,
  * that will accomplish the request corresponding to passing
- * parameters.
+ * parameters. Note that with an alignment > 1, this function
+ * can be imprecise and skip applicable free blocks.
  */
 static __always_inline struct vmap_area *
 find_vmap_lowest_match(unsigned long size,
@@ -1396,7 +1397,15 @@ __alloc_vmap_area(unsigned long size, unsigned long align,
 	enum fit_type type;
 	int ret;
 
-	va = find_vmap_lowest_match(size, align, vstart);
+	/*
+	 * For exact allocations, ignore the alignment, such that
+	 * find_vmap_lowest_match() won't search for a bigger free area just
+	 * able to align later and consequently fail the search.
+	 */
+	if (vend - vstart == size && IS_ALIGNED(vstart, align))
+		va = find_vmap_lowest_match(size, 1, vstart);
+	else
+		va = find_vmap_lowest_match(size, align, vstart);
 	if (unlikely(!va))
 		return vend;
base-commit: 7d2a07b769330c34b4deabeed939325c77a7ec2f
--
2.31.1