Message-Id: <20191003090906.1261-1-dwagner@suse.de>
Date: Thu, 3 Oct 2019 11:09:06 +0200
From: Daniel Wagner <dwagner@...e.de>
To: linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org, linux-rt-users@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Daniel Wagner <dwagner@...e.de>,
Uladzislau Rezki <urezki@...il.com>
Subject: [PATCH] mm: vmalloc: Use the vmap_area_lock to protect ne_fit_preload_node
Replace the preempt_disable()/preempt_enable() pair with the
vmap_area_lock spinlock. Calling spin_lock() with preemption disabled
is illegal on -rt, where spinlocks are sleeping locks. Furthermore,
enabling preemption while holding the spin_lock doesn't really make
sense.
Fixes: 82dd23e84be3 ("mm/vmalloc.c: preload a CPU with one object for split purpose")
Cc: Uladzislau Rezki (Sony) <urezki@...il.com>
Signed-off-by: Daniel Wagner <dwagner@...e.de>
---
mm/vmalloc.c | 9 +++------
1 file changed, 3 insertions(+), 6 deletions(-)
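
Note (illustrative sketch, not part of the patch): the ordering this
change removes, side by side with the new one. On PREEMPT_RT,
spin_lock() is backed by a sleeping lock, so taking it while
preemption is disabled can sleep in what is supposed to be an atomic
context:

	/* Before: the lock is acquired with preemption still disabled,
	 * which is invalid on -rt, and preemption is re-enabled while
	 * the lock is already held.
	 */
	preempt_disable();
	/* ... preload ne_fit_preload_node ... */
	spin_lock(&vmap_area_lock);	/* may sleep on -rt */
	preempt_enable();		/* toggled under the lock */

	/* After: the preload itself runs under vmap_area_lock, so no
	 * separate preempt_disable()/preempt_enable() section is
	 * needed.
	 */
	spin_lock(&vmap_area_lock);
	/* ... preload ne_fit_preload_node ... */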
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 08c134aa7ff3..0d1175673583 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1091,11 +1091,11 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	 * Even if it fails we do not really care about that. Just proceed
 	 * as it is. "overflow" path will refill the cache we allocate from.
 	 */
-	preempt_disable();
+	spin_lock(&vmap_area_lock);
 	if (!__this_cpu_read(ne_fit_preload_node)) {
-		preempt_enable();
+		spin_unlock(&vmap_area_lock);
 		pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
-		preempt_disable();
+		spin_lock(&vmap_area_lock);
 
 		if (__this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva)) {
 			if (pva)
@@ -1103,9 +1103,6 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 		}
 	}
 
-	spin_lock(&vmap_area_lock);
-	preempt_enable();
-
 	/*
 	 * If an allocation fails, the "vend" address is
 	 * returned. Therefore trigger the overflow path.
--
2.16.4