Message-Id: <20240630080255.1901-1-shaohaojize@126.com>
Date: Sun, 30 Jun 2024 16:02:55 +0800
From: shaohaojize@....com
To: akpm@...ux-foundation.org
Cc: linux-kernel@...r.kernel.org,
jiaoxupo@....com,
berlin@....com,
zhang.zhansheng@....com,
wang.yul@....com,
linux-mm@...ck.org,
"zhang.chun" <zhang.chuna@....com>
Subject: [PATCH] mm: fix kmap_high deadlock
From: "zhang.chun" <zhang.chuna@....com>
I work with zhangzhansheng (from H3C). We found that a deadlock can
occur when kmap_high() runs concurrently with kmap_XXX()/kunmap_XXX()
on different cores. In the path

  kmap_high() -> map_new_virtual() -> flush_all_zero_pkmaps() ->
    flush_tlb_kernel_range() -> on_each_cpu()

kmap_high() holds kmap_lock while it waits for the other cores to
acknowledge the flush IPI. But those cores may be spinning on
kmap_lock with interrupts disabled, so they can never acknowledge it,
and the system deadlocks. I think it is necessary to drop kmap_lock
before calling flush_tlb_kernel_range().
Like this:
spin_unlock(&kmap_lock);
flush_tlb_kernel_range(xxx);
spin_lock(&kmap_lock);
Signed-off-by: zhang.chun <zhang.chuna@....com>
---
 mm/highmem.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/mm/highmem.c b/mm/highmem.c
index bd48ba445dd4..12d20e551579 100644
--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -221,7 +221,10 @@ static void flush_all_zero_pkmaps(void)
 		need_flush = 1;
 	}
-	if (need_flush)
+	if (need_flush) {
+		spin_unlock(&kmap_lock);
 		flush_tlb_kernel_range(PKMAP_ADDR(0), PKMAP_ADDR(LAST_PKMAP));
+		spin_lock(&kmap_lock);
+	}
 }
 
 void __kmap_flush_unused(void)
--
2.34.1