Message-Id: <20220302013825.2290315-1-wangjianxing@loongson.cn>
Date: Tue, 1 Mar 2022 20:38:25 -0500
From: wangjianxing <wangjianxing@...ngson.cn>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
wangjianxing <wangjianxing@...ngson.cn>
Subject: [PATCH 1/1] mm/page_alloc: add scheduling point to free_unref_page_list
Freeing a large list of pages can starve the rcu_sched kthread on
non-preemptible kernels:
rcu: rcu_sched kthread starved for 5359 jiffies! g454793 f0x0
RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=19
[...]
Call Trace:
free_unref_page_list+0x19c/0x270
release_pages+0x3cc/0x498
tlb_flush_mmu_free+0x44/0x70
zap_pte_range+0x450/0x738
unmap_page_range+0x108/0x240
unmap_vmas+0x74/0xf0
unmap_region+0xb0/0x120
do_munmap+0x264/0x438
vm_munmap+0x58/0xa0
sys_munmap+0x10/0x20
syscall_common+0x24/0x38
Signed-off-by: wangjianxing <wangjianxing@...ngson.cn>
---
mm/page_alloc.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3589febc6..1b96421c8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3479,6 +3479,9 @@ void free_unref_page_list(struct list_head *list)
 		 */
 		if (++batch_count == SWAP_CLUSTER_MAX) {
 			local_unlock_irqrestore(&pagesets.lock, flags);
+
+			cond_resched();
+
 			batch_count = 0;
 			local_lock_irqsave(&pagesets.lock, flags);
 		}
--
2.27.0