Message-ID: <20240109091511.8299-1-jianfeng.w.wang@oracle.com>
Date: Tue, 9 Jan 2024 01:15:11 -0800
From: Jianfeng Wang <jianfeng.w.wang@...cle.com>
To: akpm@...ux-foundation.org
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Jianfeng Wang <jianfeng.w.wang@...cle.com>
Subject: [PATCH] mm, oom: Add lru_add_drain() in __oom_reap_task_mm()

The oom_reaper tries to reclaim additional memory owned by the oom
victim. In __oom_reap_task_mm(), it uses mmu_gather to batch page
freeing. After the oom_reaper was added, the mmu_gather feature gained
CONFIG_MMU_GATHER_NO_GATHER (in commit 952a31c9e6fa ("asm-generic/tlb:
Introduce CONFIG_HAVE_MMU_GATHER_NO_GATHER=y")), an option to skip
batched page freeing. If that option is set, tlb_batch_pages_flush(),
which is responsible for calling lru_add_drain(), is skipped during
tlb_finish_mmu(). Without it, pages could still be held in per-CPU
fbatches rather than being freed.

Fix this by calling lru_add_drain() before starting the mmu_gather
operation. This makes the code consistent with other places where
mmu_gather is used for freeing pages.

Signed-off-by: Jianfeng Wang <jianfeng.w.wang@...cle.com>
---
mm/oom_kill.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 9e6071fde34a..e2fcf4f062ea 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -537,6 +537,7 @@ static bool __oom_reap_task_mm(struct mm_struct *mm)
 			struct mmu_notifier_range range;
 			struct mmu_gather tlb;
 
+			lru_add_drain();
 			mmu_notifier_range_init(&range, MMU_NOTIFY_UNMAP, 0,
 						mm, vma->vm_start,
 						vma->vm_end);
--
2.42.1