Message-Id: <20210415144414.326518538@linuxfoundation.org>
Date: Thu, 15 Apr 2021 16:47:19 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Hugh Dickins <hughd@...gle.com>,
Michal Hocko <mhocko@...e.com>,
David Rientjes <rientjes@...gle.com>,
Gerald Schaefer <gerald.schaefer@...ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Chen si <cici.cs@...baba-inc.com>,
Baoyou Xie <baoyou.xie@...yun.com>,
Wen Yang <wenyang@...ux.alibaba.com>,
Zijiang Huang <zijiang.hzj@...baba-inc.com>
Subject: [PATCH 4.9 27/47] mm: add cond_resched() in gather_pte_stats()
From: Hugh Dickins <hughd@...gle.com>
commit a66c0410b97c07a5708881198528ce724f7a3226 upstream.
The other pagetable walks in task_mmu.c have a cond_resched() after
walking their ptes: add a cond_resched() in gather_pte_stats() too, for
reading /proc/<id>/numa_maps. Only pagemap_pmd_range() has a
cond_resched() in its (unusually expensive) pmd_trans_huge case: more
should probably be added, but leave them unchanged for now.
Link: http://lkml.kernel.org/r/alpine.LSU.2.11.1612052157400.13021@eggly.anvils
Signed-off-by: Hugh Dickins <hughd@...gle.com>
Acked-by: Michal Hocko <mhocko@...e.com>
Cc: David Rientjes <rientjes@...gle.com>
Cc: Gerald Schaefer <gerald.schaefer@...ibm.com>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
Reported-by: Chen si <cici.cs@...baba-inc.com>
Signed-off-by: Baoyou Xie <baoyou.xie@...yun.com>
Signed-off-by: Wen Yang <wenyang@...ux.alibaba.com>
Signed-off-by: Zijiang Huang <zijiang.hzj@...baba-inc.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
fs/proc/task_mmu.c | 1 +
1 file changed, 1 insertion(+)
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1609,6 +1609,7 @@ static int gather_pte_stats(pmd_t *pmd,
 	} while (pte++, addr += PAGE_SIZE, addr != end);
 	pte_unmap_unlock(orig_pte, ptl);
+	cond_resched();
 	return 0;
 }

 #ifdef CONFIG_HUGETLB_PAGE