Message-Id: <1469547868-9814-1-git-send-email-hejianet@gmail.com>
Date: Tue, 26 Jul 2016 23:44:28 +0800
From: Jia He <hejianet@...il.com>
To: linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org, Jia He <hejianet@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Michal Hocko <mhocko@...e.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Paul Gortmaker <paul.gortmaker@...driver.com>
Subject: [RFC PATCH] mm/hugetlb: Avoid soft lockup in set_max_huge_pages()
On large-memory (32TB) powerpc servers, we observed several soft lockups under
stress tests.
The call traces are as follows:
1.
get_page_from_freelist+0x2d8/0xd50
__alloc_pages_nodemask+0x180/0xc20
alloc_fresh_huge_page+0xb0/0x190
set_max_huge_pages+0x164/0x3b0
2.
prep_new_huge_page+0x5c/0x100
alloc_fresh_huge_page+0xc8/0x190
set_max_huge_pages+0x164/0x3b0
This patch fixes such soft lockups. I think it is safe to call
cond_resched() here because alloc_fresh_gigantic_page() and
alloc_fresh_huge_page() are called outside any spin_lock/unlock section.
Signed-off-by: Jia He <hejianet@...il.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
Cc: Mike Kravetz <mike.kravetz@...cle.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Cc: Michal Hocko <mhocko@...e.com>
Cc: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: Paul Gortmaker <paul.gortmaker@...driver.com>
---
mm/hugetlb.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index addfe4ac..d51759d 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1146,6 +1146,10 @@ static int alloc_fresh_gigantic_page(struct hstate *h,
 	for_each_node_mask_to_alloc(h, nr_nodes, node, nodes_allowed) {
 		page = alloc_fresh_gigantic_page_node(h, node);
+
+		/* yield cpu */
+		cond_resched();
+
 		if (page)
 			return 1;
 	}
 
@@ -1381,6 +1385,10 @@ static int alloc_fresh_huge_page(struct hstate *h, nodemask_t *nodes_allowed)
 	for_each_node_mask_to_alloc(h, nr_nodes, node, nodes_allowed) {
 		page = alloc_fresh_huge_page_node(h, node);
+
+		/* yield cpu */
+		cond_resched();
+
 		if (page) {
 			ret = 1;
 			break;
 		}
--
2.5.0
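
For clarity, this is roughly what the patched loop in alloc_fresh_huge_page()
looks like once the second hunk is applied. It is a sketch reconstructed from
the hunks above, not the literal kernel source: the local declarations and the
rest of the function body are abbreviated, and the exact surroundings depend
on the tree the patch is based on.

static int alloc_fresh_huge_page(struct hstate *h, nodemask_t *nodes_allowed)
{
	struct page *page;
	int nr_nodes, node;
	int ret = 0;

	for_each_node_mask_to_alloc(h, nr_nodes, node, nodes_allowed) {
		page = alloc_fresh_huge_page_node(h, node);

		/*
		 * Allocating one fresh huge page per node can take a
		 * long time on a large machine; yield the cpu between
		 * iterations so this kernel-side loop does not trip
		 * the soft lockup watchdog.
		 */
		cond_resched();

		if (page) {
			ret = 1;
			break;
		}
	}

	return ret;
}

Note that cond_resched() only actually reschedules when the current task has
been marked as needing it, so it is cheap in the common case; the constraint
is that it must not be called while holding a spinlock, which is why the
commit message points out that both callers are outside any
spin_lock/unlock section.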