Message-ID: <lsq.1479082460.197683798@decadent.org.uk>
Date: Mon, 14 Nov 2016 00:14:20 +0000
From: Ben Hutchings <ben@...adent.org.uk>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
CC: akpm@...ux-foundation.org, "Michal Hocko" <mhocko@...e.com>,
"Dave Hansen" <dave.hansen@...ux.intel.com>,
"Linus Torvalds" <torvalds@...ux-foundation.org>,
"Naoya Horiguchi" <n-horiguchi@...jp.nec.com>,
"Paul Gortmaker" <paul.gortmaker@...driver.com>,
"Mike Kravetz" <mike.kravetz@...cle.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
"Jia He" <hejianet@...il.com>
Subject: [PATCH 3.16 142/346] mm/hugetlb: avoid soft lockup in
set_max_huge_pages()
3.16.39-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Jia He <hejianet@...il.com>
commit 649920c6ab93429b94bc7c1aa7c0e8395351be32 upstream.
On powerpc servers with large memory (32TB), we observed several soft
lockups for hugepages under stress tests.
The call traces are as follows:
1.
get_page_from_freelist+0x2d8/0xd50
__alloc_pages_nodemask+0x180/0xc20
alloc_fresh_huge_page+0xb0/0x190
set_max_huge_pages+0x164/0x3b0
2.
prep_new_huge_page+0x5c/0x100
alloc_fresh_huge_page+0xc8/0x190
set_max_huge_pages+0x164/0x3b0
This patch fixes such soft lockups. It is safe to call cond_resched()
there because the call is outside the spin_lock/unlock section.
Link: http://lkml.kernel.org/r/1469674442-14848-1-git-send-email-hejianet@gmail.com
Signed-off-by: Jia He <hejianet@...il.com>
Reviewed-by: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
Acked-by: Michal Hocko <mhocko@...e.com>
Acked-by: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: Mike Kravetz <mike.kravetz@...cle.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Cc: Paul Gortmaker <paul.gortmaker@...driver.com>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
Signed-off-by: Ben Hutchings <ben@...adent.org.uk>
---
mm/hugetlb.c | 4 ++++
1 file changed, 4 insertions(+)
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1655,6 +1655,10 @@ static unsigned long set_max_huge_pages(
* and reducing the surplus.
*/
spin_unlock(&hugetlb_lock);
+
+ /* yield cpu to avoid soft lockup */
+ cond_resched();
+
if (hstate_is_gigantic(h))
ret = alloc_fresh_gigantic_page(h, nodes_allowed);
else
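For context, the hunk above lands inside the growth loop of set_max_huge_pages(), which can run for a very long time on large-memory machines. The sketch below is a simplified paraphrase of that loop (not the verbatim 3.16 source; the loop body and error handling are condensed), showing where cond_resched() is placed relative to the lock:

```c
/*
 * Hedged sketch of the set_max_huge_pages() growth loop in
 * mm/hugetlb.c, simplified for illustration.  One iteration
 * allocates one huge page; on a 32TB box this loop can run for
 * millions of iterations, which is what trips the soft-lockup
 * watchdog without a reschedule point.
 */
while (count > persistent_huge_pages(h)) {
	/*
	 * The lock must be dropped before the allocation sleeps,
	 * and cond_resched() may only be called with it released.
	 */
	spin_unlock(&hugetlb_lock);

	/* yield cpu to avoid soft lockup (the fix in this patch) */
	cond_resched();

	if (hstate_is_gigantic(h))
		ret = alloc_fresh_gigantic_page(h, nodes_allowed);
	else
		ret = alloc_fresh_huge_page(h, nodes_allowed);

	spin_lock(&hugetlb_lock);
	if (!ret)
		goto out;	/* allocation failed; stop growing */
}
```

Because cond_resched() is invoked once per allocated page and only while hugetlb_lock is released, it bounds the stretch of kernel time between scheduling points without changing the allocation logic itself.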