Message-Id: <20160818135938.565052402@linuxfoundation.org>
Date: Thu, 18 Aug 2016 15:59:25 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Jia He <hejianet@...il.com>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
Michal Hocko <mhocko@...e.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Paul Gortmaker <paul.gortmaker@...driver.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: [PATCH 4.7 148/186] mm/hugetlb: avoid soft lockup in set_max_huge_pages()
4.7-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jia He <hejianet@...il.com>
commit 649920c6ab93429b94bc7c1aa7c0e8395351be32 upstream.
On powerpc servers with large memory (32TB), we observed several soft
lockups for hugepages under stress tests.
The call traces are as follows:
1.
get_page_from_freelist+0x2d8/0xd50
__alloc_pages_nodemask+0x180/0xc20
alloc_fresh_huge_page+0xb0/0x190
set_max_huge_pages+0x164/0x3b0
2.
prep_new_huge_page+0x5c/0x100
alloc_fresh_huge_page+0xc8/0x190
set_max_huge_pages+0x164/0x3b0
This patch fixes such soft lockups. It is safe to call cond_resched()
there because it is outside the spin_lock/unlock section.
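
[Editor's note: the pattern relied on here can be illustrated outside the
kernel. Below is a minimal userspace sketch of the same idea, not the
kernel's actual code; pool_lock, nr_pages, alloc_fresh_page() and
set_max_pages() are illustrative stand-ins, and sched_yield() plays the
role of cond_resched(). The point is that the lock is already dropped
around the slow allocation, so yielding the CPU there is safe.]

    #include <pthread.h>
    #include <sched.h>
    #include <stdlib.h>

    static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;
    static unsigned long nr_pages;

    static void *alloc_fresh_page(void)
    {
            /* stand-in for alloc_fresh_huge_page(); may be slow */
            return malloc(4096);
    }

    static int set_max_pages(unsigned long count)
    {
            pthread_mutex_lock(&pool_lock);
            while (nr_pages < count) {
                    void *page;

                    /*
                     * Drop the lock before the potentially slow
                     * allocation, as the kernel drops hugetlb_lock.
                     */
                    pthread_mutex_unlock(&pool_lock);

                    /* yield cpu to avoid monopolizing it (cond_resched()) */
                    sched_yield();

                    page = alloc_fresh_page();
                    if (!page)
                            return -1;    /* lock is not held here */

                    pthread_mutex_lock(&pool_lock);
                    /* a real pool would link the page into a free list */
                    nr_pages++;
            }
            pthread_mutex_unlock(&pool_lock);
            return 0;
    }

Without the yield, a single thread growing a very large pool can hog the
CPU for long enough to trip the soft-lockup watchdog, which is exactly
what the traces above show.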
Link: http://lkml.kernel.org/r/1469674442-14848-1-git-send-email-hejianet@gmail.com
Signed-off-by: Jia He <hejianet@...il.com>
Reviewed-by: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
Acked-by: Michal Hocko <mhocko@...e.com>
Acked-by: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: Mike Kravetz <mike.kravetz@...cle.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Cc: Paul Gortmaker <paul.gortmaker@...driver.com>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
mm/hugetlb.c | 4 ++++
1 file changed, 4 insertions(+)
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2214,6 +2214,10 @@ static unsigned long set_max_huge_pages(
* and reducing the surplus.
*/
spin_unlock(&hugetlb_lock);
+
+ /* yield cpu to avoid soft lockup */
+ cond_resched();
+
if (hstate_is_gigantic(h))
ret = alloc_fresh_gigantic_page(h, nodes_allowed);
else