Message-ID: <20160728064128.GA11208@hori1.linux.bs1.fc.nec.co.jp>
Date: Thu, 28 Jul 2016 06:41:28 +0000
From: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
To: Jia He <hejianet@...il.com>
CC: "linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Kravetz <mike.kravetz@...cle.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Michal Hocko <mhocko@...e.com>,
"Dave Hansen" <dave.hansen@...ux.intel.com>,
Paul Gortmaker <paul.gortmaker@...driver.com>
Subject: Re: [PATCH V2] mm/hugetlb: Avoid soft lockup in set_max_huge_pages()
On Thu, Jul 28, 2016 at 10:54:02AM +0800, Jia He wrote:
> On powerpc servers with large memory (32TB), we observed several soft
> lockups in hugepage allocation under stress tests.
> The call traces are as follows:
> 1.
> get_page_from_freelist+0x2d8/0xd50
> __alloc_pages_nodemask+0x180/0xc20
> alloc_fresh_huge_page+0xb0/0x190
> set_max_huge_pages+0x164/0x3b0
>
> 2.
> prep_new_huge_page+0x5c/0x100
> alloc_fresh_huge_page+0xc8/0x190
> set_max_huge_pages+0x164/0x3b0
>
> This patch fixes such soft lockups. It is safe to call cond_resched()
> there because it is outside the spin_lock/unlock section.
>
> Signed-off-by: Jia He <hejianet@...il.com>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
> Cc: Mike Kravetz <mike.kravetz@...cle.com>
> Cc: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
> Cc: Michal Hocko <mhocko@...e.com>
> Cc: Dave Hansen <dave.hansen@...ux.intel.com>
> Cc: Paul Gortmaker <paul.gortmaker@...driver.com>
Looks good to me.
Reviewed-by: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
Thanks,
Naoya Horiguchi
>
> ---
> Changes in V2: move cond_resched() to a common call site in set_max_huge_pages()
>
> mm/hugetlb.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index abc1c5f..9284280 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2216,6 +2216,10 @@ static unsigned long set_max_huge_pages(struct hstate *h, unsigned long count,
> * and reducing the surplus.
> */
> spin_unlock(&hugetlb_lock);
> +
> + /* yield cpu to avoid soft lockup */
> + cond_resched();
> +
> if (hstate_is_gigantic(h))
> ret = alloc_fresh_gigantic_page(h, nodes_allowed);
> else
> --
> 2.5.0
>
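For illustration, here is a minimal, self-contained sketch of the pattern the
patch above applies (the pool_lock and grow_pool() names are hypothetical, not
the actual hugetlb code): drop the spinlock before each potentially slow
per-page allocation, so that cond_resched() can safely yield the CPU between
iterations.

#include <linux/gfp.h>
#include <linux/sched.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(pool_lock);	/* stands in for hugetlb_lock */

static int grow_pool(unsigned long count)
{
	struct page *page;

	spin_lock(&pool_lock);
	while (count--) {
		/*
		 * The allocation may sleep and may take a long time on
		 * large machines, so it must happen outside the spinlock;
		 * that also makes it safe to reschedule here.
		 */
		spin_unlock(&pool_lock);

		/* yield cpu to avoid soft lockup */
		cond_resched();

		page = alloc_page(GFP_KERNEL);
		if (!page)
			return -ENOMEM;

		spin_lock(&pool_lock);
		/* ... add the page to the pool and update counters ... */
	}
	spin_unlock(&pool_lock);
	return 0;
}

On a 32TB machine this kind of loop can run for millions of iterations, which
is how the reported soft lockups were triggered; the unlock/cond_resched()
pair gives the scheduler a chance to run between allocations.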