Message-ID: <5797916B.2020008@gmail.com>
Date: Wed, 27 Jul 2016 00:35:55 +0800
From: hejianet <hejianet@...il.com>
To: Dave Hansen <dave.hansen@...ux.intel.com>, linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Michal Hocko <mhocko@...e.com>,
Paul Gortmaker <paul.gortmaker@...driver.com>
Subject: Re: [RFC PATCH] mm/hugetlb: Avoid soft lockup in set_max_huge_pages()
On 7/26/16 11:58 PM, Dave Hansen wrote:
> On 07/26/2016 08:44 AM, Jia He wrote:
>> This patch fixes such a soft lockup. I thought it is safe to call
>> cond_resched() because alloc_fresh_gigantic_page and alloc_fresh_huge_page
>> are outside the spin_lock/unlock section.
> Yikes. So the call site for both the things you patch is this:
>
>> while (count > persistent_huge_pages(h)) {
> ...
>> spin_unlock(&hugetlb_lock);
>> if (hstate_is_gigantic(h))
>> ret = alloc_fresh_gigantic_page(h, nodes_allowed);
>> else
>> ret = alloc_fresh_huge_page(h, nodes_allowed);
>> spin_lock(&hugetlb_lock);
> and you choose to patch both of the alloc_*() functions. Why not just
> fix it at the common call site? Seems like that
> spin_lock(&hugetlb_lock) could be a cond_resched_lock() which would fix
> both cases.
>
> Also, putting that cond_resched() inside the for_each_node*() loop is an
> odd choice. It seems to indicate that the loops can take a long time,
> which really isn't the case. The _loop_ isn't long, right?
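
For reference, the fix at the common call site could look roughly like the
sketch below (this is only an illustration of the idea being discussed, not
the patch that was eventually applied; the surrounding code is quoted from
set_max_huge_pages() above, and the cond_resched() placement is an assumption):

```
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ set_max_huge_pages()
 	while (count > persistent_huge_pages(h)) {
 		...
 		spin_unlock(&hugetlb_lock);
+
+		/* yield the CPU here, outside the lock, to avoid the
+		 * soft lockup when many pages are being allocated */
+		cond_resched();
+
 		if (hstate_is_gigantic(h))
 			ret = alloc_fresh_gigantic_page(h, nodes_allowed);
 		else
 			ret = alloc_fresh_huge_page(h, nodes_allowed);
 		spin_lock(&hugetlb_lock);
```

Alternatively, as suggested, the reacquisition itself could use
cond_resched_lock(&hugetlb_lock) at the top of the next loop iteration, which
drops and retakes the lock only when a reschedule is actually pending.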
Yes, thanks for the suggestions.
Will send out V2 later.
B.R.