Message-ID: <8b40c33f-1bdf-2cda-5948-cf433302514e@oracle.com>
Date: Wed, 16 Mar 2022 15:31:57 -0700
From: Mike Kravetz <mike.kravetz@...cle.com>
To: Miaohe Lin <linmiaohe@...wei.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Cc: HORIGUCHI NAOYA <naoya.horiguchi@....com>,
Oscar Salvador <osalvador@...e.de>,
Michal Hocko <mhocko@...e.com>,
Andrew Morton <akpm@...ux-foundation.org>,
stable@...r.kernel.org
Subject: Re: [PATCH] hugetlb: do not demote poisoned hugetlb pages
On 3/8/22 05:43, Miaohe Lin wrote:
> On 2022/3/8 5:57, Mike Kravetz wrote:
>> It is possible for poisoned hugetlb pages to reside on the free lists.
>> The huge page allocation routines which dequeue entries from the free
>> lists make a point of avoiding poisoned pages. There is no such check
>> and avoidance in the demote code path.
>>
>> If a hugetlb page is on a free list, poison will only be set in the
>> head page rather than the page with the actual error. If such a page
>> is demoted, the poison flag may follow the wrong page. A page without
>> an error could end up with poison set, while the page with the actual
>> error could be left without the flag.
>>
>> Check for poison before attempting to demote a hugetlb page. Also,
>> return -EBUSY to the caller if only poisoned pages are on the free list.
>>
>> Fixes: 8531fc6f52f5 ("hugetlb: add hugetlb demote page support")
>> Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
>> Cc: <stable@...r.kernel.org>
>> ---
>> mm/hugetlb.c | 17 ++++++++++-------
>> 1 file changed, 10 insertions(+), 7 deletions(-)
>>
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index b34f50156f7e..f8ca7cca3c1a 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -3475,7 +3475,6 @@ static int demote_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed)
>> {
>> int nr_nodes, node;
>> struct page *page;
>> - int rc = 0;
>>
>> lockdep_assert_held(&hugetlb_lock);
>>
>> @@ -3486,15 +3485,19 @@ static int demote_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed)
>> }
>>
>> for_each_node_mask_to_free(h, nr_nodes, node, nodes_allowed) {
>> - if (!list_empty(&h->hugepage_freelists[node])) {
>> - page = list_entry(h->hugepage_freelists[node].next,
>> - struct page, lru);
>> - rc = demote_free_huge_page(h, page);
>> - break;
>> + list_for_each_entry(page, &h->hugepage_freelists[node], lru) {
>> + if (PageHWPoison(page))
>> + continue;
>> +
>> + return demote_free_huge_page(h, page);
>
> It seems this patch is not ideal. Memory failure can hit a hugetlb page at any
> time without holding the hugetlb_lock, so the page might become HWPoison just
> after the check. But this patch should handle the common case. Many thanks for
> your work. :)
>
Correct, this patch handles the common case of not demoting a hugetlb
page if HWPoison is set. This is similar to the check in the dequeue
path used when allocating a huge page for application use.
As you point out, work still needs to be done to better coordinate
memory failure with demote, as well as with huge page freeing. As you
know, Naoya is working on this now. It is unclear whether that work will
be limited to the memory error handling code, or whether greater
coordination with the hugetlb code will be required.
Unless you have objections, I believe this patch should move forward and
be backported to stable trees. If we determine that more coordination
between memory error and hugetlb code is needed, that can be added later.
--
Mike Kravetz