Message-ID: <3b83b483-34d7-bf2a-a3ef-a40f2f4b0076@oracle.com>
Date: Fri, 15 Apr 2022 08:11:47 -0700
From: Mike Kravetz <mike.kravetz@...cle.com>
To: Naoya Horiguchi <naoya.horiguchi@...ux.dev>,
Miaohe Lin <linmiaohe@...wei.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Yang Shi <shy828301@...il.com>,
Dan Carpenter <dan.carpenter@...cle.com>,
naoya.horiguchi@....com,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 4/3] mm, hugetlb, hwpoison: separate branch for free and
in-use hugepage
On 4/14/22 21:18, Naoya Horiguchi wrote:
> From: Naoya Horiguchi <naoya.horiguchi@....com>
>
> We know that HPageFreed pages should have a page refcount of 0, so
> get_page_unless_zero() always fails and returns 0. Explicitly separate
> the branches based on page state for a minor optimization and better readability.
>
> Suggested-by: Mike Kravetz <mike.kravetz@...cle.com>
> Suggested-by: Miaohe Lin <linmiaohe@...wei.com>
> Signed-off-by: Naoya Horiguchi <naoya.horiguchi@....com>
> ---
> mm/hugetlb.c | 4 +++-
> mm/memory-failure.c | 4 +++-
> 2 files changed, 6 insertions(+), 2 deletions(-)
Thank you!
Reviewed-by: Mike Kravetz <mike.kravetz@...cle.com>
--
Mike Kravetz
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index e38cbfdf3e61..3638f166e554 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -6786,7 +6786,9 @@ int get_hwpoison_huge_page(struct page *page, bool *hugetlb)
> spin_lock_irq(&hugetlb_lock);
> if (PageHeadHuge(page)) {
> *hugetlb = true;
> - if (HPageFreed(page) || HPageMigratable(page))
> + if (HPageFreed(page))
> + ret = 0;
> + else if (HPageMigratable(page))
> ret = get_page_unless_zero(page);
> else
> ret = -EBUSY;
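
As an aside for readers of the archive: the reason the HPageFreed case can
skip the call entirely is that get_page_unless_zero() only takes a reference
when the refcount is already non-zero. A minimal sketch of that helper
(paraphrased from memory of include/linux/mm.h, not quoted verbatim from any
particular tree):

	/*
	 * Take a reference on the page unless its refcount is zero.
	 * Returns true if a reference was taken, false otherwise.
	 */
	static inline bool get_page_unless_zero(struct page *page)
	{
		return page_ref_add_unless(page, 1, 0);
	}

For a free hugepage the refcount is 0, so this can never succeed; returning 0
directly is equivalent but makes the intent explicit.
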
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index 5e3ad640f5bb..661079a37f29 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -1517,7 +1517,9 @@ int __get_huge_page_for_hwpoison(unsigned long pfn, int flags)
> if (flags & MF_COUNT_INCREASED) {
> ret = 1;
> count_increased = true;
> - } else if (HPageFreed(head) || HPageMigratable(head)) {
> + } else if (HPageFreed(head)) {
> + ret = 0;
> + } else if (HPageMigratable(head)) {
> ret = get_page_unless_zero(head);
> if (ret)
> count_increased = true;