Message-ID: <531c27d3-3d12-25f1-b461-48240cf8568f@huawei.com>
Date: Wed, 6 Jul 2022 11:07:57 +0800
From: Miaohe Lin <linmiaohe@...wei.com>
To: HORIGUCHI NAOYA(堀口 直也)
<naoya.horiguchi@....com>
CC: Naoya Horiguchi <naoya.horiguchi@...ux.dev>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...hat.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
Liu Shixin <liushixin2@...wei.com>,
Yang Shi <shy828301@...il.com>,
Oscar Salvador <osalvador@...e.de>,
Muchun Song <songmuchun@...edance.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [mm-unstable PATCH v4 3/9] mm/hugetlb: make pud_huge() and
follow_huge_pud() aware of non-present pud entry
On 2022/7/5 17:04, HORIGUCHI NAOYA(堀口 直也) wrote:
> On Tue, Jul 05, 2022 at 10:46:09AM +0800, Miaohe Lin wrote:
>> On 2022/7/4 9:33, Naoya Horiguchi wrote:
>>> From: Naoya Horiguchi <naoya.horiguchi@....com>
>>>
>>> follow_pud_mask() does not support non-present pud entries now. As far as
>>> I tested on an x86_64 server, follow_pud_mask() still simply returns
>>> no_page_table() for non-present pud entries due to pud_bad(), so no severe
>>> user-visible effect should happen. But generally we should call
>>> follow_huge_pud() for non-present pud entries of 1GB hugetlb pages.
>>>
>>> Update pud_huge() and follow_huge_pud() to handle non-present pud entries.
>>> The changes are similar to the previous work on pmd entries, commit e66f17ff7177
>>> ("mm/hugetlb: take page table lock in follow_huge_pmd()") and commit
>>> cbef8478bee5 ("mm/hugetlb: pmd_huge() returns true for non-present hugepage").
>>>
>>> Signed-off-by: Naoya Horiguchi <naoya.horiguchi@....com>
>>> ---
>>> v2 -> v3:
>>> - fixed typos in subject and description,
>>> - added comment on pud_huge(),
>>> - added comment about fallback for hwpoisoned entry,
>>> - updated initial check about FOLL_{PIN,GET} flags.
>>> ---
>>> arch/x86/mm/hugetlbpage.c | 8 +++++++-
>>> mm/hugetlb.c | 32 ++++++++++++++++++++++++++++++--
>>> 2 files changed, 37 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
>>> index 509408da0da1..6b3033845c6d 100644
>>> --- a/arch/x86/mm/hugetlbpage.c
>>> +++ b/arch/x86/mm/hugetlbpage.c
>>> @@ -30,9 +30,15 @@ int pmd_huge(pmd_t pmd)
>>> (pmd_val(pmd) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
>>> }
>>>
>>> +/*
>>> + * pud_huge() returns 1 if @pud is hugetlb related entry, that is normal
>>> + * hugetlb entry or non-present (migration or hwpoisoned) hugetlb entry.
>>> + * Otherwise, returns 0.
>>> + */
>>> int pud_huge(pud_t pud)
>>> {
>>> - return !!(pud_val(pud) & _PAGE_PSE);
>>> + return !pud_none(pud) &&
>>> + (pud_val(pud) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
>>> }
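
A side note for anyone following along: the follow_huge_pud() hunk is not
quoted above, but my understanding of the description is that it now follows
the same pattern as follow_huge_pmd() after commit e66f17ff7177: take the
huge pte lock, grab the page only for a present entry, wait and retry on a
migration entry, and fall back (so the caller ends up with no page, as for
no_page_table()) for a hwpoisoned entry. Roughly like the sketch below,
treating the function names and details as my recollection of the pmd
counterpart rather than the exact hunk:

	struct page *
	follow_huge_pud(struct mm_struct *mm, unsigned long address,
			pud_t *pud, int flags)
	{
		struct page *page = NULL;
		spinlock_t *ptl;
		pte_t pte;

		/* FOLL_GET/FOLL_PIN sanity checks omitted in this sketch */
	retry:
		ptl = huge_pte_lock(hstate_sizelog(PUD_SHIFT), mm, (pte_t *)pud);
		if (!pud_huge(*pud))
			goto out;
		pte = huge_ptep_get((pte_t *)pud);
		if (pte_present(pte)) {
			/* normal 1GB entry: pick the right subpage and pin it */
			page = pud_page(*pud) +
			       ((address & ~PUD_MASK) >> PAGE_SHIFT);
			if (!try_grab_page(page, flags))
				page = NULL;
		} else if (is_hugetlb_entry_migration(pte)) {
			/* wait for the migration to complete, then retry */
			spin_unlock(ptl);
			__migration_entry_wait(mm, (pte_t *)pud, ptl);
			goto retry;
		}
		/* hwpoisoned entry: leave page == NULL */
	out:
		spin_unlock(ptl);
		return page;
	}
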
>>
>> Question: Is aarch64 supported too? The aarch64 version of pud_huge() seems
>> to meet the requirement naturally, as far as I can see.
>
> I think that if pmd_huge() and pud_huge() return true for non-present
> pmd/pud entries, that's OK. Otherwise we need an update to support the
> new feature.
>
> On aarch64, the bits in pte/pmd/pud used by {pmd,pud}_present() and
> {pmd,pud}_huge() seem not to overlap with the bit range used for the swap
> type and swap offset, so that is probably fine. But I recommend testing on
> arm64 if you have access to aarch64 servers.
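For reference, my reading of the arm64 pud_huge() (quoting from memory, so
please double-check against arch/arm64/mm/hugetlbpage.c) is roughly:

	int pud_huge(pud_t pud)
	{
	#ifndef __PAGETABLE_PMD_FOLDED
		/*
		 * True for any non-zero entry without the table bit set,
		 * whether the entry is present or not.
		 */
		return pud_val(pud) && !(pud_val(pud) & PUD_TABLE_BIT);
	#else
		return 0;
	#endif
	}

If that is accurate, it would also return 1 for a non-present (swap-format)
hugetlb entry, which matches what you describe about the bit layout.
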
I see. This series is intended to enable 1GB hugepage support on x86, so if
someone wants to use it on other architectures, it would be better to test
there first. ;)
Thanks.
>
>>
>> Anyway, this patch looks good to me.
>>
>> Reviewed-by: Miaohe Lin <linmiaohe@...wei.com>
>
> Thank you for reviewing.
>
> - Naoya Horiguchi
>