Message-ID: <bb2e9a22-5850-470d-95ae-d04d171a484f@redhat.com>
Date: Wed, 4 Sep 2024 20:35:31 +0200
From: David Hildenbrand <david@...hat.com>
To: Yang Shi <yang@...amperecomputing.com>, catalin.marinas@....com,
will@...nel.org, muchun.song@...ux.dev, akpm@...ux-foundation.org
Cc: linux-arm-kernel@...ts.infradead.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [v2 PATCH 1/2] hugetlb: arm64: add mte support
On 04.09.24 19:57, Yang Shi wrote:
>
>
> On 9/3/24 2:35 PM, David Hildenbrand wrote:
>> On 03.09.24 18:46, Yang Shi wrote:
>>>
>>>
>>> On 9/2/24 7:33 AM, David Hildenbrand wrote:
>>>> On 21.08.24 20:47, Yang Shi wrote:
>>>>> Enable MTE support for hugetlb.
>>>>>
>>>>> The MTE page flags will be set on the head page only. When copying
>>>>> hugetlb folio, the tags for all tail pages will be copied when copying
>>>>> head page.
>>>>>
>>>>> When freeing hugetlb folio, the MTE flags will be cleared.
>>>>>
>>>>> Signed-off-by: Yang Shi <yang@...amperecomputing.com>
>>>>> ---
>>>>> arch/arm64/include/asm/hugetlb.h | 11 ++++++++++-
>>>>> arch/arm64/include/asm/mman.h | 3 ++-
>>>>> arch/arm64/kernel/hibernate.c | 7 +++++++
>>>>> arch/arm64/kernel/mte.c | 25 +++++++++++++++++++++++--
>>>>> arch/arm64/kvm/guest.c | 16 +++++++++++++---
>>>>> arch/arm64/kvm/mmu.c | 11 +++++++++++
>>>>> arch/arm64/mm/copypage.c | 25 +++++++++++++++++++++++--
>>>>> fs/hugetlbfs/inode.c | 2 +-
>>>>> 8 files changed, 90 insertions(+), 10 deletions(-)
>>>>>
>>>>> v2: * Reimplemented the patch to fix the comments from Catalin.
>>>>> * Added test cases (patch #2) per Catalin.
>>>>>
>>>>> diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
>>>>> index 293f880865e8..00a1f75d40ee 100644
>>>>> --- a/arch/arm64/include/asm/hugetlb.h
>>>>> +++ b/arch/arm64/include/asm/hugetlb.h
>>>>> @@ -11,6 +11,7 @@
>>>>> #define __ASM_HUGETLB_H
>>>>> #include <asm/cacheflush.h>
>>>>> +#include <asm/mte.h>
>>>>> #include <asm/page.h>
>>>>> #ifdef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
>>>>> @@ -20,7 +21,15 @@ extern bool arch_hugetlb_migration_supported(struct hstate *h);
>>>>> static inline void arch_clear_hugetlb_flags(struct folio *folio)
>>>>> {
>>>>> - clear_bit(PG_dcache_clean, &folio->flags);
>>>>> + const unsigned long clear_flags = BIT(PG_dcache_clean) |
>>>>> + BIT(PG_mte_tagged) | BIT(PG_mte_lock);
>>>>> +
>>>>> + if (!system_supports_mte()) {
>>>>> + clear_bit(PG_dcache_clean, &folio->flags);
>>>>> + return;
>>>>> + }
>>>>> +
>>>>> + folio->flags &= ~clear_flags;
>>>>> }
>>>>> #define arch_clear_hugetlb_flags arch_clear_hugetlb_flags
>>>>> diff --git a/arch/arm64/include/asm/mman.h b/arch/arm64/include/asm/mman.h
>>>>> index 5966ee4a6154..304dfc499e68 100644
>>>>> --- a/arch/arm64/include/asm/mman.h
>>>>> +++ b/arch/arm64/include/asm/mman.h
>>>>> @@ -28,7 +28,8 @@ static inline unsigned long arch_calc_vm_flag_bits(unsigned long flags)
>>>>> * backed by tags-capable memory. The vm_flags may be overridden by a
>>>>> * filesystem supporting MTE (RAM-based).
>>>>> */
>>>>> - if (system_supports_mte() && (flags & MAP_ANONYMOUS))
>>>>> + if (system_supports_mte() &&
>>>>> + (flags & (MAP_ANONYMOUS | MAP_HUGETLB)))
>>>>> return VM_MTE_ALLOWED;
>>>>> return 0;
>>>>> diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
>>>>> index 02870beb271e..722e76f29141 100644
>>>>> --- a/arch/arm64/kernel/hibernate.c
>>>>> +++ b/arch/arm64/kernel/hibernate.c
>>>>> @@ -266,10 +266,17 @@ static int swsusp_mte_save_tags(void)
>>>>> max_zone_pfn = zone_end_pfn(zone);
>>>>> for (pfn = zone->zone_start_pfn; pfn < max_zone_pfn; pfn++) {
>>>>> struct page *page = pfn_to_online_page(pfn);
>>>>> + struct folio *folio;
>>>>> if (!page)
>>>>> continue;
>>>>> + folio = page_folio(page);
>>>>> +
>>>>> + if (folio_test_hugetlb(folio) &&
>>>>> + !page_mte_tagged(&folio->page))
>>>>> + continue;
>>>>
>>>> Can we have folio_test_mte_tagged() whereby you make sure that it is
>>>> only used on hugetlb folios for now (VM_WARN_ON_ONCE), and then make
>>>> sure that nobody uses page_mte_tagged() on hugetlb folios
>>>> (VM_WARN_ON_ONCE)?
>>>
>>>
>>> IIUC, you mean something like the below?
>>>
>>> bool folio_test_mte_tagged(struct folio *folio)
>>> {
>>> VM_WARN_ON_ONCE(!folio_test_hugetlb(folio));
>>>
>>> return test_bit(PG_mte_tagged, &folio->page->flags);
>>
>> folio->flags
>>
>>> }
>>>
>>> bool page_mte_tagged(struct page *page)
>>> {
>>> struct folio *folio = page_folio(page);
>>>
>>> VM_WARN_ON_ONCE(folio_test_hugetlb(folio));
>>
>> Yes, but better as
>>
>> VM_WARN_ON_ONCE(folio_test_hugetlb(page_folio(page)));
>
> I see. But I think all the call sites for folio_test_mte_tagged()
> actually need to have folio_test_hugetlb() before it, so the warning
> seems not very useful other than catching some misuse.
... well, that's the whole reason for them :)
>
>>
>>>
>>> return test_bit(PG_mte_tagged, &page->flags);
>>> }
>>>
>>>>
>>>> Same for folio_set_mte_tagged() and other functions. We could slap a
>>>> "hugetlb" into the function names, but maybe in the future we'll be
>>>> able to use a single flag per folio (I know, it's complicated ...).
>>>
>>> A single flag per folio should be the future direction, but I haven't
>>> done the research so can't tell how complicated it will be.
>>
>> There were some discussions on it, and it's tricky. So maybe we should
>> really just have folio_test_hugetlb_mte_tagged() etc. for now.
>
> Either w/ "hugetlb" or w/o "hugetlb" is fine; I don't have a strong
> opinion on the naming.
Let's go with "hugetlb" for now; using it for other large folios is a bit
out of sight ... at least right now :)
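
To summarize, a minimal sketch of what the helpers discussed above could
look like, with the corrections applied (folio->flags, page_folio(page))
and the "hugetlb" naming; the set variant, the static inline form and the
header placement are assumptions of this sketch, and the real arm64
page_mte_tagged() carries extra details (e.g. ordering) omitted here:

static inline bool folio_test_hugetlb_mte_tagged(struct folio *folio)
{
        /* Only used on hugetlb folios for now. */
        VM_WARN_ON_ONCE(!folio_test_hugetlb(folio));

        return test_bit(PG_mte_tagged, &folio->flags);
}

static inline void folio_set_hugetlb_mte_tagged(struct folio *folio)
{
        VM_WARN_ON_ONCE(!folio_test_hugetlb(folio));

        set_bit(PG_mte_tagged, &folio->flags);
}

static inline bool page_mte_tagged(struct page *page)
{
        /* Catch misuse: hugetlb folios must go through the folio helpers. */
        VM_WARN_ON_ONCE(folio_test_hugetlb(page_folio(page)));

        return test_bit(PG_mte_tagged, &page->flags);
}
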
--
Cheers,
David / dhildenb