Message-ID: <20240507183307.3336dabc@p-imbrenda.boeblingen.de.ibm.com>
Date: Tue, 7 May 2024 18:33:07 +0200
From: Claudio Imbrenda <imbrenda@...ux.ibm.com>
To: David Hildenbrand <david@...hat.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
    linux-s390@...r.kernel.org, Heiko Carstens <hca@...ux.ibm.com>,
    Vasily Gorbik <gor@...ux.ibm.com>,
    Alexander Gordeev <agordeev@...ux.ibm.com>,
    Christian Borntraeger <borntraeger@...ux.ibm.com>,
    Sven Schnelle <svens@...ux.ibm.com>,
    Janosch Frank <frankja@...ux.ibm.com>,
    Gerald Schaefer <gerald.schaefer@...ux.ibm.com>,
    Matthew Wilcox <willy@...radead.org>, Thomas Huth <thuth@...hat.com>
Subject: Re: [PATCH v2 10/10] s390/hugetlb: convert PG_arch_1 code to work on folio->flags

On Fri, 12 Apr 2024 16:21:20 +0200
David Hildenbrand <david@...hat.com> wrote:
> Let's make it clearer that we are always working on folio flags and
> never page flags of tail pages.
Please be a little more verbose and explain what you are doing (i.e.,
converting usages of page flags to folio flags), not just why.
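For reference, a minimal sketch of the conversion pattern this patch
applies, which the description could spell out (the helper below is
illustrative only, not part of the patch): folio->flags aliases the
head page's flags, so going through the folio makes it impossible to
touch a tail page's flags by accident.

	#include <linux/mm.h>		/* struct folio */
	#include <linux/page-flags.h>	/* page_folio(), PG_arch_1 */

	/* hypothetical helper: set an arch flag via the folio (head page) */
	static void mark_arch_flag(struct page *page)
	{
		struct folio *folio = page_folio(page);	/* always the head */

		set_bit(PG_arch_1, &folio->flags);	/* head-page flags only */
	}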
>
> Signed-off-by: David Hildenbrand <david@...hat.com>
with a few extra words in the description:
Reviewed-by: Claudio Imbrenda <imbrenda@...ux.ibm.com>
> ---
> arch/s390/mm/gmap.c | 4 ++--
> arch/s390/mm/hugetlbpage.c | 8 ++++----
> 2 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
> index 0351cb139df4..9eea05cd93b7 100644
> --- a/arch/s390/mm/gmap.c
> +++ b/arch/s390/mm/gmap.c
> @@ -2648,7 +2648,7 @@ static int __s390_enable_skey_hugetlb(pte_t *pte, unsigned long addr,
> {
> pmd_t *pmd = (pmd_t *)pte;
> unsigned long start, end;
> - struct page *page = pmd_page(*pmd);
> + struct folio *folio = page_folio(pmd_page(*pmd));
>
> /*
> * The write check makes sure we do not set a key on shared
> @@ -2663,7 +2663,7 @@ static int __s390_enable_skey_hugetlb(pte_t *pte, unsigned long addr,
> start = pmd_val(*pmd) & HPAGE_MASK;
> end = start + HPAGE_SIZE - 1;
> __storage_key_init_range(start, end);
> - set_bit(PG_arch_1, &page->flags);
> + set_bit(PG_arch_1, &folio->flags);
> cond_resched();
> return 0;
> }
> diff --git a/arch/s390/mm/hugetlbpage.c b/arch/s390/mm/hugetlbpage.c
> index c2e8242bd15d..a32047315f9a 100644
> --- a/arch/s390/mm/hugetlbpage.c
> +++ b/arch/s390/mm/hugetlbpage.c
> @@ -121,7 +121,7 @@ static inline pte_t __rste_to_pte(unsigned long rste)
>
> static void clear_huge_pte_skeys(struct mm_struct *mm, unsigned long rste)
> {
> - struct page *page;
> + struct folio *folio;
> unsigned long size, paddr;
>
> if (!mm_uses_skeys(mm) ||
> @@ -129,16 +129,16 @@ static void clear_huge_pte_skeys(struct mm_struct *mm, unsigned long rste)
> return;
>
> if ((rste & _REGION_ENTRY_TYPE_MASK) == _REGION_ENTRY_TYPE_R3) {
> - page = pud_page(__pud(rste));
> + folio = page_folio(pud_page(__pud(rste)));
> size = PUD_SIZE;
> paddr = rste & PUD_MASK;
> } else {
> - page = pmd_page(__pmd(rste));
> + folio = page_folio(pmd_page(__pmd(rste)));
> size = PMD_SIZE;
> paddr = rste & PMD_MASK;
> }
>
> - if (!test_and_set_bit(PG_arch_1, &page->flags))
> + if (!test_and_set_bit(PG_arch_1, &folio->flags))
> __storage_key_init_range(paddr, paddr + size - 1);
> }
>
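The test_and_set_bit() guard above is the "initialize the storage keys
exactly once per folio" idiom, with PG_arch_1 recording that the keys
for the hugetlb range were already set up. A condensed sketch (the
helper name is hypothetical, not part of the patch):

	/* init storage keys once per hugetlb folio; PG_arch_1 = done */
	static void skey_init_once(struct folio *folio, unsigned long paddr,
				   unsigned long size)
	{
		if (!test_and_set_bit(PG_arch_1, &folio->flags))
			__storage_key_init_range(paddr, paddr + size - 1);
	}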