Message-ID: <20231003-555f517e872d4d53ff8d2b02@orel>
Date: Tue, 3 Oct 2023 09:42:00 +0200
From: Andrew Jones <ajones@...tanamicro.com>
To: Alexandre Ghiti <alexghiti@...osinc.com>
Cc: Paul Walmsley <paul.walmsley@...ive.com>,
Palmer Dabbelt <palmer@...belt.com>,
Albert Ou <aou@...s.berkeley.edu>,
Qinglin Pan <panqinglin2020@...as.ac.cn>,
Ryan Roberts <ryan.roberts@....com>,
linux-riscv@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH -fixes 2/2] riscv: Fix set_huge_pte_at() for NAPOT mappings when a swap entry is set
On Thu, Sep 28, 2023 at 05:18:46PM +0200, Alexandre Ghiti wrote:
> We used to determine the number of page table entries to set for a NAPOT
> hugepage from the pte value, which fails when the pte to set is a swap
> entry.
>
> So take advantage of a recent fix for arm64 reported in [1] which
> introduces the size of the mapping as an argument of set_huge_pte_at(): we
> can then use this size to compute the number of page table entries to set
> for a NAPOT region.
>
> Fixes: 82a1a1f3bfb6 ("riscv: mm: support Svnapot in hugetlb page")
> Reported-by: Ryan Roberts <ryan.roberts@....com>
> Closes: https://lore.kernel.org/linux-arm-kernel/20230922115804.2043771-1-ryan.roberts@arm.com/ [1]
> Signed-off-by: Alexandre Ghiti <alexghiti@...osinc.com>
> ---
> arch/riscv/mm/hugetlbpage.c | 19 +++++++++++++------
> 1 file changed, 13 insertions(+), 6 deletions(-)
>
> diff --git a/arch/riscv/mm/hugetlbpage.c b/arch/riscv/mm/hugetlbpage.c
> index e4a2ace92dbe..b52f0210481f 100644
> --- a/arch/riscv/mm/hugetlbpage.c
> +++ b/arch/riscv/mm/hugetlbpage.c
> @@ -183,15 +183,22 @@ void set_huge_pte_at(struct mm_struct *mm,
> pte_t pte,
> unsigned long sz)
> {
> + unsigned long hugepage_shift;
> int i, pte_num;
>
> - if (!pte_napot(pte)) {
> - set_pte_at(mm, addr, ptep, pte);
> - return;
> - }
> + if (sz >= PGDIR_SIZE)
> + hugepage_shift = PGDIR_SHIFT;
> + else if (sz >= P4D_SIZE)
> + hugepage_shift = P4D_SHIFT;
> + else if (sz >= PUD_SIZE)
> + hugepage_shift = PUD_SHIFT;
> + else if (sz >= PMD_SIZE)
> + hugepage_shift = PMD_SHIFT;
> + else
> + hugepage_shift = PAGE_SHIFT;
>
> - pte_num = napot_pte_num(napot_cont_order(pte));
> - for (i = 0; i < pte_num; i++, ptep++, addr += PAGE_SIZE)
> + pte_num = sz >> hugepage_shift;
> + for (i = 0; i < pte_num; i++, ptep++, addr += (1 << hugepage_shift))
> set_pte_at(mm, addr, ptep, pte);
> }
>
So a 64K NAPOT mapping, for example, falls into the PAGE_SHIFT arm, and
then we calculate 16 for pte_num. Looks good to me.
Reviewed-by: Andrew Jones <ajones@...tanamicro.com>
Thanks,
drew