Message-ID: <ZBGZmpf0n+KyyJNU@kernel.org>
Date: Wed, 15 Mar 2023 12:10:34 +0200
From: Mike Rapoport <rppt@...nel.org>
To: "Matthew Wilcox (Oracle)" <willy@...radead.org>
Cc: linux-arch@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Gerald Schaefer <gerald.schaefer@...ux.ibm.com>,
Heiko Carstens <hca@...ux.ibm.com>,
Vasily Gorbik <gor@...ux.ibm.com>,
Alexander Gordeev <agordeev@...ux.ibm.com>,
Christian Borntraeger <borntraeger@...ux.ibm.com>,
Sven Schnelle <svens@...ux.ibm.com>, linux-s390@...r.kernel.org
Subject: Re: [PATCH v4 22/36] s390: Implement the new page table range API
On Wed, Mar 15, 2023 at 05:14:30AM +0000, Matthew Wilcox (Oracle) wrote:
> Add set_ptes() and update_mmu_cache_range().
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>
> Reviewed-by: Gerald Schaefer <gerald.schaefer@...ux.ibm.com>
> Cc: Heiko Carstens <hca@...ux.ibm.com>
> Cc: Vasily Gorbik <gor@...ux.ibm.com>
> Cc: Alexander Gordeev <agordeev@...ux.ibm.com>
> Cc: Christian Borntraeger <borntraeger@...ux.ibm.com>
> Cc: Sven Schnelle <svens@...ux.ibm.com>
> Cc: linux-s390@...r.kernel.org
Acked-by: Mike Rapoport (IBM) <rppt@...nel.org>
> ---
>  arch/s390/include/asm/pgtable.h | 33 ++++++++++++++++++++++++---------
>  1 file changed, 24 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
> index c1f6b46ec555..fea678c67e51 100644
> --- a/arch/s390/include/asm/pgtable.h
> +++ b/arch/s390/include/asm/pgtable.h
> @@ -50,6 +50,7 @@ void arch_report_meminfo(struct seq_file *m);
>   * tables contain all the necessary information.
>   */
>  #define update_mmu_cache(vma, address, ptep)		do { } while (0)
> +#define update_mmu_cache_range(vma, addr, ptep, nr)	do { } while (0)
>  #define update_mmu_cache_pmd(vma, address, ptep)	do { } while (0)
> 
>  /*
> @@ -1319,20 +1320,34 @@ pgprot_t pgprot_writecombine(pgprot_t prot);
>  pgprot_t pgprot_writethrough(pgprot_t prot);
> 
>  /*
> - * Certain architectures need to do special things when PTEs
> - * within a page table are directly modified. Thus, the following
> - * hook is made available.
> + * Set multiple PTEs to consecutive pages with a single call. All PTEs
> + * are within the same folio, PMD and VMA.
>   */
> -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
> -			      pte_t *ptep, pte_t entry)
> +static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
> +			    pte_t *ptep, pte_t entry, unsigned int nr)
>  {
>  	if (pte_present(entry))
>  		entry = clear_pte_bit(entry, __pgprot(_PAGE_UNUSED));
> -	if (mm_has_pgste(mm))
> -		ptep_set_pte_at(mm, addr, ptep, entry);
> -	else
> -		set_pte(ptep, entry);
> +	if (mm_has_pgste(mm)) {
> +		for (;;) {
> +			ptep_set_pte_at(mm, addr, ptep, entry);
> +			if (--nr == 0)
> +				break;
> +			ptep++;
> +			entry = __pte(pte_val(entry) + PAGE_SIZE);
> +			addr += PAGE_SIZE;
> +		}
> +	} else {
> +		for (;;) {
> +			set_pte(ptep, entry);
> +			if (--nr == 0)
> +				break;
> +			ptep++;
> +			entry = __pte(pte_val(entry) + PAGE_SIZE);
> +		}
> +	}
>  }
> +#define set_ptes set_ptes
> 
>  /*
>   * Conversion functions: convert a page and protection to a page entry,
> --
> 2.39.2
>
>
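Both loops look good to me, and it's a nice touch that the !pgste loop
does not bother advancing addr, since set_pte() does not take it.
Stepping the entry with __pte(pte_val(entry) + PAGE_SIZE) works because
the s390 PTE holds the page frame address in place, so adding PAGE_SIZE
moves it to the next page of the folio.

For readers following the series, an untested sketch of how a caller of
the new API might look; map_folio_ptes() is a made-up helper for
illustration, not something this patch adds:

	#include <linux/mm.h>		/* folio_page(), mk_pte() */
	#include <linux/pgtable.h>	/* set_ptes(), update_mmu_cache_range() */

	/* Map nr consecutive pages of folio at addr in one go. */
	static void map_folio_ptes(struct vm_area_struct *vma,
				   unsigned long addr, pte_t *ptep,
				   struct folio *folio, unsigned int nr)
	{
		pte_t entry = mk_pte(folio_page(folio, 0), vma->vm_page_prot);

		/* one call instead of nr set_pte_at() calls */
		set_ptes(vma->vm_mm, addr, ptep, entry, nr);
		/* a no-op on s390, see the first hunk above */
		update_mmu_cache_range(vma, addr, ptep, nr);
	}
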
--
Sincerely yours,
Mike.