Message-ID: <20230302143154.1886c213@thinkpad-T15>
Date: Thu, 2 Mar 2023 14:31:54 +0100
From: Gerald Schaefer <gerald.schaefer@...ux.ibm.com>
To: "Matthew Wilcox (Oracle)" <willy@...radead.org>
Cc: linux-mm@...ck.org, linux-arch@...r.kernel.org,
linux-kernel@...r.kernel.org, Heiko Carstens <hca@...ux.ibm.com>,
Vasily Gorbik <gor@...ux.ibm.com>,
Alexander Gordeev <agordeev@...ux.ibm.com>,
Christian Borntraeger <borntraeger@...ux.ibm.com>,
Sven Schnelle <svens@...ux.ibm.com>, linux-s390@...r.kernel.org
Subject: Re: [PATCH v3 21/34] s390: Implement the new page table range API
On Tue, 28 Feb 2023 21:37:24 +0000
"Matthew Wilcox (Oracle)" <willy@...radead.org> wrote:
> Add set_ptes() and update_mmu_cache_range().
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>
> Cc: Heiko Carstens <hca@...ux.ibm.com>
> Cc: Vasily Gorbik <gor@...ux.ibm.com>
> Cc: Alexander Gordeev <agordeev@...ux.ibm.com>
> Cc: Christian Borntraeger <borntraeger@...ux.ibm.com>
> Cc: Sven Schnelle <svens@...ux.ibm.com>
> Cc: linux-s390@...r.kernel.org
> ---
> arch/s390/include/asm/pgtable.h | 34 ++++++++++++++++++++++++---------
> 1 file changed, 25 insertions(+), 9 deletions(-)
>
> diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
> index 2c70b4d1263d..46bf475116f1 100644
> --- a/arch/s390/include/asm/pgtable.h
> +++ b/arch/s390/include/asm/pgtable.h
> @@ -50,6 +50,7 @@ void arch_report_meminfo(struct seq_file *m);
> * tables contain all the necessary information.
> */
> #define update_mmu_cache(vma, address, ptep) do { } while (0)
> +#define update_mmu_cache_range(vma, addr, ptep, nr) do { } while (0)
> #define update_mmu_cache_pmd(vma, address, ptep) do { } while (0)
>
> /*
> @@ -1317,21 +1318,36 @@ pgprot_t pgprot_writecombine(pgprot_t prot);
> pgprot_t pgprot_writethrough(pgprot_t prot);
>
> /*
> - * Certain architectures need to do special things when PTEs
> - * within a page table are directly modified. Thus, the following
> - * hook is made available.
> + * Set multiple PTEs to consecutive pages with a single call. All PTEs
> + * are within the same folio, PMD and VMA.
> */
> -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
> - pte_t *ptep, pte_t entry)
> +static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
> + pte_t *ptep, pte_t entry, unsigned int nr)
> {
> if (pte_present(entry))
> entry = clear_pte_bit(entry, __pgprot(_PAGE_UNUSED));
> - if (mm_has_pgste(mm))
> - ptep_set_pte_at(mm, addr, ptep, entry);
> - else
> - set_pte(ptep, entry);
> + if (mm_has_pgste(mm)) {
> + for (;;) {
> + ptep_set_pte_at(mm, addr, ptep, entry);
There might be room for additional optimization here, regarding the
preempt_disable/enable() in ptep_set_pte_at(): it could be moved out of
ptep_set_pte_at() and done only once around this loop.
We could add that later with an add-on patch, but for this series it
all looks good.
Reviewed-by: Gerald Schaefer <gerald.schaefer@...ux.ibm.com>