Message-ID: <fdd2be0a-cae9-4508-ba20-eb04c9a1e7f9@suse.cz>
Date: Mon, 21 Oct 2024 15:27:55 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Suren Baghdasaryan <surenb@...gle.com>,
"Liam R . Howlett" <Liam.Howlett@...cle.com>,
Matthew Wilcox <willy@...radead.org>, "Paul E . McKenney"
<paulmck@...nel.org>, Jann Horn <jannh@...gle.com>,
David Hildenbrand <david@...hat.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Muchun Song <muchun.song@...ux.dev>,
Richard Henderson <richard.henderson@...aro.org>,
Ivan Kokshaysky <ink@...assic.park.msu.ru>, Matt Turner
<mattst88@...il.com>, Thomas Bogendoerfer <tsbogend@...ha.franken.de>,
"James E . J . Bottomley" <James.Bottomley@...senPartnership.com>,
Helge Deller <deller@....de>, Chris Zankel <chris@...kel.net>,
Max Filippov <jcmvbkbc@...il.com>, Arnd Bergmann <arnd@...db.de>,
linux-alpha@...r.kernel.org, linux-mips@...r.kernel.org,
linux-parisc@...r.kernel.org, linux-arch@...r.kernel.org,
Shuah Khan <shuah@...nel.org>, Christian Brauner <brauner@...nel.org>,
linux-kselftest@...r.kernel.org, Sidhartha Kumar
<sidhartha.kumar@...cle.com>, Jeff Xu <jeffxu@...omium.org>,
Christoph Hellwig <hch@...radead.org>, linux-api@...r.kernel.org,
John Hubbard <jhubbard@...dia.com>
Subject: Re: [PATCH v2 1/5] mm: pagewalk: add the ability to install PTEs
On 10/20/24 18:20, Lorenzo Stoakes wrote:
> The existing generic pagewalk logic permits the walking of page tables,
> invoking callbacks at individual page table levels via user-provided
> mm_walk_ops callbacks.
>
> This is useful for traversing existing page table entries, but precludes
> the ability to establish new ones.
>
> Existing mechanisms for performing a walk which also install page table
> entries where necessary are heavily duplicated throughout the kernel, each
> with semantic differences from one another and largely unavailable for use
> elsewhere.
>
> Rather than add yet another implementation, we extend the generic pagewalk
> logic to enable the installation of page table entries by adding a new
> install_pte() callback in mm_walk_ops. If this is specified, then upon
> encountering a missing page table entry, we allocate and install a new one
> and continue the traversal.
>
> If a THP huge page is encountered, we make use of existing logic to split
> it. Then once we reach the PTE level, we invoke the install_pte() callback
> which provides a PTE entry to install. We do not support hugetlb at this
> stage.
>
> If this function returns an error, or an allocation fails during the
> operation, we abort the operation altogether. It is up to the caller to
> deal appropriately with partially populated page table ranges.
>
> If install_pte() is defined, the semantics of pte_entry() change - this
> callback is then only invoked if the entry already exists. This is a useful
> property, as it allows a caller to handle existing PTEs while installing
> new ones where necessary in the specified range.
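Side note to illustrate the new semantics: a caller would wire this up
roughly as below. The handler names and the zero-page placeholder are mine,
just for illustration; only the two callback signatures come from the patch.

	static int demo_install_pte(unsigned long addr, unsigned long next,
				    pte_t *ptep, struct mm_walk *walk)
	{
		/*
		 * Called only for pte_none() slots: hand back the PTE value
		 * to install. As a placeholder, point at the shared zero
		 * page, read-only; a real handler would apply its own policy.
		 */
		*ptep = pte_mkspecial(pte_wrprotect(pfn_pte(my_zero_pfn(addr),
						walk->vma->vm_page_prot)));
		return 0;
	}

	static int demo_pte_entry(pte_t *pte, unsigned long addr,
				  unsigned long next, struct mm_walk *walk)
	{
		/* With install_pte set, this only sees already-present PTEs. */
		return 0;
	}

	static const struct mm_walk_ops demo_ops = {
		.pte_entry	= demo_pte_entry,
		.install_pte	= demo_install_pte,
	};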
>
> If install_pte() is not defined, this patch makes no functional
> difference, so all existing logic will work precisely as it did before.
>
> As we only permit the installation of PTEs where a mapping does not already
> exist, there is no need for TLB management; however, we do invoke
> update_mmu_cache() for architectures which require manual maintenance of
> mappings for other CPUs.
>
> We explicitly do not allow the existing page walk API to expose this
> feature as it is dangerous and intended for internal mm use only. Therefore
> we provide a new walk_page_range_mm() function exposed only to
> mm/internal.h.
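The internal declaration presumably mirrors walk_page_range(), i.e.
something like this in mm/internal.h (signature assumed, not copied from
the patch):

	int walk_page_range_mm(struct mm_struct *mm, unsigned long start,
			       unsigned long end,
			       const struct mm_walk_ops *ops, void *private);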
>
> Reviewed-by: Jann Horn <jannh@...gle.com>
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
<snip>
> /*
> * We want to know the real level where an entry is located ignoring any
> * folding of levels which may be happening. For example if p4d is folded then
> @@ -29,9 +34,23 @@ static int walk_pte_range_inner(pte_t *pte, unsigned long addr,
> int err = 0;
>
> for (;;) {
> - err = ops->pte_entry(pte, addr, addr + PAGE_SIZE, walk);
> - if (err)
> - break;
> + if (ops->install_pte && pte_none(ptep_get(pte))) {
> + pte_t new_pte;
> +
> + err = ops->install_pte(addr, addr + PAGE_SIZE, &new_pte,
> + walk);
> + if (err)
> + break;
> +
> + set_pte_at(walk->mm, addr, pte, new_pte);
While the guard pages install PTEs unconditionally, some install_pte
handler implementation might sometimes want to skip an entry. Should we
define an error code that means "skipped", so we just continue instead of
doing the set_pte_at()? Or leave that until such a handler appears.
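Roughly, untested, with -ENOENT standing in for whatever sentinel we'd pick:

	err = ops->install_pte(addr, addr + PAGE_SIZE, &new_pte, walk);
	if (err == -ENOENT) {	/* stand-in "skip this pte" sentinel */
		err = 0;	/* leave the hole, keep walking */
	} else if (err) {
		break;
	} else {
		set_pte_at(walk->mm, addr, pte, new_pte);
		if (!WARN_ON_ONCE(walk->no_vma))
			update_mmu_cache(walk->vma, addr, pte);
	}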
> + /* Non-present before, so for arches that need it. */
> + if (!WARN_ON_ONCE(walk->no_vma))
> + update_mmu_cache(walk->vma, addr, pte);
> + } else {
> + err = ops->pte_entry(pte, addr, addr + PAGE_SIZE, walk);
> + if (err)
> + break;
> + }
> if (addr >= end - PAGE_SIZE)
> break;
> addr += PAGE_SIZE;
> @@ -89,11 +108,14 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
> again:
> next = pmd_addr_end(addr, end);
> if (pmd_none(*pmd)) {
> - if (ops->pte_hole)
> + if (ops->install_pte)
> + err = __pte_alloc(walk->mm, pmd);
> + else if (ops->pte_hole)
> err = ops->pte_hole(addr, next, depth, walk);
> if (err)
> break;
> - continue;
> + if (!ops->install_pte)
> + continue;
> }
>
> walk->action = ACTION_SUBTREE;
> @@ -116,7 +138,7 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
> */
> if ((!walk->vma && (pmd_leaf(*pmd) || !pmd_present(*pmd))) ||
> walk->action == ACTION_CONTINUE ||
> - !(ops->pte_entry))
> + !(ops->pte_entry || ops->install_pte))
> continue;
BTW, I find this condition hard to read even before your patch, oh well.
But if I read it correctly, does it mean we're going to split a pmd-mapped
THP whenever we have an install_pte handler? Is that really necessary,
given that splitting the pmd leaves all the resulting PTEs populated, so
the install_pte handler can't do anything with any of them anyway?
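IOW, something like this (untested):

	if ((!walk->vma && (pmd_leaf(*pmd) || !pmd_present(*pmd))) ||
	    walk->action == ACTION_CONTINUE ||
	    !(ops->pte_entry ||
	      /* a mapped THP leaf has no holes left to install into */
	      (ops->install_pte && !pmd_leaf(*pmd))))
		continue;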
> if (walk->vma)
> @@ -148,11 +170,14 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
> again:
> next = pud_addr_end(addr, end);
> if (pud_none(*pud)) {
> - if (ops->pte_hole)
> + if (ops->install_pte)
> + err = __pmd_alloc(walk->mm, pud, addr);
> + else if (ops->pte_hole)
> err = ops->pte_hole(addr, next, depth, walk);
> if (err)
> break;
> - continue;
> + if (!ops->install_pte)
> + continue;
> }
>
> walk->action = ACTION_SUBTREE;
> @@ -167,7 +192,7 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
>
> if ((!walk->vma && (pud_leaf(*pud) || !pud_present(*pud))) ||
> walk->action == ACTION_CONTINUE ||
> - !(ops->pmd_entry || ops->pte_entry))
> + !(ops->pmd_entry || ops->pte_entry || ops->install_pte))
> continue;
Ditto?