Message-ID: <4a15dbaa-1614-ce-ce1f-f73959cef895@google.com>
Date: Wed, 17 May 2023 14:50:28 -0700 (PDT)
From: Hugh Dickins <hughd@...gle.com>
To: Claudio Imbrenda <imbrenda@...ux.ibm.com>
cc: Hugh Dickins <hughd@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Kravetz <mike.kravetz@...cle.com>,
Mike Rapoport <rppt@...nel.org>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Matthew Wilcox <willy@...radead.org>,
David Hildenbrand <david@...hat.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Qi Zheng <zhengqi.arch@...edance.com>,
Russell King <linux@...linux.org.uk>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Geert Uytterhoeven <geert@...ux-m68k.org>,
Greg Ungerer <gerg@...ux-m68k.org>,
Michal Simek <monstr@...str.eu>,
Thomas Bogendoerfer <tsbogend@...ha.franken.de>,
Helge Deller <deller@....de>,
John David Anglin <dave.anglin@...l.net>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>,
Michael Ellerman <mpe@...erman.id.au>,
Alexandre Ghiti <alexghiti@...osinc.com>,
Palmer Dabbelt <palmer@...belt.com>,
Heiko Carstens <hca@...ux.ibm.com>,
Christian Borntraeger <borntraeger@...ux.ibm.com>,
John Paul Adrian Glaubitz <glaubitz@...sik.fu-berlin.de>,
"David S. Miller" <davem@...emloft.net>,
Chris Zankel <chris@...kel.net>,
Max Filippov <jcmvbkbc@...il.com>, x86@...nel.org,
linux-arm-kernel@...ts.infradead.org, linux-ia64@...r.kernel.org,
linux-m68k@...ts.linux-m68k.org, linux-mips@...r.kernel.org,
linux-parisc@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
linux-riscv@...ts.infradead.org, linux-s390@...r.kernel.org,
linux-sh@...r.kernel.org, sparclinux@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 15/23] s390: allow pte_offset_map_lock() to fail
On Wed, 17 May 2023, Claudio Imbrenda wrote:
> On Tue, 9 May 2023 22:01:16 -0700 (PDT)
> Hugh Dickins <hughd@...gle.com> wrote:
>
> > In rare transient cases, not yet made possible, pte_offset_map() and
> > pte_offset_map_lock() may not find a page table: handle appropriately.
> >
> > Signed-off-by: Hugh Dickins <hughd@...gle.com>
> > ---
> > arch/s390/kernel/uv.c | 2 ++
> > arch/s390/mm/gmap.c | 2 ++
> > arch/s390/mm/pgtable.c | 12 +++++++++---
> > 3 files changed, 13 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
> > index cb2ee06df286..3c62d1b218b1 100644
> > --- a/arch/s390/kernel/uv.c
> > +++ b/arch/s390/kernel/uv.c
> > @@ -294,6 +294,8 @@ int gmap_make_secure(struct gmap *gmap, unsigned long gaddr, void *uvcb)
> >
> > rc = -ENXIO;
> > ptep = get_locked_pte(gmap->mm, uaddr, &ptelock);
> > + if (!ptep)
> > + goto out;
You may or may not be asking about this instance too. When I looked at
how the code lower down handles -ENXIO (promoting it to -EFAULT if an
access fails, or to -EAGAIN to ask for a retry), this looked just right
(whereas using -EAGAIN here would be wrong: that expects a "page" which
has not been initialized at this point).
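For reference, the -ENXIO handling "lower down" is the tail of gmap_make_secure(),
roughly this shape (abbreviated, and from memory, so please check against the
actual source):

out:
	mmap_read_unlock(gmap->mm);

	if (rc == -EAGAIN) {
		/* this path relies on "page" having been set up above */
		wait_on_page_writeback(page);
	} else if (rc == -ENXIO) {
		/* no usable pte: try to fault the page in, else give up */
		if (gmap_fault(gmap, gaddr, FAULT_FLAG_WRITE))
			rc = -EFAULT;
		else
			rc = -EAGAIN;
	}
	return rc;

so leaving rc as -ENXIO when get_locked_pte() fails fits right in.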
> > if (pte_present(*ptep) && !(pte_val(*ptep) & _PAGE_INVALID) && pte_write(*ptep)) {
> > page = pte_page(*ptep);
> > rc = -EAGAIN;
> > diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
> > index dc90d1eb0d55..d198fc9475a2 100644
> > --- a/arch/s390/mm/gmap.c
> > +++ b/arch/s390/mm/gmap.c
> > @@ -2549,6 +2549,8 @@ static int __zap_zero_pages(pmd_t *pmd, unsigned long start,
> > spinlock_t *ptl;
> >
> > ptep = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
> > + if (!ptep)
> > + break;
>
> so if pte_offset_map_lock fails, we abort and skip both the failed
> entry and the rest of the entries?
Yes.
>
> can pte_offset_map_lock be retried immediately if it fails? (consider
> that we currently don't allow THP with KVM guests)
>
> Would something like this:
>
> do {
> ptep = pte_offset_map_lock(...);
> mb(); /* maybe? */
> } while (!ptep);
>
> make sense?
No. But you're absolutely right to be asking: thank you for looking
into it so carefully - and I realize that it's hard at this stage to
judge what's appropriate, when I've not yet even posted the endpoint
of these changes, the patches which make it possible not to find a
page table here. And I'm intentionally keeping that vague, because
although I shall only introduce a THP case, I do expect it to be built
upon later in reclaiming empty page tables: it would be nice not to
have to change the arch code again when extending further.
My "rare transient cases" phrase may be somewhat misleading: one thing
that's wrong with your tight pte_offset_map_lock() loop above is that
the pmd entry pointing to page table may have been suddenly replaced by
a pmd_none() entry; and there's nothing in your loop above to break out
if that is so.
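With these changes, pte_offset_map_lock() itself rereads the pmd entry and
returns NULL when it no longer points to a page table; so once the pmd has
become pmd_none(), a bare retry loop just spins forever.  Purely to illustrate
(not a suggestion), terminating would require at least rechecking the pmd:

	do {
		ptep = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
	} while (!ptep && !pmd_none(*pmd));

and more besides, if the pmd could have been replaced by something other
than pmd_none().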
But if a page table is suddenly removed, that would be because it was
either empty, or replaced by a THP entry, or easily reconstructable on
demand (by that, I probably mean it was only mapping shared file pages,
which can just be refaulted if needed again).
The case you're wary of is the page table being removed briefly, then put
back shortly after, and still containing zero pages further down.
That's not something mm does now, nor at the end of my several series,
nor that I imagine us wanting to do in future: but I am struggling to
find a killer argument to persuade you that it could never be done -
most pages in a page table do need rmap tracking, which will BUG if
it's broken, but that argument happens not to apply to the zero page.
(Hmm, there could be somewhere, where we would find it convenient to
remove a page table with intent to do ...something, then validation
of that isolated page table fails, so we just put it back again.)
Is it good enough for me to promise you that we won't do that?
There are several ways in which we could change __zap_zero_pages(),
but I don't see them as actually dealing with the concern at hand.
One change I've tended to make at the mm end, but did not dare
to impose here: it would seem more sensible to do a single
pte_offset_map_lock() outside the loop, return if that fails,
increment ptep inside the loop, and pte_unmap_unlock() after the loop.
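Concretely, that restructuring would look something like this (an untested
sketch, keeping the existing names and helpers):

static int __zap_zero_pages(pmd_t *pmd, unsigned long start,
			    unsigned long end, struct mm_walk *walk)
{
	pte_t *start_pte, *ptep;
	spinlock_t *ptl;
	unsigned long addr;

	/* map and lock the page table once for the whole range */
	start_pte = pte_offset_map_lock(walk->mm, pmd, start, &ptl);
	if (!start_pte)
		return 0;
	for (addr = start, ptep = start_pte; addr != end;
	     addr += PAGE_SIZE, ptep++) {
		if (is_zero_pfn(pte_pfn(*ptep)))
			ptep_xchg_direct(walk->mm, addr, ptep,
					 __pte(_PAGE_INVALID));
	}
	pte_unmap_unlock(start_pte, ptl);
	return 0;
}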
But perhaps you have preemption reasons for not wanting that; and
although it would eliminate the oddity of half-processing a page
table, it would not really resolve the problem at hand: what if
this page table got removed just before __zap_zero_pages() tries
to take the lock, then got put back just after?
Another change: I see __zap_zero_pages() is driven by walk_page_range(),
and over at the mm end I'm usually setting walk->action to ACTION_AGAIN
in these failure cases (sketched below); but that seemed an unnecessary
piece of magic here, and I cannot see how it could actually help.  Your
"retry the whole walk_page_range()" suggestion below would be a heavier
equivalent of that: but neither way gives confidence, if a page table
could actually be removed and then reinserted without mmap_write_lock().
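For reference, the shape at the mm end, in a pte-walking callback, is roughly:

	ptep = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
	if (!ptep) {
		/* page table vanished under us: have pagewalk retry this pmd */
		walk->action = ACTION_AGAIN;
		return 0;
	}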
I think I want to keep this s390 __zap_zero_pages() issue in mind: it is
important, and thank you for raising it; but I don't see any change to the
patch as actually needed.
Hugh
>
>
> otherwise maybe it's better to return an error and retry the whole
> walk_page_range() in s390_enable_sie() ? it's a slow path anyway.
>
> > if (is_zero_pfn(pte_pfn(*ptep)))
> > ptep_xchg_direct(walk->mm, addr, ptep, __pte(_PAGE_INVALID));
> > pte_unmap_unlock(ptep, ptl);
>
> [...]