Message-ID: <20230525092351.7671f2a4@p-imbrenda>
Date: Thu, 25 May 2023 09:23:51 +0200
From: Claudio Imbrenda <imbrenda@...ux.ibm.com>
To: Hugh Dickins <hughd@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Mike Kravetz <mike.kravetz@...cle.com>,
Mike Rapoport <rppt@...nel.org>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Matthew Wilcox <willy@...radead.org>,
David Hildenbrand <david@...hat.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Qi Zheng <zhengqi.arch@...edance.com>,
Russell King <linux@...linux.org.uk>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Geert Uytterhoeven <geert@...ux-m68k.org>,
Greg Ungerer <gerg@...ux-m68k.org>,
Michal Simek <monstr@...str.eu>,
Thomas Bogendoerfer <tsbogend@...ha.franken.de>,
Helge Deller <deller@....de>,
John David Anglin <dave.anglin@...l.net>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>,
Michael Ellerman <mpe@...erman.id.au>,
Alexandre Ghiti <alexghiti@...osinc.com>,
Palmer Dabbelt <palmer@...belt.com>,
Heiko Carstens <hca@...ux.ibm.com>,
Christian Borntraeger <borntraeger@...ux.ibm.com>,
John Paul Adrian Glaubitz <glaubitz@...sik.fu-berlin.de>,
"David S. Miller" <davem@...emloft.net>,
Chris Zankel <chris@...kel.net>,
Max Filippov <jcmvbkbc@...il.com>, x86@...nel.org,
linux-arm-kernel@...ts.infradead.org, linux-ia64@...r.kernel.org,
linux-m68k@...ts.linux-m68k.org, linux-mips@...r.kernel.org,
linux-parisc@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
linux-riscv@...ts.infradead.org, linux-s390@...r.kernel.org,
linux-sh@...r.kernel.org, sparclinux@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 15/23] s390: allow pte_offset_map_lock() to fail
On Tue, 23 May 2023 18:49:14 -0700 (PDT)
Hugh Dickins <hughd@...gle.com> wrote:
> On Tue, 23 May 2023, Claudio Imbrenda wrote:
> >
> > So if I understand the above correctly, pte_offset_map_lock() will only
> > fail if the whole page table has disappeared, and in that case it will
> > never reappear with zero pages, so we can safely skip it (in that case
> > just break). If we did a continue instead of a break, we would most
> > likely just fail again anyway.
>
> Yes, that's the most likely; and you hold mmap_write_lock() there,
> and VM_NOHUGEPAGE on all vmas, so I think it's the only foreseeable
> possibility.
>
> >
> > In that case I would still like a small change in your patch: please
> > write a short (2~3 lines max) comment explaining why it's OK to do
> > things that way.
>
> Sure.
>
> But I now see that I've disobeyed you, and gone to 4 lines (but in the
> comment above the function, so as not to distract from the code itself):
> does this wording look good to you? I needed to research how they were
> stopped from coming in afterwards, so I wanted to put something greppable
> in there.
>
> And, unless I'm misunderstanding, that "after THP was enabled" was
> always supposed to say "after THP was disabled" (because splitting a
> huge zero page pmd inserts a page table full of little zero ptes).
Indeed, thanks for noticing and fixing it.
>
> Or would you prefer the comment in the commit message instead,
> or down just above the pte_offset_map_lock() line?
>
> It would be much better if I could find one place at the mm end, to
> enforce its end of the contract; but I cannot think how to do that.
>
> Hugh
>
> --- a/arch/s390/mm/gmap.c
> +++ b/arch/s390/mm/gmap.c
> @@ -2537,7 +2537,12 @@ static inline void thp_split_mm(struct mm_struct *mm)
> * Remove all empty zero pages from the mapping for lazy refaulting
> * - This must be called after mm->context.has_pgste is set, to avoid
> * future creation of zero pages
> - * - This must be called after THP was enabled
> + * - This must be called after THP was disabled.
> + *
> + * mm contracts with s390, that even if mm were to remove a page table,
> + * racing with the loop below and so causing pte_offset_map_lock() to fail,
> + * it will never insert a page table containing empty zero pages once
> + * mm_forbids_zeropage(mm) i.e. mm->context.has_pgste is set.
> */
> static int __zap_zero_pages(pmd_t *pmd, unsigned long start,
> unsigned long end, struct mm_walk *walk)
Looks good, thanks.
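
For the archive, this is roughly how I read the loop after this series
(just a sketch from memory, not the hunk itself; the zero-page handling
inside the loop is paraphrased, the !ptep check is the point):

static int __zap_zero_pages(pmd_t *pmd, unsigned long start,
			    unsigned long end, struct mm_walk *walk)
{
	unsigned long addr;

	for (addr = start; addr != end; addr += PAGE_SIZE) {
		pte_t *ptep;
		spinlock_t *ptl;

		ptep = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
		/*
		 * The page table went away under us; per the comment above
		 * the function, zero pages cannot reappear in a replacement
		 * once has_pgste is set, so stop rather than retry.
		 */
		if (!ptep)
			break;
		if (is_zero_pfn(pte_pfn(*ptep)))
			ptep_xchg_direct(walk->mm, addr, ptep,
					 __pte(_PAGE_INVALID));
		pte_unmap_unlock(ptep, ptl);
	}
	return 0;
}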