Message-ID: <ZtDThWS16Kv0QKR1@x1n>
Date: Thu, 29 Aug 2024 16:01:09 -0400
From: Peter Xu <peterx@...hat.com>
To: David Hildenbrand <david@...hat.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Gavin Shan <gshan@...hat.com>,
Catalin Marinas <catalin.marinas@....com>, x86@...nel.org,
Ingo Molnar <mingo@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Thomas Gleixner <tglx@...utronix.de>,
Alistair Popple <apopple@...dia.com>, kvm@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org,
Sean Christopherson <seanjc@...gle.com>,
Oscar Salvador <osalvador@...e.de>,
Jason Gunthorpe <jgg@...dia.com>, Borislav Petkov <bp@...en8.de>,
Zi Yan <ziy@...dia.com>, Axel Rasmussen <axelrasmussen@...gle.com>,
Yan Zhao <yan.y.zhao@...el.com>, Will Deacon <will@...nel.org>,
Kefeng Wang <wangkefeng.wang@...wei.com>,
Alex Williamson <alex.williamson@...hat.com>
Subject: Re: [PATCH v2 07/19] mm/fork: Accept huge pfnmap entries
On Thu, Aug 29, 2024 at 09:44:01PM +0200, David Hildenbrand wrote:
> On 29.08.24 20:26, Peter Xu wrote:
> > On Thu, Aug 29, 2024 at 05:10:42PM +0200, David Hildenbrand wrote:
> > > On 26.08.24 22:43, Peter Xu wrote:
> > > > Teach the fork code to properly copy pfnmaps for pmd/pud levels. Pud is
> > > > much easier, though the write bit needs to be preserved for writable and
> > > > shared pud mappings like PFNMAP ones; otherwise a follow-up write in either
> > > > the parent or child process will trigger a write fault.
> > > >
> > > > Do the same for pmd level.
> > > >
> > > > Signed-off-by: Peter Xu <peterx@...hat.com>
> > > > ---
> > > > mm/huge_memory.c | 29 ++++++++++++++++++++++++++---
> > > > 1 file changed, 26 insertions(+), 3 deletions(-)
> > > >
> > > > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > > > index e2c314f631f3..15418ffdd377 100644
> > > > --- a/mm/huge_memory.c
> > > > +++ b/mm/huge_memory.c
> > > > @@ -1559,6 +1559,24 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > > > pgtable_t pgtable = NULL;
> > > > int ret = -ENOMEM;
> > > > + pmd = pmdp_get_lockless(src_pmd);
> > > > + if (unlikely(pmd_special(pmd))) {
> > >
> > > I assume I have to clean up your mess here as well?
> >
> > Can you leave a meaningful and explicit comment? I'll try to address it.
>
> Sorry Peter, but I raised all that as a reply to v1. For example, I stated
> that vm_normal_page_pmd() already *exists* and why these pmd_special() checks
> should be kept there.
We discussed the usage of pmd_page(), but I don't think it was clear that you
were suggesting it be used there. IOW, copy_huge_pmd() doesn't use
vm_normal_page_pmd() so far, and I'm not sure whether it would always be safe.
E.g. at least one thing I spot is that vm_normal_page_pmd() returns NULL for a
huge zeropage pmd, but here in fork() we need to take a reference with
mm_get_huge_zero_folio().
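
For context, here is a rough sketch of the distinction I mean. This is not the
actual patch: the helper name is made up, and locking, uffd-wp and CoW handling
are left out.

#include <linux/mm.h>
#include <linux/pgtable.h>
#include <linux/huge_mm.h>

/*
 * Illustrative sketch only: why copy_huge_pmd() cannot simply rely on
 * vm_normal_page_pmd().  Locking, uffd-wp and CoW handling are omitted.
 */
static void copy_huge_pmd_sketch(struct mm_struct *dst_mm,
				 struct mm_struct *src_mm,
				 pmd_t *dst_pmd, pmd_t *src_pmd,
				 unsigned long addr)
{
	pmd_t pmd = pmdp_get_lockless(src_pmd);

	if (unlikely(pmd_special(pmd))) {
		/*
		 * Special (e.g. PFNMAP) huge mapping: there is no struct
		 * page to duplicate, so copy the entry as-is.  For shared
		 * and writable mappings the write bit must survive,
		 * otherwise the next write in parent or child triggers a
		 * spurious write fault.
		 */
		set_pmd_at(dst_mm, addr, dst_pmd, pmd);
		return;
	}

	if (is_huge_zero_pmd(pmd)) {
		/*
		 * vm_normal_page_pmd() would also return NULL here, but
		 * fork() additionally needs to take a reference on the
		 * shared huge zero folio for the child mm.
		 */
		mm_get_huge_zero_folio(dst_mm);
		set_pmd_at(dst_mm, addr, dst_pmd, pmd);
		return;
	}

	/* Normal anon THP: folio refcount/mapcount plus wrprotect for CoW. */
}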
>
> I hear you, you're not interested in cleaning that up. So at this point it's
> easier for me to clean it up myself.
It might indeed be easier if you provide a patch that you think is best.
Then I'll leave that to you, and I'll soon send the standalone fixup patch to
the list to be squashed.
--
Peter Xu