Message-ID: <Y8nNHKW0sTnrq8hw@x1n>
Date: Thu, 19 Jan 2023 18:07:08 -0500
From: Peter Xu <peterx@...hat.com>
To: James Houghton <jthoughton@...gle.com>
Cc: Mike Kravetz <mike.kravetz@...cle.com>,
David Hildenbrand <david@...hat.com>,
Muchun Song <songmuchun@...edance.com>,
David Rientjes <rientjes@...gle.com>,
Axel Rasmussen <axelrasmussen@...gle.com>,
Mina Almasry <almasrymina@...gle.com>,
Zach O'Keefe <zokeefe@...gle.com>,
Manish Mishra <manish.mishra@...anix.com>,
Naoya Horiguchi <naoya.horiguchi@....com>,
"Dr . David Alan Gilbert" <dgilbert@...hat.com>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
Vlastimil Babka <vbabka@...e.cz>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
Miaohe Lin <linmiaohe@...wei.com>,
Yang Shi <shy828301@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 21/46] hugetlb: use struct hugetlb_pte for
walk_hugetlb_range
On Thu, Jan 19, 2023 at 02:35:12PM -0800, James Houghton wrote:
> On Thu, Jan 19, 2023 at 2:23 PM Peter Xu <peterx@...hat.com> wrote:
> >
> > On Thu, Jan 19, 2023 at 02:00:32PM -0800, Mike Kravetz wrote:
> > > I do not know much about the (primary) live migration use case. My
> > > guess is that page table lock contention may be an issue? In this use
> > > case, HGM is only enabled for the duration of the live migration
> > > operation, then a MADV_COLLAPSE is performed. If contention is likely
> > > to be an issue during this time, then yes we would need to pass around
> > > something like hugetlb_pte.
> >
> > I'm not aware of any such contention issue. IMHO the migration problem
> > is mostly about being too slow to transfer a page that large. Shrinking
> > the page size should already resolve the major problem here, IIUC.
>
> This will be problematic if you scale up VMs to be quite large.
Do you mean that for the postcopy use case one can leverage e.g. 2M
mappings (over 1G) to avoid lock contention when the VM is large? I agree
it should be more efficient than having 512 4K pages installed, but I
think it'll also make page fault resolution slower if some thread is only
looking for a 4K portion of it.
> Google upstreamed the "TDP MMU" for KVM/x86, which removed the need to
> take the MMU lock for writing in the EPT violation path. We found that
> this change is required for VMs with >200 or so vCPUs to consistently
> avoid CPU soft lockups in the guest.
After the KVM MMU rwlock conversion, concurrent page faults are allowed
even if only 4K pages are used, so it seems not directly relevant to what
we're discussing here, no?
>
> Requiring each UFFDIO_CONTINUE (in the post-copy path) to serialize on
> the same PTL would be problematic in the same way.
A pte-level pgtable lock only covers a 2M range, so I think it depends on
which address the vcpu faulted on? IIUC in the common case the faulting
threads will not fall within the same 2M range.
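To illustrate, this is roughly the USE_SPLIT_PTE_PTLOCKS case of
pte_lockptr() in include/linux/mm.h: the pte-level lock hangs off the
struct page of the pte page itself, so its scope is one pte page, i.e.
512 ptes or 2M of address space on x86_64:

	static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
	{
		/* Lock lives in the struct page of the pte page. */
		return ptlock_ptr(pmd_page(*pmd));
	}

So two vcpus faulting in different 2M ranges take different locks.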
>
> >
> > AFAIU a 4K-only solution should only reduce lock contention, because
> > locks will always be pte-level if VM_HUGETLB_HGM is set. When walking
> > and creating the intermediate pgtable entries we can use atomic ops
> > just like generic mm, so no lock is needed at all. With uncertainty
> > about the size of mappings, we'd need to take any of the multiple
> > layers of locks.
> >
>
> Other than taking the HugeTLB VMA lock for reading, walking/allocating
> page tables won't need any additional locking.
Actually, when revisiting the locks I'm getting a bit confused about
whether the vma lock is needed at all if pmd sharing is forbidden for HGM
anyway. I raised a question in the other patch about MADV_COLLAPSE; maybe
they're related questions, so we can keep the discussion there.
>
> We take the PTL to allocate the next level down, but so does generic
> mm (look at __pud_alloc, __pmd_alloc for example). Maybe I am
> misunderstanding.
Sorry, you're right; please ignore that. I don't know why I had the
impression that spinlocks are not needed in that process.
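For reference, the generic pattern in mm/memory.c is roughly (trimmed):

	int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
	{
		spinlock_t *ptl;
		pmd_t *new = pmd_alloc_one(mm, address);

		if (!new)
			return -ENOMEM;

		ptl = pud_lock(mm, pud);
		if (!pud_present(*pud)) {
			mm_inc_nr_pmds(mm);
			smp_wmb(); /* See comment in pmd_install() */
			pud_populate(mm, pud, new);
		} else {	/* Another has populated it */
			pmd_free(mm, new);
		}
		spin_unlock(ptl);
		return 0;
	}

i.e. the allocation happens outside the lock, and the pud lock is only
held to publish (or discard) the new pmd page.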
Actually I am also curious why atomics won't work: hold the mmap read
lock, then do cmpxchg(old_entry=0, new_entry) on the pgtable entries. I
think it's possible I just missed something else.
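What I have in mind is something like the below, where pud_cmpxchg() is
an imaginary helper (no such API exists today) that atomically installs
the new entry only if the old one is still pud_none():

	/* Pseudo-code only: lockless population of a pud entry. */
	pmd_t *new = pmd_alloc_one(mm, addr);

	if (!new)
		return -ENOMEM;
	if (!pud_cmpxchg(mm, pud, __pud(0), new))
		pmd_free(mm, new);	/* lost the race; reuse the winner's */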
--
Peter Xu