Message-ID: <YsyzGMS+MS0kZoP8@monkey>
Date: Mon, 11 Jul 2022 16:32:40 -0700
From: Mike Kravetz <mike.kravetz@...cle.com>
To: James Houghton <jthoughton@...gle.com>
Cc: Muchun Song <songmuchun@...edance.com>,
Peter Xu <peterx@...hat.com>,
David Hildenbrand <david@...hat.com>,
David Rientjes <rientjes@...gle.com>,
Axel Rasmussen <axelrasmussen@...gle.com>,
Mina Almasry <almasrymina@...gle.com>,
Jue Wang <juew@...gle.com>,
Manish Mishra <manish.mishra@...anix.com>,
"Dr . David Alan Gilbert" <dgilbert@...hat.com>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 07/26] hugetlb: add hugetlb_pte to track HugeTLB page
table entries
On 06/24/22 17:36, James Houghton wrote:
> After high-granularity mapping, page table entries for HugeTLB pages can
> be of any size/type. (For example, we can have a 1G page mapped with a
> mix of PMDs and PTEs.) This struct is to help keep track of a HugeTLB
> PTE after we have done a page table walk.
This has been rolling around in my head.
Will this first use case (live migration) actually make use of this
'mixed mapping' model where hugetlb pages could be mapped at the PUD,
PMD and PTE level all within the same vma? I only understand the use
case from a high level. But, it seems that we would only want to
migrate PTE (or PMD) sized pages and not necessarily a mix.
The only reason I ask is because the code might be much simpler if all
mappings within a vma were of the same size. Of course, the
performance/latency of converting a large mapping may be prohibitively
expensive.
Looking to the future, when supporting memory error handling/page poisoning,
it seems like we would certainly want multiple mapping sizes.
Just a thought.
--
Mike Kravetz