Message-ID: <f8030dfc5086e4e4e3709d6fcdab1e38f01fc38d.camel@infradead.org>
Date: Thu, 06 Feb 2025 12:07:58 +0100
From: Amit Shah <amit@...radead.org>
To: Ackerley Tng <ackerleytng@...gle.com>
Cc: tabba@...gle.com, quic_eberman@...cinc.com, roypat@...zon.co.uk,
	jgg@...dia.com, peterx@...hat.com, david@...hat.com, rientjes@...gle.com,
	fvdl@...gle.com, jthoughton@...gle.com, seanjc@...gle.com,
	pbonzini@...hat.com, zhiquan1.li@...el.com, fan.du@...el.com,
	jun.miao@...el.com, isaku.yamahata@...el.com, muchun.song@...ux.dev,
	mike.kravetz@...cle.com, erdemaktas@...gle.com, vannapurve@...gle.com,
	qperret@...gle.com, jhubbard@...dia.com, willy@...radead.org,
	shuah@...nel.org, brauner@...nel.org, bfoster@...hat.com,
	kent.overstreet@...ux.dev, pvorel@...e.cz, rppt@...nel.org,
	richard.weiyang@...il.com, anup@...infault.org, haibo1.xu@...el.com,
	ajones@...tanamicro.com, vkuznets@...hat.com,
	maciej.wieczor-retman@...el.com, pgonda@...gle.com,
	oliver.upton@...ux.dev, linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	kvm@...r.kernel.org, linux-kselftest@...r.kernel.org,
	linux-fsdevel@...ck.org
Subject: Re: [RFC PATCH 00/39] 1G page support for guest_memfd

On Mon, 2025-02-03 at 08:35 +0000, Ackerley Tng wrote:
> Amit Shah <amit@...radead.org> writes:
> 
> > Hey Ackerley,
> 
> Hi Amit,
> 
> > On Tue, 2024-09-10 at 23:43 +0000, Ackerley Tng wrote:
> > > Hello,
> > > 
> > > This patchset is our exploration of how to support 1G pages in
> > > guest_memfd, and how the pages will be used in Confidential VMs.
> > 
> > We've discussed this patchset at LPC and in the guest-memfd calls.
> > Can you please summarise the discussions here as a follow-up, so we
> > can also continue discussing on-list, and not repeat things that are
> > already discussed?
> 
> Thanks for this question! Since LPC, Vishal and I have been tied up
> with some Google internal work, which slowed down progress on 1G page
> support for guest_memfd. We will have progress this quarter and the
> next few quarters on 1G page support for guest_memfd.
> 
> The related updates are
> 
> 1. No objections on using hugetlb as the source of 1G pages.
> 
> 2. Prerequisite hugetlb changes.
> 
> + I've separated some of the prerequisite hugetlb changes into another
>   patch series, hoping to have them merged ahead of and separately from
>   this patchset [1].
> + Peter Xu contributed a better patchset, including a bugfix [2].
> + I have an alternative [3].
> + The next revision of this series (1G page support for guest_memfd)
>   will be based on alternative [3]. I think there should be no issues
>   there.
> + I believe Peter is also waiting on the next revision before we make
>   further progress/decide on [2] or [3].
> 
> 3. No objections for allowing mmap()-ing of guest_memfd physical memory
>    when memory is marked shared, to avoid double-allocation.
> 
> 4. No objections for splitting pages when marked shared.
> 
> 5. folio_put() callback for guest_memfd folio cleanup/merging.
> 
> + In Fuad's series [4], Fuad used the callback to reset the folio's
>   mappability status.
> + The catch is that the callback is only invoked when
>   folio->page_type == PGTY_guest_memfd, and folio->page_type is a union
>   with folio's mapcount, so any folio with a non-zero mapcount cannot
>   have a valid page_type.
> + I was concerned that we might not get a callback, and hence
>   unintentionally skip merging pages and not correctly restore hugetlb
>   pages.
> + This was discussed at the last guest_memfd upstream call (2025-01-23
>   07:58 PST), and the conclusion is that using folio->page_type works,
>   because
>     + We only merge folios in two cases: (1) when converting to private,
>       (2) when truncating folios (removing from filemap).
>     + When converting to private, in (1), we can forcibly unmap all the
>       converted pages or check if the mapcount is 0, and once mapcount
>       is 0 we can install the callback by setting
>       folio->page_type = PGTY_guest_memfd.
>     + When truncating, we will be unmapping the folios anyway, so
>       mapcount is also 0 and we can install the callback.
> 
> Hope that covers the points that you're referring to. If there are other
> parts that you'd like to know the status on, please let me know which
> aspects those are!

Thank you for the nice summary!
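
To make sure I'm reading point 5 correctly, here is a rough C sketch of
how I understand the conversion-to-private path would install the
callback.  This is purely illustrative -- kvm_gmem_unmap_folio() and
kvm_gmem_install_put_callback() are made-up names, and real code would
presumably go through the page-type helpers rather than a raw
assignment:

static void kvm_gmem_install_put_callback(struct folio *folio)
{
	/*
	 * page_type shares storage with the mapcount, so this is only
	 * valid once no mappings of the folio remain.
	 */
	WARN_ON_ONCE(folio_mapped(folio));
	folio->page_type = PGTY_guest_memfd;	/* folio_put() will now call back */
}

static void kvm_gmem_convert_to_private(struct folio *folio)
{
	/*
	 * Case (1), conversion to private: forcibly unmap (or wait for the
	 * mapcount to drop to 0), then the field is free for the page type.
	 */
	kvm_gmem_unmap_folio(folio);		/* hypothetical helper */
	kvm_gmem_install_put_callback(folio);
}

The truncation path, case (2), would do the same install, since the
folios are unmapped there anyway.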

> > Also - as mentioned in those meetings, we at AMD are interested in this
> > series along with SEV-SNP support - and I'm also interested in figuring
> > out how we collaborate on the evolution of this series.
> 
> Thanks for all your help and comments during the guest_memfd upstream
> calls, and thanks for the help from AMD.
> 
> Extending Fuad's mmap() support with 1G page support introduces more
> states, which makes things more complicated (at least for me).
> 
> I'm modeling the states in Python so I can iterate more quickly. I also
> have usage flows (e.g. allocate, guest_use, host_use,
> transient_folio_get, close, transient_folio_put) as test cases.
> 
> I'm almost done with the model, and my next steps are to write up a
> state machine (like Fuad's [5]) and share that.
> 
> I'd be happy to share the Python model too, but I have to work through
> some internal open-sourcing processes first, so if you think this will
> be useful, let me know!

No problem.  Yes, I'm interested in this - it'll be helpful!

The other thing of note is that while we have the kernel patches, a
userspace to drive and exercise them is currently missing.
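
For reference, the minimal flow I have in mind looks roughly like the
sketch below.  It only uses the guest_memfd uAPI that is already
upstream (KVM_CREATE_GUEST_MEMFD plus KVM_SET_USER_MEMORY_REGION2);
error handling is omitted, and whatever hugetlb/1G flag this series
ends up defining would go into gmem.flags:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR);
	int vm = ioctl(kvm, KVM_CREATE_VM, 0);

	/* Back 1G of guest memory with a guest_memfd. */
	struct kvm_create_guest_memfd gmem = {
		.size = 1UL << 30,
		.flags = 0,	/* + the 1G/hugetlb flag from this series */
	};
	int gmem_fd = ioctl(vm, KVM_CREATE_GUEST_MEMFD, &gmem);

	/* Bind it to a memslot as guest_memfd-backed memory. */
	struct kvm_userspace_memory_region2 region = {
		.slot = 0,
		.flags = KVM_MEM_GUEST_MEMFD,
		.guest_phys_addr = 0,
		.memory_size = 1UL << 30,
		.guest_memfd = gmem_fd,
		.guest_memfd_offset = 0,
	};
	ioctl(vm, KVM_SET_USER_MEMORY_REGION2, &region);

	/*
	 * From here: create vCPUs, flip ranges between shared and private,
	 * and mmap() the shared parts once that support lands.
	 */
	return 0;
}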

> Then, I'll code it all up in a new revision of this series (target:
> March 2025), which will be accompanied by source code on GitHub.
> 
> I'm happy to collaborate more closely; let me know if you have ideas for
> collaboration!

Thank you.  I think the bigger problem we currently have is allocation
of hugepages -- which is also blocking a lot of the follow-on work.
Vishal briefly mentioned isolating pages from Linux entirely last time
- that's also what I'm interested in, to figure out whether we can
completely bypass the allocation problem by not allocating struct pages
for non-host-use pages at all.  The guest_memfs/KHO/kexec/live-update
patches also take this approach on AWS (for how their VMs are
launched).  If we work with those patches together, allocation of 1G
hugepages is simplified.  I'd like to discuss these themes further to
see if this is an approach that helps as well.


		Amit
