Message-ID: <diqz1pq85cvq.fsf@ackerleytng-ctop.c.googlers.com>
Date: Tue, 22 Jul 2025 10:55:21 -0700
From: Ackerley Tng <ackerleytng@...gle.com>
To: Yan Zhao <yan.y.zhao@...el.com>
Cc: vannapurve@...gle.com, pbonzini@...hat.com, seanjc@...gle.com, 
	linux-kernel@...r.kernel.org, kvm@...r.kernel.org, x86@...nel.org, 
	rick.p.edgecombe@...el.com, dave.hansen@...el.com, kirill.shutemov@...el.com, 
	tabba@...gle.com, quic_eberman@...cinc.com, michael.roth@....com, 
	david@...hat.com, vbabka@...e.cz, jroedel@...e.de, thomas.lendacky@....com, 
	pgonda@...gle.com, zhiquan1.li@...el.com, fan.du@...el.com, 
	jun.miao@...el.com, ira.weiny@...el.com, isaku.yamahata@...el.com, 
	xiaoyao.li@...el.com, binbin.wu@...ux.intel.com, chao.p.peng@...el.com
Subject: Re: [RFC PATCH 08/21] KVM: TDX: Increase/decrease folio ref for huge pages

Yan Zhao <yan.y.zhao@...el.com> writes:

> On Mon, Jul 21, 2025 at 10:33:14PM -0700, Ackerley Tng wrote:
>> Yan Zhao <yan.y.zhao@...el.com> writes:
>> 
>> > On Wed, Jul 16, 2025 at 01:57:55PM -0700, Ackerley Tng wrote:
>> >> Yan Zhao <yan.y.zhao@...el.com> writes:
>> >> 
>> >> > On Thu, Jun 05, 2025 at 03:35:50PM -0700, Ackerley Tng wrote:
>> >> >> Yan Zhao <yan.y.zhao@...el.com> writes:
>> >> >> 
>> >> >> > On Wed, Jun 04, 2025 at 01:02:54PM -0700, Ackerley Tng wrote:
>> >> >> >> Hi Yan,
>> >> >> >> 
>> >> >> >> While working on the 1G (aka HugeTLB) page support for guest_memfd
>> >> >> >> series [1], we took into account conversion failures too. The steps are
>> >> >> >> in kvm_gmem_convert_range(). (It might be easier to pull the entire
>> >> >> >> series from GitHub [2] because the steps for conversion changed in two
>> >> >> >> separate patches.)
>> >> >> > ...
>> >> >> >> [2] https://github.com/googleprodkernel/linux-cc/tree/gmem-1g-page-support-rfc-v2
>> >> >> >
>> >> >> > Hi Ackerley,
>> >> >> > Thanks for providing this branch.
>> >> >> 
>> >> >> Here's the WIP branch [1], which I initially wasn't intending to make
>> >> >> super public since it's not even up to RFC standard yet and I didn't want
>> >> >> to add to the many in-flight guest_memfd series, but since you referred
>> >> >> to it, [2] is a v2 of the WIP branch :)
>> >> >> 
>> >> >> [1] https://github.com/googleprodkernel/linux-cc/commits/wip-tdx-gmem-conversions-hugetlb-2mept
>> >> >> [2] https://github.com/googleprodkernel/linux-cc/commits/wip-tdx-gmem-conversions-hugetlb-2mept-v2
>> >> > Hi Ackerley,
>> >> >
>> >> > I'm working on preparing TDX huge page v2 based on [2] from you. The current
>> >> > decision is that the code base of TDX huge page v2 needs to include DPAMT
>> >> > and VM shutdown optimization as well.
>> >> >
>> >> > So, we think kvm-x86/next is a good candidate for us.
>> >> > (It is in repo https://github.com/kvm-x86/linux.git
>> >> >  commit 87198fb0208a (tag: kvm-x86-next-2025.07.15, kvm-x86/next) Merge branch 'vmx',
>> >> >  which already includes code for VM shutdown optimization).
>> >> > I still need to port DPAMT + gmem 1G + TDX huge page v2 on top of it.
>> >> >
>> >> > Therefore, I'm wondering if the rebase of [2] onto kvm-x86/next could be done
>> >> > on your side. A straightforward rebase is sufficient, with no need for any
>> >> > code modification. Ideally, it would be completed by the end of next week.
>> >> >
>> >> > We thought it might be easier for you to do that (depending on your
>> >> > bandwidth), which would allow me to work on the DPAMT part for TDX huge
>> >> > page v2 in parallel.
>> >> >
>> >> 
>> >> I'm a little tied up with some internal work. Is it okay if, for the
>> > No problem.
>> >
>> >> next RFC, you base the changes that you need to make for TDX huge page
>> >> v2 and DPAMT on top of [2]?
>> >
>> >> That will save both of us the rebasing. [2] was also based on (some
>> >> other version of) kvm/next.
>> >> 
>> >> I think it's okay since the main goal is to show that it works. I'll
>> >> let you know when I can get to a guest_memfd_HugeTLB v3 (and all the
>> >> other patches that go into [2]).
>> > Hmm, the upstream practice is to post code based on the latest version, and
>> > there are lots of TDX-related fixes in the latest kvm-x86/next.
>> >
>> 
>> Yup I understand.
>> 
>> For guest_memfd//HugeTLB I'm still waiting for guest_memfd//mmap
>> (managed by Fuad) to settle, and there are still plenty of comments on the
>> guest_memfd//conversion component to iron out, so the full update to v3
>> will take longer than I think you want to wait.
>> 
>> I'd say for RFCs it's okay to post patch series based on some snapshot,
>> since there are so many series in flight?
>> 
>> To unblock you, if posting based on a snapshot is really not okay, here
>> are some other options I can think of:
>> 
>> a. Use [2] and post a link to a WIP tree, similar to how [2] was
>>    done
>> b. Use some placeholder patches, assuming some interfaces of
>>    guest_memfd//HugeTLB, like how the first few patches in this series
>>    assume some interfaces of guest_memfd with THP support, and post a
>>    series based on the assumed interfaces
>> 
>> Please let me know if one of those options allows you to proceed, thanks!
> Do you see any issues with directly rebasing [2] onto 6.16.0-rc6?
>

Nope, I think that should be fine. Thanks for checking!

> We currently prefer this approach. We have tested [2] for some time, and the
> TDX huge page series doesn't rely on the implementation details of guest_memfd.
>
> It's OK if you are currently occupied with Google's internal tasks. No worries.
>
>> >> [2] https://github.com/googleprodkernel/linux-cc/commits/wip-tdx-gmem-conversions-hugetlb-2mept-v2
>> >> 
>> >> > However, if it's difficult for you, please feel free to let us know.
>> >> >
>> >> > Thanks
>> >> > Yan
