Message-ID: <6afbee726c4d8d95c0d093874fb37e6ce7fd752a.camel@intel.com>
Date: Tue, 17 Jun 2025 15:52:48 +0000
From: "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>
To: "Zhao, Yan Y" <yan.y.zhao@...el.com>
CC: "quic_eberman@...cinc.com" <quic_eberman@...cinc.com>, "Li, Xiaoyao"
	<xiaoyao.li@...el.com>, "Shutemov, Kirill" <kirill.shutemov@...el.com>,
	"Hansen, Dave" <dave.hansen@...el.com>, "david@...hat.com"
	<david@...hat.com>, "thomas.lendacky@....com" <thomas.lendacky@....com>,
	"vbabka@...e.cz" <vbabka@...e.cz>, "tabba@...gle.com" <tabba@...gle.com>,
	"Du, Fan" <fan.du@...el.com>, "linux-kernel@...r.kernel.org"
	<linux-kernel@...r.kernel.org>, "seanjc@...gle.com" <seanjc@...gle.com>,
	"Weiny, Ira" <ira.weiny@...el.com>, "michael.roth@....com"
	<michael.roth@....com>, "pbonzini@...hat.com" <pbonzini@...hat.com>,
	"ackerleytng@...gle.com" <ackerleytng@...gle.com>, "Yamahata, Isaku"
	<isaku.yamahata@...el.com>, "binbin.wu@...ux.intel.com"
	<binbin.wu@...ux.intel.com>, "Peng, Chao P" <chao.p.peng@...el.com>,
	"kvm@...r.kernel.org" <kvm@...r.kernel.org>, "Annapurve, Vishal"
	<vannapurve@...gle.com>, "jroedel@...e.de" <jroedel@...e.de>, "Miao, Jun"
	<jun.miao@...el.com>, "Li, Zhiquan1" <zhiquan1.li@...el.com>,
	"pgonda@...gle.com" <pgonda@...gle.com>, "x86@...nel.org" <x86@...nel.org>
Subject: Re: [RFC PATCH 08/21] KVM: TDX: Increase/decrease folio ref for huge
 pages

On Tue, 2025-06-17 at 09:38 +0800, Yan Zhao wrote:
> > We talked about doing something like having tdx_hold_page_on_error() in
> > guestmemfd with a proper name. The separation of concerns will be better
> > if we can just tell guestmemfd that the page has an issue. Then
> > guestmemfd can decide how to handle it (refcount or whatever).
> Instead of using tdx_hold_page_on_error(), the advantage of informing
> guest_memfd that TDX is holding a page at 4KB granularity is that, even if
> there is a bug in KVM (such as forgetting to notify TDX to remove a mapping
> in handle_removed_pt()), guest_memfd would be aware that the page remains
> mapped in the TDX module. This allows guest_memfd to determine how to
> handle the problematic page (whether through refcount adjustments or other
> methods) before truncating it.

I don't think a potential bug in KVM is a good enough reason. If we are
concerned, can we think about a warning instead?
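
Roughly, I mean something like the below (just a sketch; the helper name and
where the check would live are made up):

#include <linux/mm.h>

/*
 * Hypothetical sketch, not existing code: a check at guest_memfd
 * truncation time.  If some unmap path (e.g. handle_removed_pt())
 * forgot to tell TDX to drop an S-EPT mapping, the folio would still
 * carry the extra reference TDX took, so warn loudly instead of
 * designing the refcounting around the possibility.
 */
static void gmem_warn_if_still_mapped(struct folio *folio,
				      int expected_refs)
{
	/* expected_refs is whatever the filemap state implies here */
	WARN_ON_ONCE(folio_ref_count(folio) > expected_refs);
}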

We had talked in the past about enhancing KASAN to know when a page is mapped
into the S-EPT. So rather than designing around potential bugs, we could focus
on having a simpler implementation, with the infrastructure to catch and fix
the bugs.
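
The rough shape would be annotations at the points where a page enters and
leaves the S-EPT, e.g. (illustrative only; none of these hooks exist today,
locking omitted):

#include <linux/xarray.h>
#include <linux/bug.h>

/*
 * Illustrative only: record per-PFN "mapped in S-EPT" state so that
 * freeing/truncating a still-mapped page can be caught, the way KASAN
 * catches use-after-free.
 */
static DEFINE_XARRAY(sept_mapped_pfns);

static void tdx_debug_mark_sept_mapped(unsigned long pfn)
{
	xa_store(&sept_mapped_pfns, pfn, xa_mk_value(1), GFP_KERNEL);
}

static void tdx_debug_mark_sept_unmapped(unsigned long pfn)
{
	xa_erase(&sept_mapped_pfns, pfn);
}

static void tdx_debug_check_free(unsigned long pfn)
{
	/* Freeing a page the TDX module still maps is exactly the bug
	 * class in question; make it loud. */
	WARN_ON_ONCE(xa_load(&sept_mapped_pfns, pfn));
}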

> 
> > > 
> > > This would allow guest_memfd to maintain an internal reference count
> > > for each private GFN. TDX would call guest_memfd_add_page_ref_count()
> > > for mapping and guest_memfd_dec_page_ref_count() after a successful
> > > unmapping. Before truncating a private page from the filemap,
> > > guest_memfd could increase the real folio reference count based on its
> > > internal reference count for the private GFN.
> > 
> > What does this get us exactly? Is this the argument for having less
> > error-prone code that can survive forgetting to refcount on error? I
> > don't see that it is an especially special case.
> Yes, for less error-prone code.
> 
> If this approach is considered too complex for an initial implementation,
> using tdx_hold_page_on_error() is also a viable option.
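
For reference, my reading of the internal refcount proposal quoted above is
roughly the below (the function names are from the quoted text; the per-GFN
bookkeeping is my guess at an implementation, with locking omitted):

#include <linux/xarray.h>
#include <linux/mm.h>

static DEFINE_XARRAY(gmem_priv_refs);	/* gfn -> outstanding TDX refs */

static unsigned long gmem_priv_ref_count(unsigned long gfn)
{
	void *entry = xa_load(&gmem_priv_refs, gfn);

	return entry ? xa_to_value(entry) : 0;
}

void guest_memfd_add_page_ref_count(unsigned long gfn)
{
	xa_store(&gmem_priv_refs, gfn,
		 xa_mk_value(gmem_priv_ref_count(gfn) + 1), GFP_KERNEL);
}

void guest_memfd_dec_page_ref_count(unsigned long gfn)
{
	unsigned long refs = gmem_priv_ref_count(gfn);

	if (WARN_ON_ONCE(!refs))
		return;
	if (--refs)
		xa_store(&gmem_priv_refs, gfn, xa_mk_value(refs), GFP_KERNEL);
	else
		xa_erase(&gmem_priv_refs, gfn);
}

/* Before truncation: convert any outstanding private refs into real
 * folio references so a still-mapped page doesn't get reused. */
void gmem_pin_outstanding(struct folio *folio, unsigned long gfn)
{
	unsigned long refs = gmem_priv_ref_count(gfn);

	if (refs)
		folio_ref_add(folio, refs);
}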

I'm saying I don't think it's a good enough reason. Why is it different from
other use-after-free bugs? I feel like I'm missing something.
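
Whereas the simpler option I was describing is just a one-shot notification
(again a sketch; the name and the pin-forever policy are placeholders, not
settled API):

#include <linux/mm.h>

/*
 * Sketch of the "tell guest_memfd the page has an issue" idea.
 * guest_memfd owns the policy; pinning the folio forever so it is
 * never reused is just one option it could pick.
 */
void gmem_folio_set_error(struct folio *folio)
{
	folio_get(folio);	/* leak it rather than let it be reused */
}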
