Message-ID: <aC1221wU6Mby3Lo3@yzhao56-desk.sh.intel.com>
Date: Wed, 21 May 2025 14:46:51 +0800
From: Yan Zhao <yan.y.zhao@...el.com>
To: Ackerley Tng <ackerleytng@...gle.com>
CC: <michael.roth@....com>, <kvm@...r.kernel.org>,
<linux-coco@...ts.linux.dev>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>, <jroedel@...e.de>, <thomas.lendacky@....com>,
<pbonzini@...hat.com>, <seanjc@...gle.com>, <vbabka@...e.cz>,
<amit.shah@....com>, <pratikrajesh.sampat@....com>, <ashish.kalra@....com>,
<liam.merwick@...cle.com>, <david@...hat.com>, <vannapurve@...gle.com>,
<quic_eberman@...cinc.com>
Subject: Re: [PATCH 3/5] KVM: gmem: Hold filemap invalidate lock while
allocating/preparing folios
On Mon, May 19, 2025 at 10:04:45AM -0700, Ackerley Tng wrote:
> Ackerley Tng <ackerleytng@...gle.com> writes:
>
> > Yan Zhao <yan.y.zhao@...el.com> writes:
> >
> >> On Fri, Mar 14, 2025 at 05:20:21PM +0800, Yan Zhao wrote:
> >>> This patch would cause a host deadlock when booting a TDX VM even if huge
> >>> pages are turned off. I have reverted this patch for now; no further
> >>> debugging yet.
> >> This is because kvm_gmem_populate() takes filemap invalidation lock, and for
> >> TDX, kvm_gmem_populate() further invokes kvm_gmem_get_pfn(), causing deadlock.
> >>
> >> kvm_gmem_populate
> >>  filemap_invalidate_lock
> >>  post_populate
> >>   tdx_gmem_post_populate
> >>    kvm_tdp_map_page
> >>     kvm_mmu_do_page_fault
> >>      kvm_tdp_page_fault
> >>       kvm_tdp_mmu_page_fault
> >>        kvm_mmu_faultin_pfn
> >>         __kvm_mmu_faultin_pfn
> >>          kvm_mmu_faultin_pfn_private
> >>           kvm_gmem_get_pfn
> >>            filemap_invalidate_lock_shared
> >>
> >> Even if kvm_gmem_populate() took the filemap invalidate lock in shared mode
> >> (avoiding the deadlock), lockdep would still warn "Possible unsafe locking
> >> scenario: ...DEADLOCK" about the recursive shared lock, since commit
> >> e918188611f0 ("locking: More accurate annotations for read_lock()").
> >>
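FWIW, here's a minimal userspace sketch (my illustration, not gmem or kernel
code) of why lockdep flags the recursive shared lock: if a writer queues
between two read acquisitions by the same thread on a writer-preferring
rwlock, everyone blocks.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;

static void *writer(void *arg)
{
	sleep(1);			/* let main() take the read lock first */
	pthread_rwlock_wrlock(&lock);	/* queues behind the reader */
	pthread_rwlock_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_rwlock_rdlock(&lock);	/* outer shared acquisition */
	pthread_create(&t, NULL, writer, NULL);
	sleep(2);			/* writer is now queued */

	/*
	 * Recursive shared acquisition. On a writer-preferring lock this
	 * blocks behind the queued writer, which blocks behind the first
	 * rdlock: the "...DEADLOCK" scenario above. (glibc's default
	 * reader preference happens to let it pass.)
	 */
	pthread_rwlock_rdlock(&lock);
	puts("no deadlock on this rwlock implementation");
	pthread_rwlock_unlock(&lock);
	pthread_rwlock_unlock(&lock);
	pthread_join(t, NULL);
	return 0;
}
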
> >
> > Thank you for investigating. This should be fixed in the next revision.
> >
>
> This was not fixed in v2 [1]; I had misunderstood this locking issue.
>
> IIUC kvm_gmem_populate() gets a pfn via __kvm_gmem_get_pfn(), then calls
> part of the KVM fault handler to map the pfn into secure EPTs, then
> calls the TDX module for the copy+encrypt.
>
> Regarding this lock, it seems KVM's MMU lock is already held while TDX
> does the copy+encrypt. Why must the filemap_invalidate_lock() also be
> held throughout the process?
If kvm_gmem_populate() does not hold the filemap invalidate lock around all
requested pages, what value should it return after kvm_gmem_punch_hole() zaps
a mapping it has just successfully installed?
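To illustrate (my reading; the interleaving is hypothetical):

  kvm_gmem_populate()                       kvm_gmem_punch_hole()
  ----------------------------------       ----------------------------
  lock; get_pfn + post_populate page 0
  unlock
                                            lock; zap pages 0..N; unlock
  lock; get_pfn + post_populate page 1
  unlock
  ...
  returns N?  (page 0 was populated, but its mapping is already gone)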
TDX currently only holds kvm->mmu_lock for read in tdx_gmem_post_populate()
when CONFIG_KVM_PROVE_MMU is enabled, since both slots_lock and the filemap
invalidate lock are already taken in kvm_gmem_populate().
It looks like sev_gmem_post_populate() does not take kvm->mmu_lock either.
I think kvm_gmem_populate() needs to hold the filemap invalidate lock at least
around each __kvm_gmem_get_pfn() + post_populate() + kvm_gmem_mark_prepared()
sequence.
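Something like this (a rough sketch against the general shape of the loop in
kvm_gmem_populate(); argument lists abbreviated, not a tested patch):

	for (i = 0; i < npages; i += (1 << max_order)) {
		filemap_invalidate_lock(file->f_mapping);

		folio = __kvm_gmem_get_pfn(file, slot, index + i, &pfn,
					   &max_order);
		if (IS_ERR(folio)) {
			ret = PTR_ERR(folio);
			filemap_invalidate_unlock(file->f_mapping);
			break;
		}

		ret = post_populate(kvm, gfn + i, pfn, src, max_order, opaque);
		if (!ret)
			kvm_gmem_mark_prepared(folio);

		folio_unlock(folio);
		folio_put(folio);
		filemap_invalidate_unlock(file->f_mapping);

		if (ret)
			break;
	}

That still leaves the nested filemap_invalidate_lock_shared() inside TDX's
post_populate() callback to be resolved, of course.
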
> If we don't have to hold the filemap_invalidate_lock() throughout,
>
> 1. Would it be possible to call kvm_gmem_get_pfn() to get the pfn
> instead of calling __kvm_gmem_get_pfn() and managing the lock in a
> loop?
>
> 2. Would it be possible to trigger the kvm fault path from
> kvm_gmem_populate() so that we don't rebuild the get_pfn+mapping
> logic and reuse the entire faulting code? That way the
> filemap_invalidate_lock() will only be held while getting a pfn.
The KVM fault path is invoked from TDX's post_populate() callback; I haven't
found a good way to move it into kvm_gmem_populate().
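Roughly (heavily trimmed, arguments elided; going from memory of the TDX
series, so take the details with a grain of salt):

static int tdx_gmem_post_populate(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
				  void __user *src, int order, void *_arg)
{
	...
	/*
	 * Map the gfn first; this is where the fault path -- and the
	 * nested kvm_gmem_get_pfn() -> filemap_invalidate_lock_shared()
	 * in the trace above -- runs.
	 */
	ret = kvm_tdp_map_page(vcpu, gpa, error_code, &level);
	if (ret)
		return ret;

	/* then the seamcall that copies + encrypts the source page */
	err = tdh_mem_page_add(...);
	...
}

The mapping must already exist (and a vcpu must be in scope) when the
seamcall runs, so hoisting the fault path into kvm_gmem_populate() would
drag TDX-specific ordering into common gmem code.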
> [1] https://lore.kernel.org/all/cover.1747264138.git.ackerleytng@google.com/T/
>
> >>> > @@ -819,12 +827,16 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
> >>> > pgoff_t index = kvm_gmem_get_index(slot, gfn);
> >>> > struct file *file = kvm_gmem_get_file(slot);
> >>> > int max_order_local;
> >>> > + struct address_space *mapping;
> >>> > struct folio *folio;
> >>> > int r = 0;
> >>> >
> >>> > if (!file)
> >>> > return -EFAULT;
> >>> >
> >>> > + mapping = file->f_inode->i_mapping;
> >>> > + filemap_invalidate_lock_shared(mapping);
> >>> > +
> >>> > /*
> >>> > * The caller might pass a NULL 'max_order', but internally this
> >>> > * function needs to be aware of any order limitations set by
> >>> > @@ -838,6 +850,7 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
> >>> > folio = __kvm_gmem_get_pfn(file, slot, index, pfn, &max_order_local);
> >>> > if (IS_ERR(folio)) {
> >>> > r = PTR_ERR(folio);
> >>> > + filemap_invalidate_unlock_shared(mapping);
> >>> > goto out;
> >>> > }
> >>> >
> >>> > @@ -845,6 +858,7 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
> >>> > r = kvm_gmem_prepare_folio(kvm, file, slot, gfn, folio, max_order_local);
> >>> >
> >>> > folio_unlock(folio);
> >>> > + filemap_invalidate_unlock_shared(mapping);
> >>> >
> >>> > if (!r)
> >>> > *page = folio_file_page(folio, index);
> >>> > --
> >>> > 2.25.1
> >>> >
> >>> >
>