Message-ID: <aPtOtzGLigbY0Vqw@yzhao56-desk.sh.intel.com>
Date: Fri, 24 Oct 2025 18:02:31 +0800
From: Yan Zhao <yan.y.zhao@...el.com>
To: Sean Christopherson <seanjc@...gle.com>
CC: Marc Zyngier <maz@...nel.org>, Oliver Upton <oliver.upton@...ux.dev>,
Tianrui Zhao <zhaotianrui@...ngson.cn>, Bibo Mao <maobibo@...ngson.cn>,
Huacai Chen <chenhuacai@...nel.org>, Madhavan Srinivasan
<maddy@...ux.ibm.com>, Anup Patel <anup@...infault.org>, Paul Walmsley
<pjw@...nel.org>, Palmer Dabbelt <palmer@...belt.com>, Albert Ou
<aou@...s.berkeley.edu>, Christian Borntraeger <borntraeger@...ux.ibm.com>,
Janosch Frank <frankja@...ux.ibm.com>, Claudio Imbrenda
<imbrenda@...ux.ibm.com>, Paolo Bonzini <pbonzini@...hat.com>, "Kirill A.
Shutemov" <kas@...nel.org>, <linux-arm-kernel@...ts.infradead.org>,
<kvmarm@...ts.linux.dev>, <kvm@...r.kernel.org>, <loongarch@...ts.linux.dev>,
<linux-mips@...r.kernel.org>, <linuxppc-dev@...ts.ozlabs.org>,
<kvm-riscv@...ts.infradead.org>, <linux-riscv@...ts.infradead.org>,
<x86@...nel.org>, <linux-coco@...ts.linux.dev>,
<linux-kernel@...r.kernel.org>, Ira Weiny <ira.weiny@...el.com>, Kai Huang
<kai.huang@...el.com>, Michael Roth <michael.roth@....com>, Vishal Annapurve
<vannapurve@...gle.com>, Rick Edgecombe <rick.p.edgecombe@...el.com>,
Ackerley Tng <ackerleytng@...gle.com>, Binbin Wu <binbin.wu@...ux.intel.com>
Subject: Re: [PATCH v3 24/25] KVM: TDX: Guard VM state transitions with "all"
the locks
On Thu, Oct 16, 2025 at 05:32:42PM -0700, Sean Christopherson wrote:
> Acquire kvm->lock, kvm->slots_lock, and all vcpu->mutex locks when
> servicing ioctls that (a) transition the TD to a new state, i.e. when
> doing INIT or FINALIZE or (b) are only valid if the TD is in a specific
> state, i.e. when initializing a vCPU or memory region. Acquiring "all"
> the locks fixes several KVM_BUG_ON() situations where a SEAMCALL can fail
> due to racing actions, e.g. if tdh_vp_create() contends with either
> tdh_mr_extend() or tdh_mr_finalize().
>
> For all intents and purposes, the paths in question are fully serialized,
> i.e. there's no reason to try and allow anything remotely interesting to
> happen. Smack 'em with a big hammer instead of trying to be "nice".
>
> Acquire kvm->lock to prevent VM-wide things from happening, slots_lock to
> prevent kvm_mmu_zap_all_fast(), and _all_ vCPU mutexes to prevent vCPUs
Should this be slots_lock to prevent kvm_mmu_zap_memslot()?
kvm_mmu_zap_all_fast() does not operate on the mirror root.
We may have missed the zap done in the guest_memfd punch-hole path:
The SEAMCALLs tdh_mem_range_block(), tdh_mem_track() and tdh_mem_page_remove()
in the guest_memfd punch-hole path are only protected by the filemap invalidate
lock and mmu_lock, so they could contend with the v1 version of tdh_vp_init().
(I'm writing a selftest to verify this; a rough skeleton is sketched after the
table below. I haven't been able to reproduce tdh_vp_init(v1) returning BUSY
yet, but the race should be theoretically possible.)
Resources    SHARED users               EXCLUSIVE users
------------------------------------------------------------------------
(1) TDR      tdh_mng_rdwr               tdh_mng_create
             tdh_vp_create              tdh_mng_add_cx
             tdh_vp_addcx               tdh_mng_init
             tdh_vp_init(v0)            tdh_mng_vpflushdone
             tdh_vp_enter               tdh_mng_key_config
             tdh_vp_flush               tdh_mng_key_freeid
             tdh_vp_rd_wr               tdh_mr_extend
             tdh_mem_sept_add           tdh_mr_finalize
             tdh_mem_sept_remove        tdh_vp_init(v1)
             tdh_mem_page_aug           tdh_mem_page_add
             tdh_mem_page_remove
             tdh_mem_range_block
             tdh_mem_track
             tdh_mem_range_unblock
             tdh_phymem_page_reclaim
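For reference, the skeleton of the reproducer I have in mind looks roughly
like the below (just a sketch against the raw uAPIs; creating the TD, the
guest_memfd-backed private memslot, and populating it so that the punch hole
actually issues the SEAMCALLs are all elided):

#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <sys/ioctl.h>
#include <linux/falloc.h>
#include <linux/kvm.h>

/*
 * Thread A: punch holes in the guest_memfd in a loop, so that
 * tdh_mem_range_block()/tdh_mem_track()/tdh_mem_page_remove() run with
 * only the filemap invalidate lock and mmu_lock held.
 */
static void *punch_holes(void *arg)
{
	int gmem_fd = *(int *)arg;

	for (;;)
		fallocate(gmem_fd,
			  FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
			  0, 2UL << 20);
	return NULL;
}

/*
 * Main thread: issue KVM_TDX_INIT_VCPU (i.e. tdh_vp_init(v1)) on a vCPU
 * that hasn't been initialized yet while thread A keeps punching holes,
 * and watch for a TDX_OPERAND_BUSY failure.
 */
static int try_init_vcpu(int vcpu_fd)
{
	struct kvm_tdx_cmd cmd = { .id = KVM_TDX_INIT_VCPU };

	return ioctl(vcpu_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
}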
Do you think we could acquire mmu_lock for the KVM_TDX_INIT_VCPU cmd, e.g.
something like the sketch below?
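(A minimal sketch, untested; the tdh_vp_init() call follows my understanding
of the current tdx_td_vcpu_init() flow, only the mmu_lock placement is the
new, assumed part):

	/*
	 * Hold mmu_lock across the v1 TDH.VP.INIT so it cannot contend with
	 * tdh_mem_range_block()/tdh_mem_track()/tdh_mem_page_remove(), which
	 * the punch-hole path issues under write mmu_lock.
	 */
	write_lock(&vcpu->kvm->mmu_lock);
	err = tdh_vp_init(&tdx->vp, vcpu_rcx, vcpu->vcpu_id);
	write_unlock(&vcpu->kvm->mmu_lock);
	if (KVM_BUG_ON(err, vcpu->kvm))
		return -EIO;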
> @@ -3155,12 +3198,13 @@ int tdx_vcpu_unlocked_ioctl(struct kvm_vcpu *vcpu, void __user *argp)
> if (r)
> return r;
>
> + CLASS(tdx_vm_state_guard, guard)(kvm);
Should we move the guard inside each cmd? Then there's no need to acquire the
locks in the default case, e.g. something like the sketch below.
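(A sketch only; which cmds the switch actually handles and the error handling
of the guard follow the real code and are omitted here):

	switch (cmd.id) {
	case KVM_TDX_INIT_VCPU: {
		/* Take "all" the locks only for cmds that transition VM state. */
		CLASS(tdx_vm_state_guard, guard)(kvm);

		r = tdx_vcpu_init(vcpu, &cmd);
		break;
	}
	default:
		/* No kvm->lock/slots_lock/vcpu mutexes needed just to fail. */
		r = -EINVAL;
		break;
	}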