Message-ID: <aQgHOcy28pSEfTZK@yzhao56-desk.sh.intel.com>
Date: Mon, 3 Nov 2025 09:36:57 +0800
From: Yan Zhao <yan.y.zhao@...el.com>
To: Sean Christopherson <seanjc@...gle.com>
CC: Marc Zyngier <maz@...nel.org>, Oliver Upton <oliver.upton@...ux.dev>,
Tianrui Zhao <zhaotianrui@...ngson.cn>, Bibo Mao <maobibo@...ngson.cn>,
Huacai Chen <chenhuacai@...nel.org>, Madhavan Srinivasan
<maddy@...ux.ibm.com>, Anup Patel <anup@...infault.org>, Paul Walmsley
<pjw@...nel.org>, Palmer Dabbelt <palmer@...belt.com>, Albert Ou
<aou@...s.berkeley.edu>, Christian Borntraeger <borntraeger@...ux.ibm.com>,
Janosch Frank <frankja@...ux.ibm.com>, Claudio Imbrenda
<imbrenda@...ux.ibm.com>, Paolo Bonzini <pbonzini@...hat.com>, "Kirill A.
Shutemov" <kas@...nel.org>, <linux-arm-kernel@...ts.infradead.org>,
<kvmarm@...ts.linux.dev>, <kvm@...r.kernel.org>, <loongarch@...ts.linux.dev>,
<linux-mips@...r.kernel.org>, <linuxppc-dev@...ts.ozlabs.org>,
<kvm-riscv@...ts.infradead.org>, <linux-riscv@...ts.infradead.org>,
<x86@...nel.org>, <linux-coco@...ts.linux.dev>,
<linux-kernel@...r.kernel.org>, Ira Weiny <ira.weiny@...el.com>, Kai Huang
<kai.huang@...el.com>, Binbin Wu <binbin.wu@...ux.intel.com>, Michael Roth
<michael.roth@....com>, Vishal Annapurve <vannapurve@...gle.com>, "Rick
Edgecombe" <rick.p.edgecombe@...el.com>, Ackerley Tng
<ackerleytng@...gle.com>
Subject: Re: [PATCH v4 26/28] KVM: TDX: Guard VM state transitions with "all"
the locks
On Fri, Oct 31, 2025 at 10:34:51AM -0700, Sean Christopherson wrote:
> On Fri, Oct 31, 2025, Yan Zhao wrote:
> > On Thu, Oct 30, 2025 at 01:09:49PM -0700, Sean Christopherson wrote:
> > > Acquire kvm->lock, kvm->slots_lock, and all vcpu->mutex locks when
> > > servicing ioctls that (a) transition the TD to a new state, i.e. when
> > > doing INIT or FINALIZE or (b) are only valid if the TD is in a specific
> > > state, i.e. when initializing a vCPU or memory region. Acquiring "all"
> > > the locks fixes several KVM_BUG_ON() situations where a SEAMCALL can fail
> > > due to racing actions, e.g. if tdh_vp_create() contends with either
> > > tdh_mr_extend() or tdh_mr_finalize().
> > >
> > > For all intents and purposes, the paths in question are fully serialized,
> > > i.e. there's no reason to try and allow anything remotely interesting to
> > > happen. Smack 'em with a big hammer instead of trying to be "nice".
> > >
> > > Acquire kvm->lock to prevent VM-wide things from happening, slots_lock to
> > > prevent kvm_mmu_zap_all_fast(), and _all_ vCPU mutexes to prevent vCPUs
> > s/kvm_mmu_zap_all_fast/kvm_mmu_zap_memslot
>
> Argh! Third time's a charm? Hopefully...
>
> > > @@ -3170,7 +3208,8 @@ static int tdx_vcpu_init_mem_region(struct kvm_vcpu *vcpu, struct kvm_tdx_cmd *c
> > >
> > > int tdx_vcpu_unlocked_ioctl(struct kvm_vcpu *vcpu, void __user *argp)
> > > {
> > > - struct kvm_tdx *kvm_tdx = to_kvm_tdx(vcpu->kvm);
> > > + struct kvm *kvm = vcpu->kvm;
> > > + struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
> > reverse xmas tree ?
>
> No, because the shorter line generates an input to the longer line. E.g. we could
> do this if we really, really want an xmas tree:
>
> struct kvm_tdx *kvm_tdx = to_kvm_tdx(vcpu->kvm);
> struct kvm *kvm = vcpu->kvm;
>
> but this won't compile
>
> struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
> struct kvm *kvm = vcpu->kvm;
Ah! Sorry, my attention was caught by the line length and I completely missed
the dependency :(