Message-ID: <2d1f56370b644621b4e3bb8c0c47590e@baidu.com>
Date: Mon, 12 May 2025 05:03:17 +0000
From: "Li,Rongqing" <lirongqing@...du.com>
To: Sean Christopherson <seanjc@...gle.com>
CC: "pbonzini@...hat.com" <pbonzini@...hat.com>, "kvm@...r.kernel.org"
<kvm@...r.kernel.org>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, "Li,Zhaoxin(ACG CCN)" <lizhaoxin04@...du.com>
Subject: Re: [PATCH] KVM: Use call_rcu() in kvm_io_bus_register_dev
> Ah, so this isn't about device creation from userspace, rather it's about reacting
> to the guest's configuration of a device, e.g. to register doorbells when the
> guest instantiates queues for a device?
>
Yes, the ioeventfds are registered when the guest instantiates queues.
> > can ioeventfd uses call_srcu?
>
> No, because that has the same problem of KVM not ensuring vCPUs will observe
> the change before returning to userspace.
>
> Unfortunately, I don't see an easy solution. At a glance, every architecture
> except arm64 could switch to protect kvm->buses with a rwlock, but arm64 uses
> the MMIO bus for the vGIC's ITS, and I don't think it's feasible to make the ITS
> stuff play nice with a rwlock. E.g. vgic_its.its_lock and vgic_its.cmd_lock are
> mutexes, and there are multiple ITS paths that access guest memory, i.e. might
> sleep due to faulting.
>
> Even if we did something x86-centric, e.g. further special case
> KVM_FAST_MMIO_BUS with a rwlock, I worry that using a rwlock would
> degrade steady state performance, e.g. due to cross-CPU atomic accesses.
>
> Does using a dedicated SRCU structure resolve the issue? E.g. add and use
> kvm->buses_srcu instead of kvm->srcu? x86's usage of the MMIO/PIO buses is
> limited to kvm_io_bus_{read,write}(), so it should be easy enough to do a
> super quick and dirty PoC.
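If it helps, my rough understanding of that suggestion is something like the
following. This is only an untested sketch: the buses_srcu field name comes
from the suggestion above, and the init/teardown and write-side changes are
assumptions, not a patch.

```c
/* Untested sketch: give kvm->buses a dedicated SRCU domain so that
 * synchronizing a bus update does not have to wait for readers of kvm->srcu.
 * Assumes a new field in struct kvm, initialized alongside kvm->srcu:
 *
 *	struct srcu_struct buses_srcu;	// protects kvm->buses[] only
 */
int kvm_io_bus_read(struct kvm_vcpu *vcpu, enum kvm_bus bus_idx, gpa_t addr,
		    int len, void *val)
{
	struct kvm_io_bus *bus;
	int idx, r;

	idx = srcu_read_lock(&vcpu->kvm->buses_srcu);
	bus = srcu_dereference(vcpu->kvm->buses[bus_idx],
			       &vcpu->kvm->buses_srcu);
	r = bus ? __kvm_io_bus_read(vcpu, bus, addr, len, val) : -ENOMEM;
	srcu_read_unlock(&vcpu->kvm->buses_srcu, idx);
	return r;
}

/* The registration side would then rcu_assign_pointer() the new bus and call
 * synchronize_srcu(&kvm->buses_srcu) instead of synchronizing on kvm->srcu.
 */
```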
Could you write a patch? We can test it.
Thanks
-Li