Message-ID: <aBoc0MhlvO4hR03u@google.com>
Date: Tue, 6 May 2025 14:29:36 +0000
From: Yosry Ahmed <yosry.ahmed@...ux.dev>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org,
Michael Larabel <Michael@...haellarabel.com>,
Borislav Petkov <bp@...en8.de>
Subject: Re: [PATCH v2] KVM: SVM: Set/clear SRSO's BP_SPEC_REDUCE on 0 <=> 1
VM count transitions
On Tue, May 06, 2025 at 07:16:06AM -0700, Sean Christopherson wrote:
> On Tue, May 06, 2025, Yosry Ahmed wrote:
> > On Mon, May 05, 2025 at 11:03:00AM -0700, Sean Christopherson wrote:
> > > +static void svm_srso_vm_destroy(void)
> > > +{
> > > +	if (!cpu_feature_enabled(X86_FEATURE_SRSO_BP_SPEC_REDUCE))
> > > +		return;
> > > +
> > > +	if (atomic_dec_return(&srso_nr_vms))
> > > +		return;
> > > +
> > > +	guard(spinlock)(&srso_lock);
> > > +
> > > +	/*
> > > +	 * Verify a new VM didn't come along, acquire the lock, and increment
> > > +	 * the count before this task acquired the lock.
> > > +	 */
> > > +	if (atomic_read(&srso_nr_vms))
> > > +		return;
> > > +
> > > +	on_each_cpu(svm_srso_clear_bp_spec_reduce, NULL, 1);
> >
> > Just a passing comment: I get worried about sending IPIs while
> > holding a spinlock, because if someone ever tries to take that
> > spinlock with IRQs disabled, it may cause a deadlock.
> >
> > That's not the case for this lock today, but it's not obvious (at
> > least to me) that acquiring it with IRQs disabled in a different
> > code path, one that doesn't itself send IPIs, could cause a problem.
> >
> > You could add a comment, convert it to a mutex to make this scenario
> > impossible,
>
> Using a mutex doesn't make deadlock impossible; it's still perfectly
> legal to disable IRQs while holding a mutex.
Right, but it's illegal to *acquire* a mutex with IRQs disabled. In
this case, if the other CPU is the one already holding the lock, then
there's no risk of deadlock, right?
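
To illustrate, a minimal sketch of why the mutex version can't form
the lock-vs-IPI deadlock (entirely hypothetical, all names made up,
not a real proposal):

#include <linux/mutex.h>
#include <linux/smp.h>

static DEFINE_MUTEX(example_mutex);

static void example_ipi_callback(void *unused)
{
}

/* The holder, a la svm_srso_vm_destroy(). */
static void example_sender(void)
{
	guard(mutex)(&example_mutex);

	/* Returns only after every CPU has run the callback. */
	on_each_cpu(example_ipi_callback, NULL, 1);
}

/* A contender on another CPU. */
static void example_contender(void)
{
	/*
	 * mutex_lock() can sleep, so calling it with IRQs disabled is
	 * illegal (might_sleep() splats).  A waiter therefore sleeps
	 * with IRQs enabled and can still service the sender's IPI,
	 * so the deadlock can't form.
	 */
	guard(mutex)(&example_mutex);
}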
>
> Similarly, I don't want to add a comment, because there is absolutely nothing
> special/unique about this situation/lock. E.g. KVM has tens of calls to
> smp_call_function_many_cond() while holding a spinlock equivalent, in the form
> of kvm_make_all_cpus_request() while holding mmu_lock.
Agreed that it's not a unique situation at all. Ideally we'd have some
debugging (lockdep?) magic that identifies that an IPI is being sent
while a lock is held, and verifies that this specific lock is never
spun on with IRQs disabled.
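
Short of proper lockdep support, I guess the second half could be
hand-rolled with an assertion at every acquisition site, something
like this sketch (made-up wrapper name):

#include <linux/lockdep.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_lock);

/*
 * Insist that this particular lock is never taken with IRQs disabled,
 * so that a holder broadcasting IPIs can always be serviced by
 * whoever is spinning on the lock.
 */
static void example_lock_acquire(void)
{
	lockdep_assert_irqs_enabled();
	spin_lock(&example_lock);
}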
>
> smp_call_function_many_cond() already asserts that IRQs are enabled,
> so I have zero concerns about this flow breaking in the future.
That doesn't really help tho, the problem is another CPU spinning on
the lock with IRQs disabled, regardless of the IRQ state on the CPU
sending the IPI. Basically: CPU 1 acquires the lock and sends an IPI
while CPU 2 disables IRQs and spins on the lock.
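
As a concrete sketch of that scenario (again hypothetical names, just
for illustration):

#include <linux/smp.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock);

static void demo_ipi_callback(void *unused)
{
}

/* CPU 1: holds the lock and waits for every CPU to run the callback. */
static void demo_cpu1(void)
{
	guard(spinlock)(&demo_lock);
	on_each_cpu(demo_ipi_callback, NULL, 1);
}

/* CPU 2: contends the lock with IRQs disabled. */
static void demo_cpu2(void)
{
	/*
	 * If CPU 1 holds the lock and is waiting for this CPU to run
	 * the callback, we spin here forever with IRQs off and the IPI
	 * is never delivered: deadlock.
	 */
	guard(spinlock_irqsave)(&demo_lock);
}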
>
> > or dismiss my comment as being too paranoid/ridiculous :)
>
> I wouldn't say your thought process is too paranoid; when writing the code, I had
> to pause and think to remember whether or not using on_each_cpu() while holding a
> spinlock is allowed. But I do think the conclusion is wrong :-)
That's fair. I think protection against this should be done more
generically, as I mentioned earlier, but it felt like it would be
easy-ish to side-step in this case.