Message-ID: <86v7guar0g.wl-maz@kernel.org>
Date: Thu, 22 Jan 2026 10:19:43 +0000
From: Marc Zyngier <maz@...nel.org>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: Thomas Gleixner <tglx@...nel.org>,
Ankit Soni <Ankit.Soni@....com>,
Sean Christopherson <seanjc@...gle.com>,
Oliver Upton <oliver.upton@...ux.dev>,
Joerg Roedel <joro@...tes.org>,
David Woodhouse <dwmw2@...radead.org>,
Lu Baolu <baolu.lu@...ux.intel.com>,
linux-arm-kernel@...ts.infradead.org,
kvmarm@...ts.linux.dev,
kvm@...r.kernel.org,
iommu@...ts.linux.dev,
linux-kernel@...r.kernel.org,
Sairaj Kodilkar <sarunkod@....com>,
Vasant Hegde <vasant.hegde@....com>,
Maxim Levitsky <mlevitsk@...hat.com>,
Joao Martins <joao.m.martins@...cle.com>,
Francesco Lavra <francescolavra.fl@...il.com>,
David Matlack <dmatlack@...gle.com>,
Naveen Rao <Naveen.Rao@....com>,
Crystal Wood <crwood@...hat.com>
Subject: Re: possible deadlock due to irq_set_thread_affinity() calling into the scheduler (was Re: [PATCH v3 38/62] KVM: SVM: Take and hold ir_list_lock across IRTE updates in IOMMU)
On Wed, 21 Jan 2026 18:13:43 +0000,
Paolo Bonzini <pbonzini@...hat.com> wrote:
>
> Sorry, not sure how the previous email ended up encrypted.
>
> On 1/8/26 22:53, Thomas Gleixner wrote:
> > On Thu, Jan 08 2026 at 22:28, Thomas Gleixner wrote:
> >> On Mon, Dec 22 2025 at 15:09, Paolo Bonzini wrote:
> >>> Of the three, the most sketchy is (a); notably, __setup_irq() calls
> >>> wake_up_process outside desc->lock. Therefore I'd like so much to treat
> >>> it as a kernel/irq/ bug; and the simplest (perhaps too simple...) fix is
> >>
> >> It's not more sketchy than VIRT assuming that it can do what it wants
> >> under rq->lock. 🙂
> >
> > And just for the record, that's not the only place in the irq core which
> > has that lock chain.
> >
> > irq_set_affinity_locked() // invoked with desc::lock held
> > if (desc->affinity_notify)
> > schedule_work() // Ends up taking rq::lock
> >
> > and that's the case since cd7eab44e994 ("genirq: Add IRQ affinity
> > notifiers"), which was added 15 years ago.
> >
> > Are you still claiming that this is a kernel/irq bug?
>
> Not really, I did say I'd like to treat it as a kernel/irq bug...
> but certainly didn't have hopes high enough to "claim" that.
> I do think that it's ugly to have locks that are internal,
> non-leaf and held around callbacks; but people smarter than
> me have thought about it and you can't call it a bug anyway.
>
> For x86/AMD we have a way to fix it, so that part is not a problem.
>
> For the call(*) to irq_set_affinity() in arch/arm64/kvm/'s
> vgic_v4_load() I think it can be solved as well.
> kvm_make_request(KVM_REQ_RELOAD_GICv4) will delay vgic_v4_load()
> to a safe spot, so just cache the previous smp_processor_id() and,
> if it is different, do the kvm_make_request() and return instead
> of calling irq_set_affinity().
>
> vgic_v3_load() is the only place that calls it from the preempt
> notifier, so this behavior can be tied to a "bool delay_set_affinity"
> argument to vgic_v4_load() or placed in a different function.
>
> Marc/Oliver, does that sound doable?
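The delayed-affinity idea quoted above can be sketched roughly as follows. This is a userspace simulation, not the actual KVM code: kvm_make_request(), irq_set_affinity() and smp_processor_id() are replaced by stubs, and the cached-CPU field name is hypothetical.

```c
#include <assert.h>
#include <stdbool.h>

/* Stubs standing in for kernel helpers (hypothetical, for illustration). */
static int current_cpu;
static int smp_processor_id(void) { return current_cpu; }

static bool reload_requested;
static void kvm_make_request_reload_gicv4(void) { reload_requested = true; }

static int affinity_cpu = -1;
static void irq_set_affinity_stub(int cpu) { affinity_cpu = cpu; }

/* Per-vCPU state: the CPU the doorbell affinity was last set for. */
struct vgic_v4_state {
	int last_cpu;	/* hypothetical cached smp_processor_id() */
};

/*
 * Sketch of the proposal: when called from a context where
 * irq_set_affinity() could end up taking rq::lock (e.g. the preempt
 * notifier path), compare against the cached CPU; if it changed,
 * request a reload at a safe spot instead of touching the affinity
 * here.  In a safe context, set the affinity and cache the CPU.
 */
static void vgic_v4_load(struct vgic_v4_state *s, bool delay_set_affinity)
{
	int cpu = smp_processor_id();

	if (delay_set_affinity) {
		if (cpu != s->last_cpu) {
			kvm_make_request_reload_gicv4();
			return;	/* handled later, outside rq::lock */
		}
		return;		/* same CPU, nothing to do */
	}

	irq_set_affinity_stub(cpu);
	s->last_cpu = cpu;
}
```

The split on `delay_set_affinity` mirrors the suggestion that only the preempt-notifier caller (via vgic_v3_load()) needs the deferred behavior; every other caller keeps setting the affinity directly.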
Potentially. But there are a few gotchas that may need surgery beyond
KVM itself, all the way down to the ITS code that abstracts the
differences between v4.0 and v4.1.
I'll have a look over the weekend.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.