Message-ID: <flitmw3b2imbpbyhygaushwt4h6k3fa5l2ebdfkl7knw5ghvod@ioqe2bsvq6vz>
Date: Tue, 23 Dec 2025 08:59:15 +0000
From: Ankit Soni <Ankit.Soni@....com>
To: Paolo Bonzini <pbonzini@...hat.com>
CC: Sean Christopherson <seanjc@...gle.com>, Marc Zyngier <maz@...nel.org>,
	Oliver Upton <oliver.upton@...ux.dev>, Joerg Roedel <joro@...tes.org>, "David
 Woodhouse" <dwmw2@...radead.org>, Lu Baolu <baolu.lu@...ux.intel.com>,
	<linux-arm-kernel@...ts.infradead.org>, <kvmarm@...ts.linux.dev>,
	<kvm@...r.kernel.org>, <iommu@...ts.linux.dev>,
	<linux-kernel@...r.kernel.org>, Sairaj Kodilkar <sarunkod@....com>, "Vasant
 Hegde" <vasant.hegde@....com>, Maxim Levitsky <mlevitsk@...hat.com>, "Joao
 Martins" <joao.m.martins@...cle.com>, Francesco Lavra
	<francescolavra.fl@...il.com>, David Matlack <dmatlack@...gle.com>, "Naveen
 Rao" <Naveen.Rao@....com>
Subject: Re: possible deadlock due to irq_set_thread_affinity() calling into
 the scheduler (was Re: [PATCH v3 38/62] KVM: SVM: Take and hold ir_list_lock
 across IRTE updates in IOMMU)

On Mon, Dec 22, 2025 at 03:09:13PM +0100, Paolo Bonzini wrote:
> On 12/22/25 10:16, Ankit Soni wrote:
> >    ======================================================
> >    WARNING: possible circular locking dependency detected
> >    6.19.0-rc2 #20 Tainted: G            E
> >    ------------------------------------------------------
> >    CPU 58/KVM/28597 is trying to acquire lock:
> >      ff12c47d4b1f34c0 (&irq_desc_lock_class){-.-.}-{2:2}, at: __irq_get_desc_lock+0x58/0xa0
> > 
> >      but task is already holding lock:
> >      ff12c49b28552110 (&svm->ir_list_lock){....}-{2:2}, at: avic_pi_update_irte+0x147/0x270 [kvm_amd]
> > 
> >      which lock already depends on the new lock.
> > 
> >    Chain exists of:
> >      &irq_desc_lock_class --> &rq->__lock --> &svm->ir_list_lock
> > 
> >    Possible unsafe locking scenario:
> > 
> >          CPU0                            CPU1
> >          ----                            ----
> >     lock(&svm->ir_list_lock);
> >                                        lock(&rq->__lock);
> >                                        lock(&svm->ir_list_lock);
> >     lock(&irq_desc_lock_class);
> > 
> >          *** DEADLOCK ***
> > 
> > So lockdep sees:
> > 
> >    &irq_desc_lock_class -> &rq->__lock -> &svm->ir_list_lock
> > 
> > while avic_pi_update_irte() currently holds svm->ir_list_lock and then
> > takes irq_desc_lock via irq_set_vcpu_affinity(), which creates the
> > potential inversion.
> > 
> >    - Is this lockdep warning expected/benign in this code path, or does it
> >      indicate a real potential deadlock between svm->ir_list_lock and
> >      irq_desc_lock with AVIC + irq_bypass + VFIO?
> 
> I'd treat it as a potential (if unlikely) deadlock:
> 
> (a) irq_set_thread_affinity triggers the scheduler via wake_up_process,
> while irq_desc->lock is taken
> 
> (b) the scheduler calls into KVM with rq_lock taken, and KVM uses
> ir_list_lock within __avic_vcpu_load/__avic_vcpu_put
> 
> (c) KVM wants to block scheduling for a while and uses ir_list_lock for
> that purpose, but then irq_set_vcpu_affinity() takes irq_desc->lock.
> 
> I don't think there's an alternative choice of lock for (c); and there's
> no easy way to pull the irq_desc->lock out of the IRQ subsystem. In
> fact, the stickiness of the situation comes from rq->__lock and
> irq_desc->lock both being internal and not leaf locks.
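> 
> To spell the cycle out, a rough sketch of the three paths (call chains
> abbreviated from the splats above, not exact code):
> 
> 	/* (a) affinity update: desc->lock -> rq->__lock */
> 	raw_spin_lock_irqsave(&desc->lock, flags);
> 	irq_set_thread_affinity(desc);
> 		wake_up_process(action->thread);	/* takes rq->__lock */
> 
> 	/* (b) preemption of the vCPU: rq->__lock -> ir_list_lock */
> 	__schedule();
> 		kvm_sched_out() -> avic_vcpu_put();
> 			spin_lock_irqsave(&svm->ir_list_lock, flags);
> 
> 	/* (c) IRTE update: ir_list_lock -> desc->lock */
> 	avic_pi_update_irte();
> 		spin_lock_irqsave(&svm->ir_list_lock, flags);
> 		irq_set_vcpu_affinity(host_irq, ...);	/* takes desc->lock */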
> 
> Of the three, the sketchiest is (a); notably, __setup_irq() calls
> wake_up_process() outside desc->lock.  Therefore I'd very much like to
> treat this as a kernel/irq/ bug; and the simplest (perhaps too
> simple...) fix is to drop the wake_up_process().  The only cost is
> extra latency on the next interrupt after an affinity change.
> 
> diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
> index 8b1b4c8a4f54..fc135bd079a4 100644
> --- a/kernel/irq/manage.c
> +++ b/kernel/irq/manage.c
> @@ -189,14 +189,10 @@ static void irq_set_thread_affinity(struct irq_desc *desc)
>  	struct irqaction *action;
> 
>  	for_each_action_of_desc(desc, action) {
> -		if (action->thread) {
> +		if (action->thread)
>  			set_bit(IRQTF_AFFINITY, &action->thread_flags);
> -			wake_up_process(action->thread);
> -		}
> -		if (action->secondary && action->secondary->thread) {
> +		if (action->secondary && action->secondary->thread)
>  			set_bit(IRQTF_AFFINITY, &action->secondary->thread_flags);
> -			wake_up_process(action->secondary->thread);
> -		}
>  	}
>  }
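> 
> (The IRQTF_AFFINITY bit is then consumed lazily by the IRQ thread
> itself; roughly, and from memory of kernel/irq/manage.c:
> 
> 	static int irq_thread(void *data)
> 	{
> 		...
> 		while (!irq_wait_for_interrupt(action)) {
> 			/* test_and_clear_bit(IRQTF_AFFINITY) and
> 			 * set_cpus_allowed_ptr() happen in here, in
> 			 * thread context, with no desc->lock held */
> 			irq_thread_check_affinity(desc, action);
> 			...
> 		}
> 
> so the thread migrates itself the next time it wakes for an interrupt,
> which is where the extra latency comes from.)
> 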
> Marc, what do you think?
> 
> Paolo
> 

Hi Paolo,

With the above patch I'm still seeing the same circular dependency
warning. However, with Sean's patch I'm not seeing any warnings.

[  335.128640] ======================================================
[  335.128650] WARNING: possible circular locking dependency detected
[  335.128660] 6.19.0-rc2-dirty #21 Tainted: G            E
[  335.128668] ------------------------------------------------------
[  335.128675] CPU 96/KVM/28699 is trying to acquire lock:
[  335.128682] ff453c6b59bb18c0 (&irq_desc_lock_class){-.-.}-{2:2}, at: __irq_get_desc_lock+0x58/0xa0
[  335.128704]
               but task is already holding lock:
[  335.128712] ff453c8b0946e600 (&svm->ir_list_lock){....}-{2:2}, at: avic_pi_update_irte+0x147/0x270 [kvm_amd]
[  335.128732]
               which lock already depends on the new lock.

[  335.128742]
               the existing dependency chain (in reverse order) is:
[  335.128751]
               -> #4 (&svm->ir_list_lock){....}-{2:2}:
[  335.128760]        _raw_spin_lock_irqsave+0x4e/0xb0
[  335.128772]        __avic_vcpu_put+0x7a/0x150 [kvm_amd]
[  335.128783]        avic_vcpu_put+0x50/0x70 [kvm_amd]
[  335.128791]        svm_vcpu_put+0x38/0x70 [kvm_amd]
[  335.128800]        kvm_arch_vcpu_put+0x21b/0x330 [kvm]
[  335.128854]        kvm_sched_out+0x62/0x90 [kvm]
[  335.128893]        __schedule+0x8d3/0x1d10
[  335.128901]        __cond_resched+0x5c/0x80
[  335.128909]        __kmalloc_cache_noprof+0x3d7/0x730
[  335.128920]        kvm_hv_vcpu_init+0x48/0x260 [kvm]
[  335.128957]        kvm_hv_set_msr_common+0x5b/0x12a0 [kvm]
[  335.128988]        kvm_set_msr_common+0x468/0x1310 [kvm]
[  335.129019]        svm_set_msr+0x645/0x730 [kvm_amd]
[  335.129028]        __kvm_set_msr+0xa3/0x2f0 [kvm]
[  335.129066]        kvm_set_msr_ignored_check+0x23/0x1b0 [kvm]
[  335.129096]        do_set_msr+0x76/0xd0 [kvm]
[  335.129126]        msr_io+0xbe/0x1c0 [kvm]
[  335.129152]        kvm_arch_vcpu_ioctl+0x700/0x2090 [kvm]
[  335.129181]        kvm_vcpu_ioctl+0x632/0xc60 [kvm]
[  335.129215]        __x64_sys_ioctl+0xa5/0x100
[  335.129224]        x64_sys_call+0x1243/0x26b0
[  335.129234]        do_syscall_64+0x93/0x1470
[  335.129242]        entry_SYSCALL_64_after_hwframe+0x76/0x7e
[  335.129251]
               -> #3 (&rq->__lock){-.-.}-{2:2}:
[  335.129259]        _raw_spin_lock_nested+0x32/0x80
[  335.129267]        raw_spin_rq_lock_nested+0x22/0xa0
[  335.129276]        task_rq_lock+0x5f/0x150
[  335.129392]        cgroup_move_task+0x46/0x110
[  335.129494]        css_set_move_task+0xe1/0x240
[  335.129614]        cgroup_post_fork+0x98/0x2d0
[  335.129704]        copy_process+0x1ea8/0x2330
[  335.129795]        kernel_clone+0xa7/0x440
[  335.129883]        user_mode_thread+0x63/0x90
[  335.129970]        rest_init+0x28/0x200
[  335.130056]        start_kernel+0xae0/0xcd0
[  335.130144]        x86_64_start_reservations+0x18/0x30
[  335.130230]        x86_64_start_kernel+0xfd/0x150
[  335.130312]        common_startup_64+0x13e/0x141
[  335.130396]
               -> #2 (&p->pi_lock){-.-.}-{2:2}:
[  335.130554]        _raw_spin_lock_irqsave+0x4e/0xb0
[  335.130631]        try_to_wake_up+0x59/0xaa0
[  335.130709]        wake_up_process+0x15/0x30
[  335.130785]        create_worker+0x154/0x250
[  335.130862]        workqueue_init+0x414/0x760
[  335.130938]        kernel_init_freeable+0x24f/0x630
[  335.131013]        kernel_init+0x1b/0x200
[  335.131086]        ret_from_fork+0x344/0x3a0
[  335.131161]        ret_from_fork_asm+0x1a/0x30
[  335.131235]
               -> #1 (&pool->lock){-.-.}-{2:2}:
[  335.131375]        _raw_spin_lock+0x34/0x80
[  335.131448]        __queue_work+0xf4/0x740
[  335.131523]        queue_work_on+0x70/0xd0
[  335.131595]        irq_set_affinity_locked+0x13b/0x250
[  335.131669]        __irq_apply_affinity_hint+0xf8/0x120
[  335.131742]        mlx5_irq_alloc+0x28e/0x4e0 [mlx5_core]
[  335.131884]        mlx5_irq_request+0x125/0x140 [mlx5_core]
[  335.131997]        mlx5_irq_request_vector+0xb4/0x110 [mlx5_core]
[  335.132107]        comp_irq_request_pci+0x68/0xf0 [mlx5_core]
[  335.132219]        mlx5_comp_eqn_get+0x127/0x850 [mlx5_core]
[  335.132327]        mlx5e_create_cq+0x58/0x260 [mlx5_core]
[  335.132452]        mlx5e_open_drop_rq+0x11c/0x220 [mlx5_core]
[  335.132573]        mlx5e_init_nic_rx+0x2c/0x270 [mlx5_core]
[  335.132687]        mlx5e_attach_netdev+0xed/0x340 [mlx5_core]
[  335.132794]        _mlx5e_resume+0x6a/0xd0 [mlx5_core]
[  335.132900]        mlx5e_probe+0x5e3/0xa50 [mlx5_core]
[  335.133010]        auxiliary_bus_probe+0x45/0x90
[  335.133086]        really_probe+0xf1/0x410
[  335.133160]        __driver_probe_device+0x8c/0x190
[  335.133233]        driver_probe_device+0x24/0xd0
[  335.133305]        __device_attach_driver+0xcd/0x170
[  335.133377]        bus_for_each_drv+0x99/0x100
[  335.133450]        __device_attach+0xba/0x1f0
[  335.133523]        device_initial_probe+0x4e/0x50
[  335.133595]        bus_probe_device+0x3c/0xa0
[  335.133668]        device_add+0x6af/0x8a0
[  335.133740]        __auxiliary_device_add+0x43/0xc0
[  335.133813]        add_adev+0xd3/0x160 [mlx5_core]
[  335.133931]        mlx5_rescan_drivers_locked+0x1ee/0x340 [mlx5_core]
[  335.134045]        mlx5_register_device+0x37/0xb0 [mlx5_core]
[  335.134156]        mlx5_init_one_devl_locked+0x43e/0x710 [mlx5_core]
[  335.134267]        probe_one+0x35b/0x530 [mlx5_core]
[  335.134382]        local_pci_probe+0x47/0xb0
[  335.134460]        work_for_cpu_fn+0x1a/0x30
[  335.134538]        process_one_work+0x22b/0x6f0
[  335.134613]        worker_thread+0x1c6/0x3b0
[  335.134687]        kthread+0x110/0x230
[  335.134764]        ret_from_fork+0x344/0x3a0
[  335.134838]        ret_from_fork_asm+0x1a/0x30
[  335.134913]
               -> #0 (&irq_desc_lock_class){-.-.}-{2:2}:
[  335.135059]        __lock_acquire+0x1595/0x2640
[  335.135135]        lock_acquire+0xc4/0x2c0
[  335.135210]        _raw_spin_lock_irqsave+0x4e/0xb0
[  335.135286]        __irq_get_desc_lock+0x58/0xa0
[  335.135362]        irq_set_vcpu_affinity+0x4a/0x100
[  335.135438]        avic_pi_update_irte+0x170/0x270 [kvm_amd]
[  335.135521]        kvm_pi_update_irte+0xea/0x220 [kvm]
[  335.135633]        kvm_arch_irq_bypass_add_producer+0x9b/0xb0 [kvm]
[  335.135737]        __connect+0x5f/0x100 [irqbypass]
[  335.135815]        irq_bypass_register_producer+0xe4/0xb90 [irqbypass]
[  335.135895]        vfio_msi_set_vector_signal+0x1b0/0x330 [vfio_pci_core]
[  335.135979]        vfio_msi_set_block+0x5a/0xd0 [vfio_pci_core]
[  335.136060]        vfio_pci_set_msi_trigger+0x19e/0x260 [vfio_pci_core]
[  335.136142]        vfio_pci_set_irqs_ioctl+0x46/0x140 [vfio_pci_core]
[  335.136224]        vfio_pci_core_ioctl+0x6ea/0xc20 [vfio_pci_core]
[  335.136306]        vfio_device_fops_unl_ioctl+0xb1/0x9d0 [vfio]
[  335.136390]        __x64_sys_ioctl+0xa5/0x100
[  335.136470]        x64_sys_call+0x1243/0x26b0
[  335.136553]        do_syscall_64+0x93/0x1470
[  335.136632]        entry_SYSCALL_64_after_hwframe+0x76/0x7e
[  335.136712]
               other info that might help us debug this:

[  335.136942] Chain exists of:
                 &irq_desc_lock_class --> &rq->__lock --> &svm->ir_list_lock

[  335.137178]  Possible unsafe locking scenario:

[  335.137340]        CPU0                    CPU1
[  335.137420]        ----                    ----
[  335.137502]   lock(&svm->ir_list_lock);
[  335.137582]                                lock(&rq->__lock);
[  335.137664]                                lock(&svm->ir_list_lock);
[  335.137746]   lock(&irq_desc_lock_class);
[  335.137826]
                *** DEADLOCK ***

[  335.138057] 4 locks held by CPU 96/KVM/28699:
[  335.138136]  #0: ff453c6c28bae800 (&vdev->igate){+.+.}-{4:4}, at: vfio_pci_core_ioctl+0x6d2/0xc20 [vfio_pci_core]
[  335.138223]  #1: ffffffffc0faf110 (lock#10){+.+.}-{4:4}, at: irq_bypass_register_producer+0x31/0xb90 [irqbypass]
[  335.138310]  #2: ff453c8b11179478 (&kvm->irqfds.lock){....}-{3:3}, at: kvm_arch_irq_bypass_add_producer+0x2d/0xb0 [kvm]
[  335.138423]  #3: ff453c8b0946e600 (&svm->ir_list_lock){....}-{2:2}, at: avic_pi_update_irte+0x147/0x270 [kvm_amd]

-Ankit
