Message-ID: <20240903182555.1253466-1-kheib@redhat.com>
Date: Tue, 3 Sep 2024 14:25:55 -0400
From: Kamal Heib <kheib@...hat.com>
To: intel-wired-lan@...ts.osuosl.org
Cc: netdev@...r.kernel.org,
Tony Nguyen <anthony.l.nguyen@...el.com>,
Przemek Kitszel <przemyslaw.kitszel@...el.com>,
Ivan Vecera <ivecera@...hat.com>,
Michal Schmidt <mschmidt@...hat.com>,
Jakub Kicinski <kuba@...nel.org>,
"David S . Miller" <davem@...emloft.net>,
Paolo Abeni <pabeni@...hat.com>,
Kamal Heib <kheib@...hat.com>,
YangHang Liu <yanghliu@...hat.com>
Subject: [PATCH iwl-net] i40e: Fix trying to free already-freed IRQ
Avoid the following warning when trying to free an already-freed IRQ.
The issue happens when i40e_remove() is called twice from two different
contexts, which leads to calling i40e_vsi_free_irq() twice. Fix the
issue by using a flag to mark that the IRQ has already been freed.
i40e 0000:07:00.0: i40e_ptp_stop: removed PHC on enp7s0
------------[ cut here ]------------
Trying to free already-free IRQ 0
WARNING: CPU: 2 PID: 12 at kernel/irq/manage.c:1868 __free_irq+0x1e3/0x350
Modules linked in: nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 rfkill ip_set nf_tables nfnetlink vfat fat intel_rapl_msr intel_rapl_common kvm_amd ccp iTCO_wdt iTCO_vendor_support kvm i2c_i801 pcspkr i40e lpc_ich virtio_gpu i2c_smbus virtio_dma_buf drm_shmem_helper drm_kms_helper virtio_balloon joydev drm fuse xfs libcrc32c ahci crct10dif_pclmul libahci crc32_pclmul crc32c_intel virtio_net libata virtio_blk ghash_clmulni_intel net_failover virtio_console failover serio_raw dm_mirror dm_region_hash dm_log dm_mod
CPU: 2 PID: 12 Comm: kworker/u16:1 Kdump: loaded Not tainted 5.14.0-478.el9.x86_64 #1
Hardware name: Red Hat KVM/RHEL, BIOS edk2-20240524-1.el9 05/24/2024
Workqueue: kacpi_hotplug acpi_hotplug_work_fn
RIP: 0010:__free_irq+0x1e3/0x350
Code: 00 00 48 8b bb a8 01 00 00 e8 09 74 02 00 49 8b 7c 24 30 e8 8f 7c 1d 00 eb 35 8b 74 24 04 48 c7 c7 50 a3 61 92 e8 cd 99 f6 ff <0f> 0b 4c 89 fe 48 89 ef e8 30 aa b3 00 48 8b 43 40 48 8b 40 78 48
RSP: 0018:ffffb971c0077ac8 EFLAGS: 00010086
RAX: 0000000000000000 RBX: ffff8b594193ee00 RCX: 0000000000000027
RDX: 0000000000000027 RSI: 00000000ffff7fff RDI: ffff8b59bcf208c8
RBP: ffff8b594193eec4 R08: 0000000000000000 R09: ffffb971c0077970
R10: ffffb971c0077968 R11: ffffffff931e7c28 R12: ffff8b5944946000
R13: ffff8b594193ef80 R14: ffff8b5944946000 R15: 0000000000000246
FS: 0000000000000000(0000) GS:ffff8b59bcf00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f11eb064000 CR3: 000000000ad40004 CR4: 0000000000770ef0
PKRU: 55555554
Call Trace:
<TASK>
? srso_alias_return_thunk+0x5/0xfbef5
? show_trace_log_lvl+0x26e/0x2df
? show_trace_log_lvl+0x26e/0x2df
? free_irq+0x33/0x70
? __free_irq+0x1e3/0x350
? __warn+0x7e/0xd0
? __free_irq+0x1e3/0x350
? report_bug+0x100/0x140
? srso_alias_return_thunk+0x5/0xfbef5
? handle_bug+0x3c/0x70
? exc_invalid_op+0x14/0x70
? asm_exc_invalid_op+0x16/0x20
? __free_irq+0x1e3/0x350
? __free_irq+0x1e3/0x350
free_irq+0x33/0x70
i40e_vsi_free_irq+0x19e/0x220 [i40e]
i40e_vsi_close+0x2b/0xc0 [i40e]
i40e_close+0x11/0x20 [i40e]
__dev_close_many+0x9e/0x110
dev_close_many+0x8b/0x140
? srso_alias_return_thunk+0x5/0xfbef5
? free_pcppages_bulk+0xee/0x290
unregister_netdevice_many_notify+0x162/0x690
? srso_alias_return_thunk+0x5/0xfbef5
? free_unref_page_commit+0x19a/0x310
unregister_netdevice_queue+0xd3/0x110
unregister_netdev+0x18/0x20
i40e_vsi_release+0x84/0x2e0 [i40e]
? srso_alias_return_thunk+0x5/0xfbef5
i40e_remove+0x15c/0x430 [i40e]
pci_device_remove+0x3e/0xb0
device_release_driver_internal+0x193/0x200
pci_stop_bus_device+0x6c/0x90
pci_stop_and_remove_bus_device+0xe/0x20
disable_slot+0x49/0x90
acpiphp_disable_and_eject_slot+0x15/0x90
hotplug_event+0xea/0x210
? __pfx_acpiphp_hotplug_notify+0x10/0x10
acpiphp_hotplug_notify+0x22/0x80
? __pfx_acpiphp_hotplug_notify+0x10/0x10
acpi_device_hotplug+0xb8/0x210
acpi_hotplug_work_fn+0x1a/0x30
process_one_work+0x197/0x380
worker_thread+0x2fe/0x410
? __pfx_worker_thread+0x10/0x10
kthread+0xe0/0x100
? __pfx_kthread+0x10/0x10
ret_from_fork+0x2c/0x50
</TASK>
---[ end trace 0000000000000000 ]---
Fixes: 41c445ff0f48 ("i40e: main driver core")
Tested-by: YangHang Liu <yanghliu@...hat.com>
Signed-off-by: Kamal Heib <kheib@...hat.com>
---
drivers/net/ethernet/intel/i40e/i40e.h | 1 +
drivers/net/ethernet/intel/i40e/i40e_main.c | 8 ++++++++
2 files changed, 9 insertions(+)
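
For context, below is a minimal userspace sketch (not part of the patch)
of the guard-flag pattern applied here: a boolean records whether the
legacy MSI/INTx IRQ is live, so a second teardown becomes a no-op instead
of reaching free_irq() twice. All demo_-prefixed names are hypothetical
stand-ins, and request_irq()/free_irq() are replaced with printf() so the
sketch compiles and runs on its own:

#include <stdbool.h>
#include <stdio.h>

struct demo_vsi {
	bool legacy_msi_irq_ready;	/* mirrors the new VSI flag */
};

static int demo_request_irq(struct demo_vsi *vsi)
{
	/* stand-in for request_irq(); assume it succeeded */
	vsi->legacy_msi_irq_ready = true;
	printf("IRQ requested\n");
	return 0;
}

static void demo_free_irq(struct demo_vsi *vsi)
{
	if (!vsi->legacy_msi_irq_ready)
		return;		/* already freed: skip, no WARN */

	vsi->legacy_msi_irq_ready = false;
	printf("IRQ freed once\n");	/* stand-in for free_irq() */
}

int main(void)
{
	struct demo_vsi vsi = { .legacy_msi_irq_ready = false };

	demo_request_irq(&vsi);
	demo_free_irq(&vsi);	/* frees the IRQ */
	demo_free_irq(&vsi);	/* second call is now harmless */
	return 0;
}

The patch applies the same make-free-idempotent idea to the non-MSI-X
branch of i40e_vsi_free_irq(): the flag is set only after a successful
request in i40e_vsi_request_irq() and cleared before the single
free_irq() call.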
diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
index d546567e0286..910415116995 100644
--- a/drivers/net/ethernet/intel/i40e/i40e.h
+++ b/drivers/net/ethernet/intel/i40e/i40e.h
@@ -865,6 +865,7 @@ struct i40e_vsi {
int num_q_vectors;
int base_vector;
bool irqs_ready;
+ bool legacy_msi_irq_ready;
u16 seid; /* HW index of this VSI (absolute index) */
u16 id; /* VSI number */
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index cbcfada7b357..b39004a42df2 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -4630,6 +4630,9 @@ static int i40e_vsi_request_irq(struct i40e_vsi *vsi, char *basename)
if (err)
dev_info(&pf->pdev->dev, "request_irq failed, Error %d\n", err);
+ if (!test_bit(I40E_FLAG_MSIX_ENA, pf->flags) && !err)
+ vsi->legacy_msi_irq_ready = true;
+
return err;
}
@@ -5061,6 +5064,10 @@ static void i40e_vsi_free_irq(struct i40e_vsi *vsi)
}
}
} else {
+ if (!vsi->legacy_msi_irq_ready)
+ return;
+
+ vsi->legacy_msi_irq_ready = false;
free_irq(pf->pdev->irq, pf);
val = rd32(hw, I40E_PFINT_LNKLST0);
@@ -11519,6 +11526,7 @@ static int i40e_vsi_mem_alloc(struct i40e_pf *pf, enum i40e_vsi_type type)
vsi->work_limit = I40E_DEFAULT_IRQ_WORK;
hash_init(vsi->mac_filter_hash);
vsi->irqs_ready = false;
+ vsi->legacy_msi_irq_ready = false;
if (type == I40E_VSI_MAIN) {
vsi->af_xdp_zc_qps = bitmap_zalloc(pf->num_lan_qps, GFP_KERNEL);
--
2.46.0