Message-ID: <20251024204746.3092277-4-anthony.l.nguyen@intel.com>
Date: Fri, 24 Oct 2025 13:47:38 -0700
From: Tony Nguyen <anthony.l.nguyen@...el.com>
To: davem@...emloft.net,
kuba@...nel.org,
pabeni@...hat.com,
edumazet@...gle.com,
andrew+netdev@...n.ch,
netdev@...r.kernel.org
Cc: Przemek Kitszel <przemyslaw.kitszel@...el.com>,
anthony.l.nguyen@...el.com,
jacob.e.keller@...el.com,
mschmidt@...hat.com,
poros@...hat.com,
horms@...nel.org,
Aleksandr Loktionov <aleksandr.loktionov@...el.com>,
Rinitha S <sx.rinitha@...el.com>
Subject: [PATCH net-next 3/9] ice: move ice_init_interrupt_scheme() prior to ice_init_pf()
From: Przemek Kitszel <przemyslaw.kitszel@...el.com>

Move ice_init_interrupt_scheme() prior to ice_init_pf().

To enable the move, ice_set_pf_caps() was moved out of ice_init_pf()
into the caller (ice_init_dev()) and placed prior to the IRQ scheme
init.

The move makes the deinit order of ice_deinit_dev() and the failure
path of ice_init_pf() match (at least in terms of no longer calling
ice_clear_interrupt_scheme() and ice_deinit_pf() in opposite orders).

The new order aligns with the findings made by Jakub Buchocki in
commit 24b454bc354a ("ice: Fix ice module unload").
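
For illustration only, here is a minimal, self-contained C sketch (not
driver code; stub names such as init_dev() and init_pf() are simplified
stand-ins for the ice_* helpers in the diff below) of the init/unwind
ordering this patch establishes: the interrupt scheme comes up first,
and the error labels unwind in the reverse order of init.

#include <stdio.h>

/* Stubs standing in for the real ice_* calls; init_pf() is forced to
 * fail so the unwind path below gets exercised.
 */
static int init_interrupt_scheme(void) { puts("init interrupt scheme"); return 0; }
static int init_pf(void) { puts("init pf (simulated failure)"); return -1; }
static int req_irq_msix_misc(void) { puts("request misc vector"); return 0; }
static void deinit_pf(void) { puts("deinit pf"); }
static void service_task_stop(void) { puts("stop service task"); }
static void clear_interrupt_scheme(void) { puts("clear interrupt scheme"); }

static int init_dev(void)
{
        int err;

        err = init_interrupt_scheme();          /* 1st: interrupt scheme */
        if (err)
                return err;                     /* nothing to unwind yet */

        /* the real driver starts the service task at this point */

        err = init_pf();                        /* 2nd: PF state */
        if (err)
                goto unroll_irq_scheme_init;

        err = req_irq_msix_misc();              /* 3rd: misc vector */
        if (err)
                goto unroll_pf_init;

        return 0;

unroll_pf_init:
        deinit_pf();
unroll_irq_scheme_init:
        service_task_stop();
        clear_interrupt_scheme();               /* set up first, torn down last */
        return err;
}

int main(void)
{
        return init_dev() ? 1 : 0;
}

With init_pf() failing, the sketch unwinds through
unroll_irq_scheme_init only: it stops the service task and clears the
interrupt scheme without touching deinit_pf(), which is the label
ordering the diff below encodes.
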
Signed-off-by: Przemek Kitszel <przemyslaw.kitszel@...el.com>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@...el.com>
Tested-by: Rinitha S <sx.rinitha@...el.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@...el.com>
---
 drivers/net/ethernet/intel/ice/ice_main.c | 25 ++++++++++-------------
 1 file changed, 11 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index f9e464b79bca..e00c282a8c18 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -4043,8 +4043,6 @@ void ice_start_service_task(struct ice_pf *pf)
*/
static int ice_init_pf(struct ice_pf *pf)
{
- ice_set_pf_caps(pf);
-
mutex_init(&pf->sw_mutex);
mutex_init(&pf->tc_mutex);
mutex_init(&pf->adev_mutex);
@@ -4746,11 +4744,18 @@ int ice_init_dev(struct ice_pf *pf)
ice_set_safe_mode_caps(hw);
}
+ ice_set_pf_caps(pf);
+ err = ice_init_interrupt_scheme(pf);
+ if (err) {
+ dev_err(dev, "ice_init_interrupt_scheme failed: %d\n", err);
+ return -EIO;
+ }
+
ice_start_service_task(pf);
err = ice_init_pf(pf);
if (err) {
dev_err(dev, "ice_init_pf failed: %d\n", err);
- return err;
+ goto unroll_irq_scheme_init;
}
pf->hw.udp_tunnel_nic.set_port = ice_udp_tunnel_set_port;
@@ -4768,14 +4773,6 @@ int ice_init_dev(struct ice_pf *pf)
pf->hw.udp_tunnel_nic.tables[1].tunnel_types =
UDP_TUNNEL_TYPE_GENEVE;
}
-
- err = ice_init_interrupt_scheme(pf);
- if (err) {
- dev_err(dev, "ice_init_interrupt_scheme failed: %d\n", err);
- err = -EIO;
- goto unroll_pf_init;
- }
-
/* In case of MSIX we are going to setup the misc vector right here
* to handle admin queue events etc. In case of legacy and MSI
* the misc functionality and queue processing is combined in
@@ -4784,16 +4781,16 @@ int ice_init_dev(struct ice_pf *pf)
err = ice_req_irq_msix_misc(pf);
if (err) {
dev_err(dev, "setup of misc vector failed: %d\n", err);
- goto unroll_irq_scheme_init;
+ goto unroll_pf_init;
}
return 0;
-unroll_irq_scheme_init:
- ice_clear_interrupt_scheme(pf);
unroll_pf_init:
ice_deinit_pf(pf);
+unroll_irq_scheme_init:
ice_service_task_stop(pf);
+ ice_clear_interrupt_scheme(pf);
return err;
}
--
2.47.1