Message-ID: <20231107004844.655549-3-anthony.l.nguyen@intel.com>
Date: Mon, 6 Nov 2023 16:48:40 -0800
From: Tony Nguyen <anthony.l.nguyen@...el.com>
To: davem@...emloft.net, kuba@...nel.org, pabeni@...hat.com,
	edumazet@...gle.com, netdev@...r.kernel.org
Cc: Michal Schmidt <mschmidt@...hat.com>, anthony.l.nguyen@...el.com,
	daniel.machon@...rochip.com, Wojciech Drewek <wojciech.drewek@...el.com>,
	Pucha Himasekhar Reddy <himasekharx.reddy.pucha@...el.com>,
	Simon Horman <horms@...nel.org>
Subject: [PATCH net 2/4] ice: lag: in RCU, use atomic allocation

From: Michal Schmidt <mschmidt@...hat.com>

Sleeping is not allowed in RCU read-side critical sections. Use atomic
allocations under rcu_read_lock.

Fixes: 1e0f9881ef79 ("ice: Flesh out implementation of support for SRIOV on bonded interface")
Fixes: 41ccedf5ca8f ("ice: implement lag netdev event handler")
Fixes: 3579aa86fb40 ("ice: update reset path for SRIOV LAG support")
Signed-off-by: Michal Schmidt <mschmidt@...hat.com>
Reviewed-by: Wojciech Drewek <wojciech.drewek@...el.com>
Tested-by: Pucha Himasekhar Reddy <himasekharx.reddy.pucha@...el.com> (A Contingent worker at Intel)
Reviewed-by: Simon Horman <horms@...nel.org>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@...el.com>
---
 drivers/net/ethernet/intel/ice/ice_lag.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_lag.c b/drivers/net/ethernet/intel/ice/ice_lag.c
index 95e46bde54fe..cd065ec48c87 100644
--- a/drivers/net/ethernet/intel/ice/ice_lag.c
+++ b/drivers/net/ethernet/intel/ice/ice_lag.c
@@ -628,7 +628,7 @@ void ice_lag_move_new_vf_nodes(struct ice_vf *vf)
 	INIT_LIST_HEAD(&ndlist.node);
 	rcu_read_lock();
 	for_each_netdev_in_bond_rcu(lag->upper_netdev, tmp_nd) {
-		nl = kzalloc(sizeof(*nl), GFP_KERNEL);
+		nl = kzalloc(sizeof(*nl), GFP_ATOMIC);
 		if (!nl)
 			break;
 
@@ -1692,7 +1692,7 @@ ice_lag_event_handler(struct notifier_block *notif_blk, unsigned long event,
 
 	rcu_read_lock();
 	for_each_netdev_in_bond_rcu(upper_netdev, tmp_nd) {
-		nd_list = kzalloc(sizeof(*nd_list), GFP_KERNEL);
+		nd_list = kzalloc(sizeof(*nd_list), GFP_ATOMIC);
 		if (!nd_list)
 			break;
 
@@ -2069,7 +2069,7 @@ void ice_lag_rebuild(struct ice_pf *pf)
 	INIT_LIST_HEAD(&ndlist.node);
 	rcu_read_lock();
 	for_each_netdev_in_bond_rcu(lag->upper_netdev, tmp_nd) {
-		nl = kzalloc(sizeof(*nl), GFP_KERNEL);
+		nl = kzalloc(sizeof(*nl), GFP_ATOMIC);
 		if (!nl)
 			break;
 
-- 
2.41.0
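
For readers unfamiliar with the constraint the patch addresses:
rcu_read_lock() begins a non-sleeping (atomic) context, and a GFP_KERNEL
allocation may enter direct reclaim and sleep, whereas GFP_ATOMIC never
sleeps but may fail under memory pressure, so callers must tolerate NULL.
The sketch below is illustrative only and not part of the patch; the
struct and function names (item, alloc_item_under_rcu) are hypothetical.

	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct item {
		int value;
	};

	/* Hypothetical helper, for illustration only. */
	static struct item *alloc_item_under_rcu(void)
	{
		struct item *it;

		rcu_read_lock();
		/*
		 * GFP_KERNEL here could invoke direct reclaim and
		 * schedule(), which is illegal in an RCU read-side
		 * critical section (CONFIG_DEBUG_ATOMIC_SLEEP would
		 * report "sleeping function called from invalid
		 * context"). GFP_ATOMIC never sleeps; it may return
		 * NULL instead, so the caller must handle failure,
		 * as the "if (!nl) break;" in the patch does.
		 */
		it = kzalloc(sizeof(*it), GFP_ATOMIC);
		rcu_read_unlock();

		return it;
	}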