Message-ID: <MW4PR11MB5776107A0FF563D50E011CE0FDD8A@MW4PR11MB5776.namprd11.prod.outlook.com>
Date: Mon, 23 Oct 2023 11:07:55 +0000
From: "Drewek, Wojciech" <wojciech.drewek@...el.com>
To: mschmidt <mschmidt@...hat.com>, "netdev@...r.kernel.org"
<netdev@...r.kernel.org>
CC: "Ertman, David M" <david.m.ertman@...el.com>, Daniel Machon
<daniel.machon@...rochip.com>, "Nguyen, Anthony L"
<anthony.l.nguyen@...el.com>, "Brandeburg, Jesse"
<jesse.brandeburg@...el.com>, "intel-wired-lan@...ts.osuosl.org"
<intel-wired-lan@...ts.osuosl.org>
Subject: RE: [PATCH net] ice: lag: in RCU, use atomic allocation
> -----Original Message-----
> From: Michal Schmidt <mschmidt@...hat.com>
> Sent: Monday, October 23, 2023 1:00 PM
> To: netdev@...r.kernel.org
> Cc: Ertman, David M <david.m.ertman@...el.com>; Daniel Machon <daniel.machon@...rochip.com>; Nguyen, Anthony L
> <anthony.l.nguyen@...el.com>; Brandeburg, Jesse <jesse.brandeburg@...el.com>; intel-wired-lan@...ts.osuosl.org
> Subject: [PATCH net] ice: lag: in RCU, use atomic allocation
>
> Sleeping is not allowed in RCU read-side critical sections.
> Use atomic allocations under rcu_read_lock.
>
> Fixes: 1e0f9881ef79 ("ice: Flesh out implementation of support for SRIOV on bonded interface")
> Fixes: 41ccedf5ca8f ("ice: implement lag netdev event handler")
> Fixes: 3579aa86fb40 ("ice: update reset path for SRIOV LAG support")
> Signed-off-by: Michal Schmidt <mschmidt@...hat.com>
Thanks Michal
Reviewed-by: Wojciech Drewek <wojciech.drewek@...el.com>
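
For anyone skimming the thread, the rule the patch enforces can be sketched roughly as below (kernel-style C, not a standalone program; the surrounding function and variable names are illustrative, only kzalloc(), the GFP flags, and the rcu_read_lock()/rcu_read_unlock() pair are real kernel APIs). GFP_KERNEL allocations may sleep to let the allocator reclaim memory, and sleeping is forbidden inside an RCU read-side critical section; GFP_ATOMIC never sleeps but is more likely to fail, so the error path must be kept:

```c
/* Illustrative sketch of the pattern being fixed; not compilable on
 * its own. Between rcu_read_lock() and rcu_read_unlock() the code
 * runs in an RCU read-side critical section and must not sleep.
 */
rcu_read_lock();
for_each_netdev_in_bond_rcu(upper_netdev, tmp_nd) {
	/* GFP_KERNEL may block on memory reclaim -> illegal here.
	 * GFP_ATOMIC never sleeps, but can fail under pressure,
	 * so the !nl check below still matters.
	 */
	nl = kzalloc(sizeof(*nl), GFP_ATOMIC);
	if (!nl)
		break;
	/* ... queue nl on a local list for processing after unlock ... */
}
rcu_read_unlock();
```

This matches what the three hunks below do: the allocation flag changes, while the loop structure and failure handling stay the same.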
> ---
> drivers/net/ethernet/intel/ice/ice_lag.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/ice/ice_lag.c b/drivers/net/ethernet/intel/ice/ice_lag.c
> index 7b1256992dcf..33f01420eece 100644
> --- a/drivers/net/ethernet/intel/ice/ice_lag.c
> +++ b/drivers/net/ethernet/intel/ice/ice_lag.c
> @@ -595,7 +595,7 @@ void ice_lag_move_new_vf_nodes(struct ice_vf *vf)
> INIT_LIST_HEAD(&ndlist.node);
> rcu_read_lock();
> for_each_netdev_in_bond_rcu(lag->upper_netdev, tmp_nd) {
> - nl = kzalloc(sizeof(*nl), GFP_KERNEL);
> + nl = kzalloc(sizeof(*nl), GFP_ATOMIC);
> if (!nl)
> break;
>
> @@ -1672,7 +1672,7 @@ ice_lag_event_handler(struct notifier_block *notif_blk, unsigned long event,
>
> rcu_read_lock();
> for_each_netdev_in_bond_rcu(upper_netdev, tmp_nd) {
> - nd_list = kzalloc(sizeof(*nd_list), GFP_KERNEL);
> + nd_list = kzalloc(sizeof(*nd_list), GFP_ATOMIC);
> if (!nd_list)
> break;
>
> @@ -2046,7 +2046,7 @@ void ice_lag_rebuild(struct ice_pf *pf)
> INIT_LIST_HEAD(&ndlist.node);
> rcu_read_lock();
> for_each_netdev_in_bond_rcu(lag->upper_netdev, tmp_nd) {
> - nl = kzalloc(sizeof(*nl), GFP_KERNEL);
> + nl = kzalloc(sizeof(*nl), GFP_ATOMIC);
> if (!nl)
> break;
>
> --
> 2.41.0
>