Message-ID: <20230504075709.GS525452@unreal>
Date: Thu, 4 May 2023 10:57:09 +0300
From: Leon Romanovsky <leon@...nel.org>
To: "Chittim, Madhu" <madhu.chittim@...el.com>
Cc: Ding Hui <dinghui@...gfor.com.cn>, davem@...emloft.net,
edumazet@...gle.com, kuba@...nel.org, pabeni@...hat.com,
intel-wired-lan@...ts.osuosl.org, jesse.brandeburg@...el.com,
anthony.l.nguyen@...el.com, keescook@...omium.org,
grzegorzx.szczurek@...el.com, mateusz.palczewski@...el.com,
mitch.a.williams@...el.com, gregory.v.rose@...el.com,
jeffrey.t.kirsher@...el.com, michal.kubiak@...el.com,
simon.horman@...igine.com, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-hardening@...r.kernel.org,
pengdonglin@...gfor.com.cn, huangcun@...gfor.com.cn
Subject: Re: [PATCH net v4 2/2] iavf: Fix out-of-bounds when setting channels
on remove
On Wed, May 03, 2023 at 12:22:00PM -0700, Chittim, Madhu wrote:
>
>
> On 5/3/2023 9:29 AM, Leon Romanovsky wrote:
> > On Wed, May 03, 2023 at 10:00:49PM +0800, Ding Hui wrote:
> > > On 2023/5/3 4:24 PM, Leon Romanovsky wrote:
> > > > On Wed, May 03, 2023 at 11:15:41AM +0800, Ding Hui wrote:
> > >
> > > > >
> > > > > If we detect that a remove is in progress, we can avoid unnecessary
> > > > > waiting and return an error sooner.
> > > > >
> > > > > On the other hand, in the timeout handling we should keep the original
> > > > > num_active_queues and reset num_req_queues to 0.
> > > > >
> > > > > Fixes: 4e5e6b5d9d13 ("iavf: Fix return of set the new channel count")
> > > > > Signed-off-by: Ding Hui <dinghui@...gfor.com.cn>
> > > > > Cc: Donglin Peng <pengdonglin@...gfor.com.cn>
> > > > > Cc: Huang Cun <huangcun@...gfor.com.cn>
> > > > > Reviewed-by: Simon Horman <simon.horman@...igine.com>
> > > > > Reviewed-by: Michal Kubiak <michal.kubiak@...el.com>
> > > > > ---
> > > > > v3 to v4:
> > > > > - nothing changed
> > > > >
> > > > > v2 to v3:
> > > > > - fix review tag
> > > > >
> > > > > v1 to v2:
> > > > > - add reproduction script
> > > > >
> > > > > ---
> > > > > drivers/net/ethernet/intel/iavf/iavf_ethtool.c | 4 +++-
> > > > > 1 file changed, 3 insertions(+), 1 deletion(-)
> > > > >
> > > > > diff --git a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
> > > > > index 6f171d1d85b7..d8a3c0cfedd0 100644
> > > > > --- a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
> > > > > +++ b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
> > > > > @@ -1857,13 +1857,15 @@ static int iavf_set_channels(struct net_device *netdev,
> > > > > /* wait for the reset is done */
> > > > > for (i = 0; i < IAVF_RESET_WAIT_COMPLETE_COUNT; i++) {
> > > > > msleep(IAVF_RESET_WAIT_MS);
> > > > > + if (test_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section))
> > > > > + return -EOPNOTSUPP;
> > > >
> > > > This makes no sense without locking, as a change to __IAVF_IN_REMOVE_TASK
> > > > can happen at any time.
> > > >
> > >
> > > The state doesn't need to be that precise here; it only optimizes the
> > > fast path. During the lifetime of the adapter, the __IAVF_IN_REMOVE_TASK
> > > bit is only ever set, never cleared.
> > >
> > > If we don't detect the "removing" state, we can still fall back to the
> > > timeout handling.
> > >
> > > So I don't think locking is necessary here. What do the maintainers
> > > at Intel think?
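
[For reference, a minimal sketch of the behaviour described above, not the
exact iavf code: the flag IAVF_FLAG_RESET_PENDING and the local
orig_num_active are assumptions used only for illustration.]

	/* Sketch: wait for the reset, but bail out early if a remove is
	 * already in progress, since that reset will never complete.
	 */
	u32 orig_num_active = adapter->num_active_queues;

	for (i = 0; i < IAVF_RESET_WAIT_COMPLETE_COUNT; i++) {
		msleep(IAVF_RESET_WAIT_MS);
		if (test_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section))
			return -EOPNOTSUPP;
		if (!(adapter->flags & IAVF_FLAG_RESET_PENDING))
			break;
	}

	if (i == IAVF_RESET_WAIT_COMPLETE_COUNT) {
		/* Timeout: keep the original active queue count and drop
		 * the request so later teardown does not index past the
		 * rings that were actually allocated.
		 */
		adapter->num_active_queues = orig_num_active;
		adapter->num_req_queues = 0;
		return -EOPNOTSUPP;
	}
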
> >
> > I'm not an Intel maintainer, but your change, your explanation, and the
> > following line from your commit message aren't really aligned.
> >
> > [ 3510.400799] ==================================================================
> > [ 3510.400820] BUG: KASAN: slab-out-of-bounds in iavf_free_all_tx_resources+0x156/0x160 [iavf]
> >
> >
>
> __IAVF_IN_REMOVE_TASK is set only in iavf_remove(), and the above change
> is OK in terms of bailing out of iavf_set_channels() early enough while a
> remove is in progress.
It is not: the __IAVF_IN_REMOVE_TASK bit can be set at any time during
iavf_set_channels(), and if it cannot, I would expect the test_bit(..) to be
placed at the beginning of iavf_set_channels() or even earlier.
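
[For illustration, a minimal sketch of that placement, not a tested patch;
apart from the identifiers already discussed in this thread, the details
below are assumptions.]

static int iavf_set_channels(struct net_device *netdev,
			     struct ethtool_channels *ch)
{
	struct iavf_adapter *adapter = netdev_priv(netdev);

	/* Refuse the operation up front if the remove task already owns
	 * the adapter, before any queue counts are touched or a reset is
	 * scheduled.
	 */
	if (test_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section))
		return -EOPNOTSUPP;

	/* ... existing validation, num_req_queues update and reset wait ... */
}
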
Thanks