Message-ID: <1380657915.22910.7.camel@jekeller-desk1.amr.corp.intel.com>
Date: Tue, 1 Oct 2013 20:05:15 +0000
From: "Keller, Jacob E" <jacob.e.keller@...el.com>
To: Yuval Mintz <yuvalmin@...adcom.com>
CC: Francois Romieu <romieu@...zoreil.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"Duyck, Alexander H" <alexander.h.duyck@...el.com>,
Hyong-Youb Kim <hykim@...i.com>,
Dmitry Kravkov <dmitry@...adcom.com>,
"Amir Vadai" <amirv@...lanox.com>,
Eliezer Tamir <eliezer.tamir@...ux.intel.com>
Subject: Re: [PATCH net RFC 2/2] ixgbe: fix sleep bug caused by napi_disable
inside local_bh_disable()d context
On Tue, 2013-10-01 at 12:11 +0000, Yuval Mintz wrote:
> > > > I have to move the local_bh_disable in order to put napi_disable
> > > > outside of the call since napi_disable could sleep, causing a
> > > > scheduling while atomic BUG.
> > >
> > > I am in violent agreement with this part.
> > > --
> > > Ueimor
> >
> > Regards,
> > Jake
> > --
>
> It seem like we've hit the same issue with the bnx2x driver.
> Is there anything new about the RFC?
>
> Thanks,
> Yuval
The might_sleep() issue in napi_disable() is the same; the solution in the
ixgbe driver is different. I completely re-wrote the segment that disables
the q_vector so that it adds a new state, rather than abusing the
QV_LOCKED_NAPI state. In addition, I refactored qv_lock_napi to use
spin_lock_bh() instead of plain spin_lock(), which removed the need for
the local_bh_disable() call in ixgbe_napi_disable_all.

This is a much cleaner solution than what I originally proposed.
Regards,
Jake