Message-ID: <20090324154717.GA7506@elte.hu>
Date: Tue, 24 Mar 2009 16:47:17 +0100
From: Ingo Molnar <mingo@...e.hu>
To: Herbert Xu <herbert@...dor.apana.org.au>
Cc: Robert Schwebel <r.schwebel@...gutronix.de>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Frank Blaschka <blaschka@...ux.vnet.ibm.com>,
"David S. Miller" <davem@...emloft.net>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
kernel@...gutronix.de
Subject: Re: Revert "gro: Fix legacy path napi_complete crash", (was: Re:
Linux 2.6.29)

* Ingo Molnar <mingo@...e.hu> wrote:
>
> * Herbert Xu <herbert@...dor.apana.org.au> wrote:
>
> > On Tue, Mar 24, 2009 at 03:39:42PM +0100, Ingo Molnar wrote:
> > >
> > > Subject: [PATCH] net: Fix netpoll lockup in legacy receive path
> >
> > Actually, this patch is still racy. If some interrupt comes in
> > and we suddenly get the maximum amount of backlog we can still
> > hang when we call __napi_complete incorrectly. It's unlikely
> > but we certainly shouldn't allow that. Here's a better version.
> >
> > net: Fix netpoll lockup in legacy receive path
>
> ok - i'm testing with this now.
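
[Editor's note: the race Herbert describes is between the softirq draining the
backlog queue and an interrupt refilling it - if the "queue is empty" test and
the NAPI completion are not atomic with respect to the enqueueing interrupt, a
packet can slip in between them and the device stalls with work pending. The
following is a toy user-space model of that ordering only, with a mutex
standing in for local_irq_disable() and all names (fake_irq_lock, pkt_queue,
napi_scheduled, netif_rx_model, process_backlog_model) invented for this
sketch - it is not the actual patch under discussion:]

```c
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

#define QUEUE_MAX 8

/* The mutex models local_irq_disable()/local_irq_enable(): whoever
 * holds it excludes the "interrupt" side. */
static pthread_mutex_t fake_irq_lock = PTHREAD_MUTEX_INITIALIZER;
static int pkt_queue[QUEUE_MAX];
static size_t head, tail, count;
static bool napi_scheduled = true;

/* "Hardirq" side: enqueue a packet and mark the poller as scheduled. */
static bool netif_rx_model(int pkt)
{
	bool queued = false;

	pthread_mutex_lock(&fake_irq_lock);
	if (count < QUEUE_MAX) {
		pkt_queue[tail] = pkt;
		tail = (tail + 1) % QUEUE_MAX;
		count++;
		napi_scheduled = true;
		queued = true;
	}
	pthread_mutex_unlock(&fake_irq_lock);
	return queued;
}

/* "Softirq" side: the safe ordering - the emptiness test and the
 * completion happen inside one critical section, so no packet can be
 * enqueued between "queue is empty" and "stop polling". */
static int process_backlog_model(int budget)
{
	int done = 0;

	while (done < budget) {
		int pkt;

		pthread_mutex_lock(&fake_irq_lock);
		if (count == 0) {
			napi_scheduled = false;	/* completion analogue */
			pthread_mutex_unlock(&fake_irq_lock);
			break;
		}
		pkt = pkt_queue[head];
		head = (head + 1) % QUEUE_MAX;
		count--;
		pthread_mutex_unlock(&fake_irq_lock);

		(void)pkt;	/* packet delivery would happen here */
		done++;
	}
	return done;
}
```

The invariant of the model is that napi_scheduled can only go false in a
critical section that also observed the queue empty; a completion done outside
that section is exactly the incorrect __napi_complete() the thread is about.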

test failure on one of the boxes - the interface got stuck after ~100K
packets:

 eth1      Link encap:Ethernet  HWaddr 00:13:D4:DC:41:12
           inet addr:10.0.1.13  Bcast:10.0.1.255  Mask:255.255.255.0
           inet6 addr: fe80::213:d4ff:fedc:4112/64 Scope:Link
           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
           RX packets:22555 errors:0 dropped:0 overruns:0 frame:0
           TX packets:1897 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:1000
           RX bytes:2435071 (2.3 MiB)  TX bytes:503790 (491.9 KiB)
           Interrupt:11 Base address:0x4000

i'm going back to your previous version for now - it might still be
racy, but it worked well for about 1.5 hours of test-time.

	Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/