Message-Id: <1216891625.7257.261.camel@twins>
Date: Thu, 24 Jul 2008 11:27:05 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: David Miller <davem@...emloft.net>
Cc: jarkao2@...il.com, Larry.Finger@...inger.net, kaber@...sh.net,
torvalds@...ux-foundation.org, akpm@...ux-foundation.org,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-wireless@...r.kernel.org, mingo@...hat.com,
nickpiggin@...oo.com.au, paulmck@...ux.vnet.ibm.com
Subject: Re: Kernel WARNING: at net/core/dev.c:1330
__netif_schedule+0x2c/0x98()
On Thu, 2008-07-24 at 02:20 -0700, David Miller wrote:
> From: Peter Zijlstra <peterz@...radead.org>
> Date: Thu, 24 Jul 2008 11:10:48 +0200
>
> > Ok, then how about something like this: the idea is to wrap the per-tx
> > lock with a read lock of the device and let netif_tx_lock() be the
> > write side, therefore excluding all device locks, but without incurring
> > the cacheline bouncing on the read side, by using per-cpu counters like
> > RCU does.
> >
> > This of course requires that netif_tx_lock() is rare, otherwise stuff
> > will go bounce anyway...
> >
> > I've probably missed a few details, but I think the below ought to show
> > the idea...
>
> Thanks for the effort, but I don't think we can seriously consider
> this.
>
> This lock is taken for every packet transmitted by the system; adding
> another memory reference (the RCU deref) and a counter bump is just
> not something we can add to placate lockdep. We're going through
> all of this effort to separate the TX locking into individual
> queues; it would be silly to go back and make it more expensive.
Well, it's not only lockdep; taking a very large number of locks is
expensive as well.
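
For illustration, here is a rough user-space sketch of the per-cpu
counter scheme I was describing above (untested, all names made up, and
using C11 atomics instead of the kernel primitives): the hot read side
only bumps a counter local to the current CPU, while the rare write side
closes a gate and waits for every per-cpu counter to drain, much like a
big-reader lock.

/*
 * Sketch only, not the patch: readers touch a per-CPU slot instead of a
 * shared lock word, so the common (per-packet TX) path does not bounce a
 * cacheline; the rare writer (netif_tx_lock()-style exclusion) flips a
 * gate and then waits for every per-slot counter to drain.
 */
#define _GNU_SOURCE             /* for sched_getcpu() */
#include <stdatomic.h>
#include <sched.h>

#define NR_SLOTS 64             /* stand-in for NR_CPUS */

struct percpu_rlock {
	_Atomic int gate;                /* set while a writer is active */
	_Atomic long cnt[NR_SLOTS];      /* one counter per "CPU" slot   */
};

static inline int this_slot(void)
{
	int cpu = sched_getcpu();
	return (cpu < 0 ? 0 : cpu) % NR_SLOTS;
}

/* read side: cheap, only the local slot plus a load of the gate word */
static void read_lock(struct percpu_rlock *l, int *slot)
{
	for (;;) {
		*slot = this_slot();
		atomic_fetch_add(&l->cnt[*slot], 1);
		if (!atomic_load(&l->gate))
			return;          /* no writer, we're in */
		/* writer active: back out and wait for the gate to open */
		atomic_fetch_sub(&l->cnt[*slot], 1);
		while (atomic_load(&l->gate))
			sched_yield();
	}
}

static void read_unlock(struct percpu_rlock *l, int slot)
{
	atomic_fetch_sub(&l->cnt[slot], 1);
}

/* write side: rare, excludes all readers by draining every counter */
static void write_lock(struct percpu_rlock *l)
{
	int i;

	while (atomic_exchange(&l->gate, 1))     /* one writer at a time */
		sched_yield();
	for (i = 0; i < NR_SLOTS; i++)           /* wait out the readers */
		while (atomic_load(&l->cnt[i]))
			sched_yield();
}

static void write_unlock(struct percpu_rlock *l)
{
	atomic_store(&l->gate, 0);
}

The point being that the per-packet path pays one local atomic op plus a
load of a read-mostly word, and only the rare writer pays the full
cross-CPU cost.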
> I have other ideas which I've expanded upon in other emails. They
> involve creating a netif_tx_freeze() interface and getting the drivers
> to start using it.
OK, as long as we get there :-)