Open Source and information security mailing list archives
Date: Sat, 26 Jul 2008 15:18:38 +0200
From: Jarek Poplawski <jarkao2@...il.com>
To: David Miller <davem@...emloft.net>
Cc: johannes@...solutions.net, netdev@...eo.de, peterz@...radead.org,
	Larry.Finger@...inger.net, kaber@...sh.net,
	torvalds@...ux-foundation.org, akpm@...ux-foundation.org,
	netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
	linux-wireless@...r.kernel.org, mingo@...hat.com
Subject: Re: Kernel WARNING: at net/core/dev.c:1330 __netif_schedule+0x2c/0x98()

On Sat, Jul 26, 2008 at 02:18:46AM -0700, David Miller wrote:
...
> I think there might be an easier way, but we may have
> to modify the state bits a little.
>
> Every call into ->hard_start_xmit() is made like this:
>
> 	1. lock TX queue
> 	2. check TX queue stopped
> 	3. call ->hard_start_xmit() if not stopped
>
> This means that we can in fact do something like:
>
> 	unsigned int i;
>
> 	for (i = 0; i < dev->num_tx_queues; i++) {
> 		struct netdev_queue *txq;
>
> 		txq = netdev_get_tx_queue(dev, i);
> 		spin_lock_bh(&txq->_xmit_lock);
> 		netif_tx_freeze_queue(txq);
> 		spin_unlock_bh(&txq->_xmit_lock);
> 	}
>
> netif_tx_freeze_queue() just sets a new bit we add.
>
> Then we go to the ->hard_start_xmit() call sites and check this new
> "frozen" bit as well as the existing "stopped" bit.
>
> When we unfreeze each queue later, we see if it is stopped, and if not
> we schedule its qdisc for packet processing.

I guess some additional synchronization will still be needed to prevent
parallel freezes and, especially, parallel unfreezes.

Jarek P.
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html