Open Source and information security mailing list archives
 
Date:	Thu, 15 Dec 2011 20:00:06 +0100
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Stephen Hemminger <shemminger@...tta.com>
Cc:	Rick Jones <rick.jones2@...com>,
	Vijay Subramanian <subramanian.vijay@...il.com>,
	tcpdump-workers@...ts.tcpdump.org, netdev@...r.kernel.org
Subject: Re: twice past the taps, thence out to net?

On Thursday, December 15, 2011 at 10:44 -0800, Stephen Hemminger wrote:
> On Thu, 15 Dec 2011 10:32:56 -0800
> Rick Jones <rick.jones2@...com> wrote:
> 
> > 
> > > More exactly, we call dev_queue_xmit_nit() from dev_hard_start_xmit()
> > > _before_ giving skb to device driver.
> > >
> > > If device driver returns NETDEV_TX_BUSY, and a qdisc was setup on the
> > > device, packet is requeued.
> > >
> > > Later, when queue is allowed to send again packets, packet is
> > > retransmitted (and traced a second time in dev_queue_xmit_nit())
> > 
> > Is this then an unintended consequence bug, or a known feature?
> > 
> > rick
> > 
> > > You can see the 'requeues' counter from "tc -s -d qdisc" output :
> > >
> > > qdisc mq 0: dev eth2 root
> > >   Sent 29421597369 bytes 20301716 pkt (dropped 0, overlimits 0 requeues 371)
> > >   backlog 0b 0p requeues 371
> > 
> > Sure enough:
> > 
> > $ tc -s -d qdisc
> > qdisc mq 0: dev eth0 root
> >   Sent 2212158799862 bytes 1938268098 pkt (dropped 0, overlimits 0 
> > requeues 4975139)
> >   backlog 0b 0p requeues 4975139
> > 
> > rick jones
> 
> Devices work better if the driver proactively manages stop_queue/wake_queue.
> Old devices used TX_BUSY, but newer devices tend to manage the queue
> themselves.
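
The proactive pattern can be sketched in userspace (a minimal model with
hypothetical ring bookkeeping; a real driver calls netif_stop_queue() /
netif_wake_queue() on the netdev, and the constants below are assumptions
for illustration only):

```c
#include <assert.h>
#include <stdbool.h>

#define RING_SIZE   64          /* assumed descriptor ring size */
#define DESC_NEEDED 21          /* assumed worst case per skb, e.g. MAX_SKB_FRAGS + 4 */

struct tx_ring {
	int  used;              /* descriptors currently in flight */
	bool queue_stopped;
};

/* Proactive variant: enqueue, then stop the queue if the *next*
 * worst-case send might not fit.  The stack never sees TX_BUSY, so
 * nothing is requeued and dev_queue_xmit_nit() runs once per packet. */
static bool xmit(struct tx_ring *r, int descs)
{
	if (r->queue_stopped)
		return false;   /* stack should not have called us */
	r->used += descs;
	if (RING_SIZE - r->used < DESC_NEEDED)
		r->queue_stopped = true;  /* netif_stop_queue() in a real driver */
	return true;
}

/* Completion path: hardware freed descriptors; wake the queue once a
 * full worst-case packet fits again. */
static void clean(struct tx_ring *r, int descs)
{
	r->used -= descs;
	if (r->queue_stopped && RING_SIZE - r->used >= DESC_NEEDED)
		r->queue_stopped = false;  /* netif_wake_queue() */
}
```

With TX_BUSY-style drivers the stop happens only after a failed send, so the
packet bounces back to the qdisc and is traced a second time on retransmit;
here the queue is stopped before a send can ever fail.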
> 

Some 'new' drivers like igb can be fooled when the skb is GSO segmented?

Because igb_xmit_frame_ring() needs skb_shinfo(skb)->nr_frags + 4
descriptors, igb should stop its queue not at MAX_SKB_FRAGS + 4 but at
MAX_SKB_FRAGS * 4:

diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index 89d576c..989da36 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -4370,7 +4370,7 @@ netdev_tx_t igb_xmit_frame_ring(struct sk_buff *skb,
 	igb_tx_map(tx_ring, first, hdr_len);
 
 	/* Make sure there is space in the ring for the next send. */
-	igb_maybe_stop_tx(tx_ring, MAX_SKB_FRAGS + 4);
+	igb_maybe_stop_tx(tx_ring, MAX_SKB_FRAGS * 4);
 
 	return NETDEV_TX_OK;
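
A back-of-the-envelope sketch of why the smaller reserve can undercount
for a GSO skb (every constant below is an assumed illustrative value, not
quoted from the igb source): if a frag can carry more data than one
descriptor does, each frag consumes several descriptors, and reserving
MAX_SKB_FRAGS + 4 is not enough headroom for the next send.

```c
#include <assert.h>

/* Assumed illustrative values -- not taken from the driver source. */
#define MAX_SKB_FRAGS        17      /* typical with 4K pages at the time */
#define MAX_DATA_PER_TXD     16384   /* hypothetical per-descriptor data limit */
#define FRAG_MAX             32768   /* a frag backed by a compound page */

/* Descriptors consumed by one data chunk of 'len' bytes. */
static int txd_use_count(int len)
{
	return (len + MAX_DATA_PER_TXD - 1) / MAX_DATA_PER_TXD;
}

/* Worst-case descriptors for one skb: header + context overhead (4)
 * plus every frag split across multiple descriptors. */
static int worst_case_descs(void)
{
	return 4 + MAX_SKB_FRAGS * txd_use_count(FRAG_MAX);
}
```

Under these assumptions the worst case is 4 + 17 * 2 = 38 descriptors,
which exceeds MAX_SKB_FRAGS + 4 = 21 but stays within MAX_SKB_FRAGS * 4 = 68.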


--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
