Message-ID: <1463720195.18194.267.camel@edumazet-glaptop3.roam.corp.google.com>
Date:	Thu, 19 May 2016 21:56:35 -0700
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	John Fastabend <john.fastabend@...il.com>
Cc:	Alexander Duyck <alexander.duyck@...il.com>,
	netdev <netdev@...r.kernel.org>,
	Alexander Duyck <aduyck@...antis.com>,
	Jesper Dangaard Brouer <brouer@...hat.com>,
	John Fastabend <john.r.fastabend@...el.com>
Subject: Re: [RFC] net: remove busylock

On Thu, 2016-05-19 at 21:49 -0700, John Fastabend wrote:

> I plan to start looking at this again in June when I have some
> more time, FWIW. The last set of RFCs I sent out bypassed both the
> qdisc lock and the busy poll lock. I remember thinking this was a
> net win at the time, but I only did very basic testing, e.g. firing
> up n sessions of pktgen.
> 
> And since we are talking about cruft, I always thought the gso_skb
> requeue logic could be done away with as well. As far as I can tell,
> it must be left over from some historic code that was refactored or
> deleted in pre-git days. It would be much better, I think (no data),
> to use byte queue limits or some other way to ensure the driver can
> consume the packet, rather than popping and pushing skbs around.

Problem is: byte queue limits can tell the qdisc to send one packet,
and that packet may happen to be a GSO packet needing software
segmentation.

(BQL does not know the size of the next packet to be dequeued from
the qdisc.)
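
For reference, the bulk-dequeue byte budget is simply whatever BQL
credit is left on the txq, computed before we know anything about the
next skb the qdisc will hand us. Roughly (a sketch of the
qdisc_avail_bulklimit() helper, not the literal code):

static inline int qdisc_avail_bulklimit(const struct netdev_queue *txq)
{
#ifdef CONFIG_BQL
	/* bytes BQL still allows in flight on this TX queue */
	return dql_avail(&txq->dql);
#else
	return 0;
#endif
}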

Let's say this GSO packet has 45 segs.

Then the driver, with its limited TX ring space, accepts only 10 segs,
or the BQL budget is simply exhausted after 10 segs.

You need to requeue the remaining 35 segs.
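
Those leftover segs go through the one-slot gso_skb requeue path. As a
rough sketch of what net/sched/sch_generic.c does (simplified, not the
exact code):

/* Park the unsent, already-segmented leftovers in q->gso_skb and
 * reschedule the qdisc; the next dequeue must drain this side channel
 * before it may touch the real queue.
 */
static inline int dev_requeue_skb(struct sk_buff *skb, struct Qdisc *q)
{
	q->gso_skb = skb;
	q->qstats.requeues++;
	q->q.qlen++;		/* it's still part of the queue */
	__netif_schedule(q);

	return 0;
}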

So, for example, the following patch does not even help the requeue
syndrome:

diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 269dd71b3828..a440c059fbcf 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -60,7 +60,7 @@ static void try_bulk_dequeue_skb(struct Qdisc *q,
 				 const struct netdev_queue *txq,
 				 int *packets)
 {
-	int bytelimit = qdisc_avail_bulklimit(txq) - skb->len;
+	int bytelimit = qdisc_avail_bulklimit(txq) - qdisc_skb_cb(skb)->pkt_len;
 
 	while (bytelimit > 0) {
 		struct sk_buff *nskb = q->dequeue(q);
@@ -68,7 +68,7 @@ static void try_bulk_dequeue_skb(struct Qdisc *q,
 		if (!nskb)
 			break;
 
-		bytelimit -= nskb->len; /* covers GSO len */
+		bytelimit -= qdisc_skb_cb(nskb)->pkt_len;
 		skb->next = nskb;
 		skb = nskb;
 		(*packets)++; /* GSO counts as one pkt */
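
Even with pkt_len-based accounting, the driver ring or the BQL budget
can run dry in the middle of an already-segmented GSO packet, so the
dequeue side still has to check the gso_skb slot first. Again roughly
(simplified from sch_generic.c, not the exact code):

static struct sk_buff *dequeue_skb(struct Qdisc *q, bool *validate,
				   int *packets)
{
	struct sk_buff *skb = q->gso_skb;

	*packets = 1;
	if (unlikely(skb)) {
		/* Leftovers from a previous partial send: release them
		 * only if the txq can take traffic again.
		 */
		if (!netif_xmit_frozen_or_stopped(q->dev_queue)) {
			q->gso_skb = NULL;
			q->q.qlen--;
		} else {
			skb = NULL;
		}
		*validate = false;	/* already validated on first try */
		return skb;
	}
	*validate = true;
	return q->dequeue(q);		/* normal path */
}

As long as a partially consumed GSO packet can come back from the
driver, that path cannot go away, whatever the byte accounting does.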

