Date:	Sat,  5 Nov 2011 03:32:47 +0100
From:	Michal Soltys <soltys@....info>
To:	kaber@...sh.net
Cc:	davem@...emloft.net, netdev@...r.kernel.org
Subject: [PATCH 01/11] sch_hfsc.c: update_d() fixup

The deadline time is generated from the total real-time (rt) work done so far plus the size of the next packet.

Now, when a packet gets dequeued by the link-sharing criterion, we have to
update the deadline time to match the new situation (that's the job of
update_d()) - but to do that properly, we have to subtract the size of
the packet just dequeued when calling rtsc_y2x().

This is actually stated very clearly in the HFSC paper, but it
(probably) got missed back in the ALTQ days (or was dropped by some
patch at some point).
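
To make the arithmetic concrete, here is a small standalone illustration
(not part of the patch; the sizes are made up, and it only prints the
byte value that update_d() would feed to rtsc_y2x(), not the curve
lookup itself):

/* illustration only -- not kernel code */
#include <stdio.h>

int main(void)
{
	unsigned int cl_cumul = 60000;	/* rt work done so far (bytes)      */
	unsigned int curr_len = 1500;	/* packet just sent by link-sharing */
	unsigned int next_len = 1000;	/* next packet at the head of queue */

	/* old: the packet sent by link-sharing still counts as rt work */
	printf("old y = %u\n", cl_cumul + next_len);            /* 61000 */

	/* new: subtract the packet that was dequeued by link-sharing */
	printf("new y = %u\n", cl_cumul - curr_len + next_len); /* 59500 */

	return 0;
}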

Signed-off-by: Michal Soltys <soltys@....info>
---
 net/sched/sch_hfsc.c |   14 ++++++++------
 1 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/net/sched/sch_hfsc.c b/net/sched/sch_hfsc.c
index 6488e64..261accc 100644
--- a/net/sched/sch_hfsc.c
+++ b/net/sched/sch_hfsc.c
@@ -650,9 +650,9 @@ update_ed(struct hfsc_class *cl, unsigned int next_len)
 }
 
 static inline void
-update_d(struct hfsc_class *cl, unsigned int next_len)
+update_d(struct hfsc_class *cl, unsigned int curr_len, unsigned int next_len)
 {
-	cl->cl_d = rtsc_y2x(&cl->cl_deadline, cl->cl_cumul + next_len);
+	cl->cl_d = rtsc_y2x(&cl->cl_deadline, cl->cl_cumul - curr_len + next_len);
 }
 
 static inline void
@@ -1610,7 +1610,7 @@ hfsc_dequeue(struct Qdisc *sch)
 	struct hfsc_class *cl;
 	struct sk_buff *skb;
 	u64 cur_time;
-	unsigned int next_len;
+	unsigned int curr_len, next_len;
 	int realtime = 0;
 
 	if (sch->q.qlen == 0)
@@ -1640,14 +1640,16 @@ hfsc_dequeue(struct Qdisc *sch)
 	}
 
 	skb = qdisc_dequeue_peeked(cl->qdisc);
 	if (skb == NULL) {
 		qdisc_warn_nonwc("HFSC", cl->qdisc);
 		return NULL;
 	}
 
-	update_vf(cl, qdisc_pkt_len(skb), cur_time);
+	curr_len = qdisc_pkt_len(skb);
+
+	update_vf(cl, curr_len, cur_time);
 	if (realtime)
-		cl->cl_cumul += qdisc_pkt_len(skb);
+		cl->cl_cumul += curr_len;
 
 	if (cl->qdisc->q.qlen != 0) {
 		if (cl->cl_flags & HFSC_RSC) {
@@ -1656,7 +1658,7 @@ hfsc_dequeue(struct Qdisc *sch)
 			if (realtime)
 				update_ed(cl, next_len);
 			else
-				update_d(cl, next_len);
+				update_d(cl, curr_len, next_len);
 		}
 	} else {
 		/* the class becomes passive */
-- 
1.7.7.1
