Message-ID: <20190821103947.kxb2mixecndcelb6@nokia-bell-labs.com>
Date: Wed, 21 Aug 2019 10:53:20 +0000
From: "Tilmans, Olivier (Nokia - BE/Antwerp)"
<olivier.tilmans@...ia-bell-labs.com>
To: Eric Dumazet <eric.dumazet@...il.com>,
Stephen Hemminger <stephen@...workplumber.org>,
Olga Albisser <olga@...isser.org>,
"De Schepper, Koen (Nokia - BE/Antwerp)"
<koen.de_schepper@...ia-bell-labs.com>,
Bob Briscoe <research@...briscoe.net>,
Henrik Steen <henrist@...rist.net>,
Jamal Hadi Salim <jhs@...atatu.com>,
Cong Wang <xiyou.wangcong@...il.com>,
Jiri Pirko <jiri@...nulli.us>,
"David S. Miller" <davem@...emloft.net>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next v4] sched: Add dualpi2 qdisc
> +static s64 __scale_delta(s64 diff)
> +{
> +	do_div(diff, (1 << (ALPHA_BETA_GRANULARITY + 1)) - 1);
> +	return diff;
> +}
[...]
> +	delta = __scale_delta(((s64)qdelay - q->pi2.target) * q->pi2.alpha);
> +	delta += __scale_delta(((s64)qdelay - qdelay_old) * q->pi2.beta);
I just noticed that using do_div() to keep the 64-bit divide portable across
platforms introduced a bug here: do_div() only works with an unsigned dividend,
so a negative delta gets reinterpreted as a huge positive value before the division.
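For illustration, here is a minimal userland sketch of the pitfall (plain C,
not kernel code; the granularity of 8 is only a stand-in for
ALPHA_BETA_GRANULARITY). The unsigned divide mirrors what do_div() effectively
does to a negative dividend:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	int64_t delta = -1000;			/* e.g. qdelay below target */
	uint64_t dividend = (uint64_t)delta;	/* what an unsigned divide sees */
	uint64_t divisor = (1 << (8 + 1)) - 1;	/* hypothetical granularity of 8 */

	/* Signed divide gives -1; unsigned divide gives a huge positive value */
	printf("signed:   %lld\n", (long long)(delta / (int64_t)divisor));
	printf("unsigned: %llu\n", (unsigned long long)(dividend / divisor));
	return 0;
}

Keeping the delta signed and feeding only its magnitude to the unsigned divide
avoids this, as in the patch below.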
This will be fixed in the upcoming v5 with the following patch:
---
net/sched/sch_dualpi2.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/net/sched/sch_dualpi2.c b/net/sched/sch_dualpi2.c
index a6452aa82018..c6c851499d35 100644
--- a/net/sched/sch_dualpi2.c
+++ b/net/sched/sch_dualpi2.c
@@ -385,7 +385,7 @@ static struct sk_buff *dualpi2_qdisc_dequeue(struct Qdisc *sch)
 	return skb;
 }
 
-static s64 __scale_delta(s64 diff)
+static s64 __scale_delta(u64 diff)
 {
 	do_div(diff, (1 << (ALPHA_BETA_GRANULARITY + 1)) - 1);
 	return diff;
@@ -406,16 +406,18 @@ static u32 calculate_probability(struct Qdisc *sch)
 	/* Alpha and beta take at most 32b, i.e, the delay difference would
 	 * overflow for queueing delay differences > ~4.2sec.
 	 */
-	delta = __scale_delta(((s64)qdelay - q->pi2.target) * q->pi2.alpha);
-	delta += __scale_delta(((s64)qdelay - qdelay_old) * q->pi2.beta);
-	new_prob = delta + q->pi2.prob;
+	delta = ((s64)qdelay - q->pi2.target) * q->pi2.alpha;
+	delta += ((s64)qdelay - qdelay_old) * q->pi2.beta;
 	/* Prevent overflow */
 	if (delta > 0) {
+		new_prob = __scale_delta(delta) + q->pi2.prob;
 		if (new_prob < q->pi2.prob)
 			new_prob = MAX_PROB;
+	} else {
+		new_prob = q->pi2.prob - __scale_delta(delta * -1);
 		/* Prevent underflow */
-	} else if (new_prob > q->pi2.prob) {
-		new_prob = 0;
+		if (new_prob > q->pi2.prob)
+			new_prob = 0;
 	}
 	/* If we do not drop on overload, ensure we cap the L4S probability to
 	 * 100% to keep window fairness when overflowing.
--
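For completeness, the corrected update can be checked with a standalone
userland sketch (plain C with hypothetical MAX_PROB and granularity values,
not the kernel code itself); it shows the overflow/underflow clamps catching
the 32-bit wraparound in both directions:

#include <stdint.h>
#include <stdio.h>

#define MAX_PROB	UINT32_MAX	/* assumption: probability stored as u32 */
#define GRANULARITY	8		/* stand-in for ALPHA_BETA_GRANULARITY */

/* Unsigned scaling divide, mirroring what do_div() performs */
static uint64_t scale_delta(uint64_t diff)
{
	return diff / ((1 << (GRANULARITY + 1)) - 1);
}

static uint32_t update_prob(uint32_t prob, int64_t delta)
{
	uint32_t new_prob;

	if (delta > 0) {
		new_prob = scale_delta(delta) + prob;
		if (new_prob < prob)	/* wrapped around: cap at 100% */
			new_prob = MAX_PROB;
	} else {
		new_prob = prob - scale_delta(-delta);
		if (new_prob > prob)	/* wrapped around: floor at 0 */
			new_prob = 0;
	}
	return new_prob;
}

int main(void)
{
	printf("%u\n", update_prob(1000, -3000000));		/* clamps to 0 */
	printf("%u\n", update_prob(MAX_PROB - 5, 3000000));	/* clamps to MAX_PROB */
	return 0;
}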
Sorry for this.
Best,
Olivier