Message-ID: <1294165215.3579.133.camel@edumazet-laptop>
Date:	Tue, 04 Jan 2011 19:20:15 +0100
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Jesper Dangaard Brouer <hawk@...u.dk>
Cc:	Stephen Hemminger <shemminger@...tta.com>, hadi@...erus.ca,
	Jarek Poplawski <jarkao2@...il.com>,
	David Miller <davem@...emloft.net>,
	Patrick McHardy <kaber@...sh.net>,
	netdev <netdev@...r.kernel.org>
Subject: Re: [RFC] net_sched: mark packet staying on queue too long

On Tue, 04 Jan 2011 at 16:02 +0100, Eric Dumazet wrote:

> I'd like to try kind of a SFQRED implementation, ie :
> 
> classify flows, then instead of using plain pfifo queues (currently done
> in SFQ), use N pseudo RED queues.
> 
> RED is a bit complex because it estimates the drop probability from
> the average queue backlog. It also has to use expensive time services
> (on some machines at least, when no TSC is available).
> 
> My idea was to take into account the delay each packet spends in its
> queue, so that no extra state is needed: only take a timestamp when the
> packet is enqueued, compute the delta when it is dequeued, and get
> 
> Px = delta * Prob_per_time_unit;
> and drop/mark packet with Px probability.
> 
> Ram usage of SFQRED would be the same as SFQ's, and the cost roughly
> the same (because we could use jiffies-based time sampling, with
> HZ=1000 giving a 1 ms unit).
> 
> 

Here is the POC patch I am currently testing, with a probability of
"early dropping" a packet of one percent per ms (HZ=1000 here), applied
only if the packet stayed at least 4 ms on the queue.
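To make the math concrete, here is a small userspace sketch (not the kernel code itself) of the drop decision the patch makes at dequeue time; the function name and the way randomness is passed in are illustrative only, but the constants mirror the patch:

```c
/* Userspace sketch of the patch's dequeue-time RED decision:
 * with HZ=1000, each jiffy (1 ms) of queueing delay adds ~1% to the
 * mark/drop probability, expressed against a 24-bit random value. */
#define PROB_SCALE     0xFFFFFF             /* 24-bit probability space   */
#define PROB_PER_JIFFY (PROB_SCALE / 100)   /* one percent per jiffy (ms) */

static int should_mark_or_drop(long delay_jiffies, long red_delay,
			       unsigned int rnd24)
{
	if (delay_jiffies < red_delay)
		return 0;	/* packet left the queue fast enough */

	long px = delay_jiffies * PROB_PER_JIFFY;

	/* probabilistic early mark/drop, as in the patch's net_random() test */
	return (rnd24 & PROB_SCALE) < px;
}
```

At around 100 ms of delay, Px covers essentially the whole 24-bit range, so nearly every packet gets marked or dropped.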

Of course, this only applies where SFQ is used, with the known SFQ limits :)

The term "early drop" is a bit of a lie: RED really marks/drops a packet
early, at enqueue() time, while I do it at dequeue() time [since I need
to compute the delay]. But the effect on sent packets is the same. This
might use a bit more memory, but no more than current SFQ [and only if
flows don't react to marks/drops].

insmod net/sched/sch_sfq.ko red_delay=4
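For completeness, once the patched module is loaded with the parameter above, attaching SFQ is the usual tc invocation (the device name and perturb value below are just examples):

```shell
# Load the patched module with a 4-tick (4 ms at HZ=1000) RED threshold,
# then attach SFQ as the root qdisc; eth0 is an example device name.
insmod net/sched/sch_sfq.ko red_delay=4
tc qdisc add dev eth0 root sfq perturb 10
tc -s qdisc show dev eth0   # drops/overlimits also count the early RED hits
```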

By the way, I do think we should lower SFQ_DEPTH a bit and increase
SFQ_SLOTS by the same amount. Allowing 127 packets per flow does not
seem necessary in most situations where SFQ might be used.
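The constraint behind that rebalancing is the sfq_index type: as the patch context notes, it must hold at least SFQ_DEPTH + SFQ_SLOTS values, and it is an unsigned char. A tiny sketch of the budget check (the concrete splits in the test are illustrative, not proposed values):

```c
#include <limits.h>

/* sfq_index is an unsigned char, so indexes for slots and packet depth
 * must share one 256-value space: SFQ_DEPTH + SFQ_SLOTS <= 256.
 * Lowering depth therefore frees index space for more slots (flows). */
enum { SFQ_INDEX_VALUES = UCHAR_MAX + 1 };	/* 256 */

static int sfq_split_fits(int depth, int slots)
{
	return depth + slots <= SFQ_INDEX_VALUES;
}
```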

 net/sched/sch_sfq.c |   37 +++++++++++++++++++++++++++++++++----
 1 files changed, 33 insertions(+), 4 deletions(-)

diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c
index d54ac94..4f958e3 100644
--- a/net/sched/sch_sfq.c
+++ b/net/sched/sch_sfq.c
@@ -24,6 +24,8 @@
 #include <net/ip.h>
 #include <net/netlink.h>
 #include <net/pkt_sched.h>
+#include <net/inet_ecn.h>
+#include <linux/moduleparam.h>
 
 
 /*	Stochastic Fairness Queuing algorithm.
@@ -86,6 +88,10 @@
 /* This type should contain at least SFQ_DEPTH + SFQ_SLOTS values */
 typedef unsigned char sfq_index;
 
+static int red_delay; /* default : no RED handling */
+module_param(red_delay, int, 0);
+MODULE_PARM_DESC(red_delay, "mark/drop packets if they stay in queue longer than red_delay ticks");
+
 /*
  * We dont use pointers to save space.
  * Small indexes [0 ... SFQ_SLOTS - 1] are 'pointers' to slots[] array
@@ -391,6 +397,7 @@ sfq_enqueue(struct sk_buff *skb, struct Qdisc *sch)
 
 	sch->qstats.backlog += qdisc_pkt_len(skb);
 	slot_queue_add(slot, skb);
+	qdisc_skb_cb(skb)->timestamp = jiffies;
 	sfq_inc(q, x);
 	if (slot->qlen == 1) {		/* The flow is new */
 		if (q->tail == NULL) {	/* It is the first flow */
@@ -402,11 +409,8 @@ sfq_enqueue(struct sk_buff *skb, struct Qdisc *sch)
 		q->tail = slot;
 		slot->allot = q->scaled_quantum;
 	}
-	if (++sch->q.qlen <= q->limit) {
-		sch->bstats.bytes += qdisc_pkt_len(skb);
-		sch->bstats.packets++;
+	if (++sch->q.qlen <= q->limit)
 		return NET_XMIT_SUCCESS;
-	}
 
 	sfq_drop(sch);
 	return NET_XMIT_CN;
@@ -432,6 +436,7 @@ sfq_dequeue(struct Qdisc *sch)
 	sfq_index a, next_a;
 	struct sfq_slot *slot;
 
+restart:
 	/* No active slots */
 	if (q->tail == NULL)
 		return NULL;
@@ -455,12 +460,36 @@ next_slot:
 		next_a = slot->next;
 		if (a == next_a) {
 			q->tail = NULL; /* no more active slots */
+			/* last packet queued, don't even try to apply RED */
 			return skb;
 		}
 		q->tail->next = next_a;
 	} else {
 		slot->allot -= SFQ_ALLOT_SIZE(qdisc_pkt_len(skb));
 	}
+	if (red_delay) {
+		long delay = jiffies - qdisc_skb_cb(skb)->timestamp;
+
+		if (delay >= red_delay) {
+			long Px = delay * (0xFFFFFF / 100); /* 1 percent per jiffy */
+			if ((net_random() & 0xFFFFFF) < Px) {
+				if (INET_ECN_set_ce(skb)) {
+					/* no ecnmark counter yet :) */
+					sch->qstats.overlimits++;
+				} else {
+					/* penalize this flow: we drop the
+					 * packet even though slot->allot was
+					 * already charged for it
+					 */
+					kfree_skb(skb);
+					/* no early_drop counter yet :) */
+					sch->qstats.drops++;
+					goto restart;
+				}
+			}
+		}
+	}
+	sch->bstats.bytes += qdisc_pkt_len(skb);
+	sch->bstats.packets++;
 	return skb;
 }
 

