Date:   Tue,  9 Jun 2020 10:09:31 -0400
From:   Willem de Bruijn <willemdebruijn.kernel@...il.com>
To:     netdev@...r.kernel.org
Cc:     Willem de Bruijn <willemb@...gle.com>
Subject: [PATCH RFC net-next 3/6] net_sched: sch_fq: multiple release time support

From: Willem de Bruijn <willemb@...gle.com>

Optionally segment GSO skbs on FQ enqueue, to later send the segments
at their individual delivery times.
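
To illustrate the intended use, a sender would opt in per socket via
SO_TXTIME. A rough sketch, assuming the new sk_txtime_multi_release
bit is exposed through a flag in struct sock_txtime (the flag name
SOF_TXTIME_MULTI_RELEASE below is hypothetical, not part of this
series):

  #include <errno.h>
  #include <error.h>
  #include <time.h>
  #include <sys/socket.h>
  #include <linux/net_tstamp.h>

  /* Sketch only: SO_TXTIME and struct sock_txtime exist today;
   * SOF_TXTIME_MULTI_RELEASE is a hypothetical flag standing in
   * for whatever ends up setting sk->sk_txtime_multi_release.
   */
  struct sock_txtime cfg = {
          .clockid = CLOCK_MONOTONIC,
          .flags   = SOF_TXTIME_MULTI_RELEASE,
  };

  if (setsockopt(fd, SOL_SOCKET, SO_TXTIME, &cfg, sizeof(cfg)))
          error(1, errno, "setsockopt SO_TXTIME");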

Segmentation on enqueue is new for FQ, but already happens in TBF,
CAKE and netem.

This slow path should probably be behind a static_branch.
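
A minimal sketch of such a guard, using the existing static key API
(the key name and enable site are illustrative, not part of this
patch):

  /* Sketch: skip the GSO segmentation slow path entirely until the
   * first socket requests multi release, e.g. via
   * static_branch_enable(&fq_multi_release_used) at setsockopt time.
   */
  static DEFINE_STATIC_KEY_FALSE(fq_multi_release_used);

  static int fq_enqueue(struct sk_buff *skb, struct Qdisc *sch,
                        struct sk_buff **to_free)
  {
          if (!static_branch_unlikely(&fq_multi_release_used))
                  return __fq_enqueue(skb, sch, to_free);

          /* ... segmentation slow path as added by this patch ... */
  }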

Signed-off-by: Willem de Bruijn <willemb@...gle.com>
---
 net/sched/sch_fq.c | 33 +++++++++++++++++++++++++++++++--
 1 file changed, 31 insertions(+), 2 deletions(-)

diff --git a/net/sched/sch_fq.c b/net/sched/sch_fq.c
index 8f06a808c59a..a5e2c35bb557 100644
--- a/net/sched/sch_fq.c
+++ b/net/sched/sch_fq.c
@@ -439,8 +439,8 @@ static bool fq_packet_beyond_horizon(const struct sk_buff *skb,
 	return unlikely((s64)skb->tstamp > (s64)(q->ktime_cache + q->horizon));
 }
 
-static int fq_enqueue(struct sk_buff *skb, struct Qdisc *sch,
-		      struct sk_buff **to_free)
+static int __fq_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+			struct sk_buff **to_free)
 {
 	struct fq_sched_data *q = qdisc_priv(sch);
 	struct fq_flow *f;
@@ -496,6 +496,35 @@ static int fq_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 	return NET_XMIT_SUCCESS;
 }
 
+static int fq_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+		      struct sk_buff **to_free)
+{
+	struct sk_buff *segs, *next;
+	int ret;
+
+	if (likely(!skb_is_gso(skb) || !skb->sk ||
+		   !skb->sk->sk_txtime_multi_release))
+		return __fq_enqueue(skb, sch, to_free);
+
+	segs = skb_gso_segment_txtime(skb);
+	if (IS_ERR(segs))
+		return qdisc_drop(skb, sch, to_free);
+	if (!segs)
+		return __fq_enqueue(skb, sch, to_free);
+
+	consume_skb(skb);
+
+	ret = NET_XMIT_DROP;
+	skb_list_walk_safe(segs, segs, next) {
+		skb_mark_not_on_list(segs);
+		qdisc_skb_cb(segs)->pkt_len = segs->len;
+		if (__fq_enqueue(segs, sch, to_free) == NET_XMIT_SUCCESS)
+			ret = NET_XMIT_SUCCESS;
+	}
+
+	return ret;
+}
+
 static void fq_check_throttled(struct fq_sched_data *q, u64 now)
 {
 	unsigned long sample;
-- 
2.27.0.278.ge193c7cf3a9-goog
