Date:   Sun, 12 Jul 2020 11:40:01 +1000
From:   Russell Strong <russell@...ong.id.au>
To:     Stephen Hemminger <stephen@...workplumber.org>
Cc:     netdev@...r.kernel.org
Subject: Re: amplifying qdisc

On Wed, 8 Jul 2020 23:26:34 -0700
Stephen Hemminger <stephen@...workplumber.org> wrote:

> On Thu, 9 Jul 2020 16:10:34 +1000
> Russell Strong <russell@...ong.id.au> wrote:
> 
> > Hi,
> > 
> > I'm attempting to fill a link with background traffic that is sent
> > whenever the link is idle.  To do this I've created a qdisc that
> > will repeat the last packet in the queue a defined number of
> > times (possibly infinite in the future).  I am able to control the
> > contents of the fill traffic by sending the occasional packet
> > through this qdisc.
> > 
> > This works as the root qdisc and below a TBF.  When I try it as a
> > leaf of HTB, unexpected behaviour ensues.  I suspect my approach is
> > violating some rules for qdiscs?  Any help/ideas/pointers would be
> > appreciated.  
> 
> Netem can already do things like this. Why not add to that?
> 

Hi,

I tried doing this within netem as follows, but ran into similar
problems.  It works as the root qdisc (except for "Route cache is full:
consider increasing sysctl net.ipv[4|6].route.max_size.") but not under
HTB.  I am attempting to duplicate at dequeue rather than at enqueue, so
that I get an infinite stream of packets instead of a fixed number of
duplicates.  Is this possible?
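
To make the two cases concrete, here is roughly the kind of setup I'm
testing (eth0 and the rates are only placeholder values, not my real
configuration).  For comparison, the enqueue-time duplication netem
already has is the "duplicate" option, which only gives a bounded
number of copies per packet actually sent:

  # existing enqueue-time duplication: each packet has a 10% chance
  # of being queued twice
  tc qdisc add dev eth0 root netem duplicate 10%

  # my patched netem behaves as expected when attached at the root ...
  tc qdisc add dev eth0 root netem limit 1000

  # ... but misbehaves when attached as an HTB leaf like this
  tc qdisc add dev eth0 root handle 1: htb default 10
  tc class add dev eth0 parent 1: classid 1:10 htb rate 10mbit
  tc qdisc add dev eth0 parent 1:10 handle 10: netem limit 1000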

Thanks
Russell


diff --git a/sch_netem.c b/sch_netem.c
index 42e557d..9a674df 100644
--- a/sch_netem.c
+++ b/sch_netem.c
@@ -98,6 +98,7 @@ struct netem_sched_data {
        u32 cell_size;
        struct reciprocal_value cell_size_reciprocal;
        s32 cell_overhead;
+       u32 repeat_last;
 
        struct crndstate {
                u32 last;
@@ -697,9 +698,13 @@ deliver:
                        get_slot_next(q, now);
 
                if (time_to_send <= now && q->slot.slot_next <= now) {
-                       netem_erase_head(q, skb);
-                       sch->q.qlen--;
-                       qdisc_qstats_backlog_dec(sch, skb);
+                       if (sch->q.qlen == 1 && q->repeat_last)
+                               skb = skb_clone(skb, GFP_ATOMIC);
+                       else {
+                               netem_erase_head(q, skb);
+                               sch->q.qlen--;
+                               qdisc_qstats_backlog_dec(sch, skb);
+                       }
                        skb->next = NULL;
                        skb->prev = NULL;
                        /* skb->dev shares skb->rbnode area,
@@ -1061,6 +1066,7 @@ static int netem_init(struct Qdisc *sch, struct nlattr *opt,
                return -EINVAL;
 
        q->loss_model = CLG_RANDOM;
+       q->repeat_last = 1;
        ret = netem_change(sch, opt, extack);
        if (ret)
                pr_info("netem: change failed\n");
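
One thing I know the clone branch above still needs: skb_clone() can
return NULL under GFP_ATOMIC, and the patched code would then
dereference it at skb->next = NULL.  Untested sketch of the fallback I
have in mind, using the same variables as in the hunk above:

        if (sch->q.qlen == 1 && q->repeat_last) {
                struct sk_buff *copy = skb_clone(skb, GFP_ATOMIC);

                if (copy) {
                        /* leave the original at the head of the
                         * queue and hand the copy downstream */
                        skb = copy;
                } else {
                        /* clone failed: fall back to the normal
                         * dequeue path for this packet */
                        netem_erase_head(q, skb);
                        sch->q.qlen--;
                        qdisc_qstats_backlog_dec(sch, skb);
                }
        } else {
                netem_erase_head(q, skb);
                sch->q.qlen--;
                qdisc_qstats_backlog_dec(sch, skb);
        }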
