Message-ID: <651ade58-af15-3980-562f-dd3d461f2fcb@gmail.com>
Date:   Wed, 27 Feb 2019 07:53:23 -0800
From:   Eric Dumazet <eric.dumazet@...il.com>
To:     Sheng Lan <lansheng@...wei.com>,
        Eric Dumazet <eric.dumazet@...il.com>,
        Stephen Hemminger <stephen@...workplumber.org>
Cc:     davem@...emloft.net, netdev@...r.kernel.org,
        netem@...ts.linux-foundation.org, xuhanbing@...wei.com,
        zhengshaoyu@...wei.com, jiqin.ji@...wei.com,
        liuzhiqiang26@...wei.com, yuehaibing@...wei.com
Subject: Re: [PATCH] net: netem: fix skb length BUG_ON in __skb_to_sgvec



On 02/27/2019 03:26 AM, Sheng Lan wrote:
> 

> I traced again and found that the skb was not sent: the master skb was still in the write queue,
> because tcp_transmit_skb() returned 1 (NET_XMIT_DROP), so it could be retransmitted.
> The NET_XMIT_DROP error value comes from netem_enqueue(), when the length of the qdisc queue
> is greater than the queue limit.
> 
> In netem_enqueue() the skb is cloned before the NET_XMIT_DROP error value is returned,
> so the master skb is still in the write queue and has already been cloned by netem_enqueue(). This can cause
> the master skb to be retransmitted and fragmented again while it is cloned.
> 
> I think there is a potential risk that tso_fragment() will get a cloned skb if the skb is cloned by a lower layer.
> I tried to fix it by moving the error-return statement in front of the skb_clone() call in netem_enqueue(), and it works.
> The corrupt-packet construction path in netem_enqueue() returns NET_XMIT_DROP too. To fix this completely,
> should I move the corruption logic in front of the skb_clone() as well?
> Please correct me if I am wrong; I need your advice.
> 

Hi

Choices are either:

1) netem returns a proper return value when a packet has been queued.

2) TCP (and probably other protocols) no longer trusts the lower stack
   and always moves the master skb into the rtx queue,
   even if the transmit of the (first) clone failed.

I prefer 1), since netem is not used in the fast path, generally...
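For reference, the hazard the report describes (reshaping an skb whose payload is shared with a clone) can be sketched in user space. The toy_* names below are made up for illustration and are not kernel APIs; the dataref field stands in for the shared-data refcount that skb_cloned() checks before tcp_fragment()/tso_fragment() is allowed to reshape a packet:

```c
#include <assert.h>
#include <stdlib.h>

/* Toy model, not kernel code: a clone shares the payload with the
 * master, analogous to skb_clone() sharing skb_shared_info. */
struct toy_skb {
	int len;      /* payload length */
	int *dataref; /* shared-data refcount (models skb_shared_info->dataref) */
};

static struct toy_skb toy_alloc(int len)
{
	struct toy_skb skb;

	skb.len = len;
	skb.dataref = malloc(sizeof(int));
	*skb.dataref = 1;
	return skb;
}

static struct toy_skb toy_clone(struct toy_skb *master)
{
	struct toy_skb clone = *master; /* header copied, payload shared */

	(*clone.dataref)++;
	return clone;
}

static int toy_cloned(const struct toy_skb *skb)
{
	return *skb->dataref > 1;
}

/* Fragmenting must not reshape a shared payload: if the skb is cloned,
 * refuse (the real code would have to copy the data first). */
static int toy_tso_fragment(struct toy_skb *skb, int mss)
{
	if (toy_cloned(skb))
		return -1;
	skb->len = mss;
	return 0;
}
```

If netem returns NET_XMIT_DROP after cloning, TCP keeps the master in the write queue and may refragment it while the clone is in flight, which is exactly the shared-payload reshaping this check is meant to prevent.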

diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
index 75046ec7214449c631c38eaab5e4a51644cfa0e5..f6ea2d44dffe328a2fd1a468e209aac0bfaccd49 100644
--- a/net/sched/sch_netem.c
+++ b/net/sched/sch_netem.c
@@ -447,6 +447,7 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch,
        int nb = 0;
        int count = 1;
        int rc = NET_XMIT_SUCCESS;
+       int rc_drop = NET_XMIT_DROP;
 
        /* Do not fool qdisc_drop_all() */
        skb->prev = NULL;
@@ -486,6 +487,7 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch,
                q->duplicate = 0;
                rootq->enqueue(skb2, rootq, to_free);
                q->duplicate = dupsave;
+               rc_drop = NET_XMIT_SUCCESS;
        }
 
        /*
@@ -498,7 +500,7 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch,
                if (skb_is_gso(skb)) {
                        segs = netem_segment(skb, sch, to_free);
                        if (!segs)
-                               return NET_XMIT_DROP;
+                               return rc_drop;
                } else {
                        segs = skb;
                }
@@ -521,9 +523,10 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch,
                        1<<(prandom_u32() % 8);
        }
 
-       if (unlikely(sch->q.qlen >= sch->limit))
-               return qdisc_drop_all(skb, sch, to_free);
-
+       if (unlikely(sch->q.qlen >= sch->limit)) {
+               qdisc_drop_all(skb, sch, to_free);
+               return rc_drop;
+       }
        qdisc_qstats_backlog_inc(sch, skb);
 
        cb = netem_skb_cb(skb);

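The intent of the rc_drop change above can be modeled in a few lines of user-space C. The toy_netem_enqueue() name and its flag arguments are illustrative only: once a duplicate has been handed to the root qdisc, an overlimit drop of the original is reported as success, so TCP moves the master skb to the rtx queue instead of keeping a cloned skb in the write queue:

```c
/* Return codes as in include/linux/netdevice.h. */
enum {
	NET_XMIT_SUCCESS = 0x00,
	NET_XMIT_DROP    = 0x01,
};

/* Toy model of the patched return-value logic in netem_enqueue():
 * duplicated = a copy was already enqueued to the root qdisc,
 * over_limit = sch->q.qlen >= sch->limit at enqueue time. */
static int toy_netem_enqueue(int duplicated, int over_limit)
{
	int rc_drop = NET_XMIT_DROP;

	if (duplicated)
		rc_drop = NET_XMIT_SUCCESS; /* a clone already made it in */

	if (over_limit)
		return rc_drop; /* drop the original, report accordingly */

	return NET_XMIT_SUCCESS;
}
```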