Date:   Wed, 25 Jul 2018 21:06:10 -0700
From:   Eric Dumazet <eric.dumazet@...il.com>
To:     Yafang Shao <laoar.shao@...il.com>,
        Eric Dumazet <eric.dumazet@...il.com>
Cc:     David Miller <davem@...emloft.net>,
        Eric Dumazet <edumazet@...gle.com>,
        netdev <netdev@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH net-next] tcp: add SNMP counter for the number of packets
 pruned from ofo queue



On 07/25/2018 06:42 PM, Yafang Shao wrote:
 
> 
> Hi Eric,
> 
> LINUX_MIB_TCPOFOQUEUE, LINUX_MIB_TCPOFODROP and LINUX_MIB_TCPOFOMERGE
> all count the number of SKBs, but only LINUX_MIB_OFOPRUNED counts
> events, which could lead to misunderstanding.
> So I think introducing a counter for the number of SKBs pruned would be
> better; it would help us track the whole behavior of the ofo queue.
> That is why I submitted this patch.

Sure, and I said your patch had issues:
you mixed up 'packets' and 'skbs', but apparently you do not get my point.

I would rather not add another SNMP counter; instead, I would refine the
current one so that it tracks something more meaningful.

The notion of an 'skb' is internal to the kernel and cannot easily be mapped
to a 'number of network segments', which is probably more useful to the user.
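
To make that mapping concrete (the helper below is only a sketch of mine,
not something in the tree): an aggregated skb represents gso_segs segments,
and a plain skb represents one, which is exactly the max_t(u16, 1, ...)
expression used in the patch:

/*
 * Sketch only: approximate the number of network segments carried by
 * an skb sitting in the ofo queue.  skb_shinfo(skb)->gso_segs is 0 for
 * a non-aggregated skb, so clamp it to at least 1.
 */
static inline unsigned int skb_ofo_segs(const struct sk_buff *skb)
{
	return max_t(u16, 1, skb_shinfo(skb)->gso_segs);
}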

I will send this instead:

diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index d51fa358b2b196d0f9c258b24354813f2128a675..141a062abd0660c8f6d049de1dc7c7ecf7a7230d 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -5001,18 +5001,19 @@ static bool tcp_prune_ofo_queue(struct sock *sk)
 {
        struct tcp_sock *tp = tcp_sk(sk);
        struct rb_node *node, *prev;
+       unsigned int segs = 0;
        int goal;
 
        if (RB_EMPTY_ROOT(&tp->out_of_order_queue))
                return false;
 
-       NET_INC_STATS(sock_net(sk), LINUX_MIB_OFOPRUNED);
        goal = sk->sk_rcvbuf >> 3;
        node = &tp->ooo_last_skb->rbnode;
        do {
                prev = rb_prev(node);
                rb_erase(node, &tp->out_of_order_queue);
                goal -= rb_to_skb(node)->truesize;
+               segs += max_t(u16, 1, skb_shinfo(rb_to_skb(node))->gso_segs);
                tcp_drop(sk, rb_to_skb(node));
                if (!prev || goal <= 0) {
                        sk_mem_reclaim(sk);
@@ -5023,6 +5024,7 @@ static bool tcp_prune_ofo_queue(struct sock *sk)
                }
                node = prev;
        } while (node);
+       NET_ADD_STATS(sock_net(sk), LINUX_MIB_OFOPRUNED, segs);
        tp->ooo_last_skb = rb_to_skb(prev);
 
        /* Reset SACK state.  A conforming SACK implementation will
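
For what it's worth, the counter keeps its existing name with this change,
so (assuming the usual "OfoPruned" entry exported under TcpExt in
/proc/net/netstat) it can still be watched with something like:

	nstat -az TcpExtOfoPruned

Only its meaning shifts: from 'number of prune events' to an approximate
'number of network segments dropped while pruning'.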
