Message-ID: <4606.1360878661@death.nxdomain>
Date:	Thu, 14 Feb 2013 13:51:01 -0800
From:	Jay Vosburgh <fubar@...ibm.com>
To:	David Miller <davem@...emloft.net>
cc:	netdev@...r.kernel.org, eric.dumazet@...il.com, andy@...yhouse.net
Subject: Re: bonding inactive slaves vs rx_dropped

David Miller <davem@...emloft.net> wrote:

>People are starting to notice that rx_dropped now increments on every
>packet received on a bond's inactive slave.
>
>I'm actually fine with rx_dropped incrementing in this situation.
>
>The problem I want to address is that rx_dropped is encompassing
>several unrelated situations and thus has become less useful for
>diagnosis.
>
>I think we should add some new RX stats such that we can get at
>least a small amount of granularity for rx_dropped.
>
>This way team, bond, etc. can increment a new netdev_stats->rx_foo in
>this situation, and then someone doing diagnosis can see that
>rx_dropped and rx_foo are incrementing at similar rates.

	This drop isn't really happening in bonding, though.  From
looking at the code, it comes about because, for the inactive slave, the
rx_handler call returns EXACT, and there aren't any exact match ptype
bindings, so __netif_receive_skb throws it away.  This isn't always the
case; sometimes there is an exact match, for things like iSCSI or FCoE
that are really determined to get the packet.
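
	For reference, the relevant path (heavily condensed and
paraphrased from __netif_receive_skb() in net/core/dev.c, so not the
literal source) looks roughly like this:

	rx_handler = rcu_dereference(skb->dev->rx_handler);
	if (rx_handler) {
		switch (rx_handler(&skb)) {
		case RX_HANDLER_CONSUMED:	/* handler consumed the skb */
			goto unlock;
		case RX_HANDLER_ANOTHER:	/* reprocess as the master device */
			goto another_round;
		case RX_HANDLER_EXACT:		/* inactive slave: exact matches only */
			deliver_exact = true;
			break;
		case RX_HANDLER_PASS:
			break;
		}
	}

	/* with deliver_exact set, wildcard ptypes (ptype->dev == NULL) no longer match */
	null_or_dev = deliver_exact ? skb->dev : NULL;

	type = skb->protocol;
	list_for_each_entry_rcu(ptype,
			&ptype_base[ntohs(type) & PTYPE_HASH_MASK], list) {
		if (ptype->type == type &&
		    (ptype->dev == null_or_dev || ptype->dev == skb->dev ||
		     ptype->dev == orig_dev))
			pt_prev = ptype;	/* will be delivered */
	}

	if (!pt_prev) {
		atomic_long_inc(&skb->dev->rx_dropped);	/* the counter in question */
		kfree_skb(skb);
	}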

	On a non-bonded interface, the same drop path (except for the
rx_handler step) is taken for incoming packets of an unsupported protocol.

	We could probably add an, oh, rx_dropped_inactive, or some
variation on that theme, that is incremented at the end of
__netif_receive_skb if deliver_exact is set, e.g., something like:

diff --git a/net/core/dev.c b/net/core/dev.c
index a87bc74..4cd7c1f 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3559,6 +3559,8 @@ ncls:
 	} else {
 drop:
 		atomic_long_inc(&skb->dev->rx_dropped);
+		if (deliver_exact)
+			atomic_long_inc(&skb->dev->rx_dropped_inactive);
 		kfree_skb(skb);
 		/* Jamal, now you will not able to escape explaining
 		 * me how you were going to use this. :-)
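
	(For the hunk above to actually build, struct net_device would
also need a companion field next to the existing rx_dropped counter in
include/linux/netdevice.h -- hypothetical name, just to show the shape
of it:

		atomic_long_t	rx_dropped;		/* existing: dropped by the core stack */
		atomic_long_t	rx_dropped_inactive;	/* new: soft-rejected by an rx_handler */

plus whatever reporting plumbing we settle on.)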

	I think this would only hit frames that have been soft-rejected
(rx_handler says EXACT) by bonding or team, but were not subsequently
delivered to an exact match listener.

	There are the separate questions of whether there should be more
counters (e.g., for drops in dev_forward_skb or enqueue_to_backlog), and
of how to deliver the counter(s) to user space.
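
	For the latter, the simplest illustration would be to fold the
new counter in next to the existing rx_dropped accumulation in
dev_get_stats() -- purely a sketch, since rtnl_link_stats64 would need
a new member (or we'd use an ethtool stat) for it to land in:

	/* dev_get_stats(), net/core/dev.c */
	storage->rx_dropped += atomic_long_read(&dev->rx_dropped);
	/* hypothetical: assumes a matching rtnl_link_stats64 field */
	storage->rx_dropped_inactive +=
		atomic_long_read(&dev->rx_dropped_inactive);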

	-J

---
	-Jay Vosburgh, IBM Linux Technology Center, fubar@...ibm.com

