Date:	Wed, 14 May 2014 12:42:11 -0700
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Jon Maloy <maloy@...jonn.com>
Cc:	Jon Maloy <jon.maloy@...csson.com>,
	"davem@...emloft.net" <davem@...emloft.net>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	Paul Gortmaker <paul.gortmaker@...driver.com>,
	Erik Hugne <erik.hugne@...csson.com>,
	"ying.xue@...driver.com" <ying.xue@...driver.com>,
	"tipc-discussion@...ts.sourceforge.net" 
	<tipc-discussion@...ts.sourceforge.net>
Subject: Re: [PATCH net-next v2 2/8] tipc: compensate for double accounting
 in socket rcv buffer

On Wed, 2014-05-14 at 15:00 -0400, Jon Maloy wrote:

> This is where I don't get it.
> 
> sk_add_backlog(limit) is (via sk_rcvqueues_full) testing for
> 
>  (sk_backlog.len + sk_rmem_alloc) > limit
> 
> But, if the receiving user is slow, sk_rmem_alloc will eventually run full, even if we
> reduce sk_backlog.len by the truesize of each transferred buffer, and sk_add_backlog
> should then start throwing away packets. Why doesn't this happen?

It definitely can happen if the sender sends small packets, which
have a high truesize/len ratio.
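
For reference, here is roughly what the admission check looks like; this
is a sketch of sk_rcvqueues_full() from include/net/sock.h of this era,
paraphrased rather than quoted verbatim:

static inline bool sk_rcvqueues_full(const struct sock *sk,
				     const struct sk_buff *skb,
				     unsigned int limit)
{
	/* Both queues are charged by skb->truesize, which is why small
	 * packets with a large truesize/len ratio eat the budget quickly.
	 * The current skb's own truesize is deliberately not counted,
	 * so even a single big packet can always get in.
	 */
	unsigned int qsize = sk->sk_backlog.len + atomic_read(&sk->sk_rmem_alloc);

	return qsize > limit;	/* true -> sk_add_backlog() rejects the skb */
}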

$ nstat -a | egrep "TcpExtTCPBacklogDrop|IpInReceives|TcpExtTCPRcvCoalesce"
IpInReceives                    8357544624         0.0
TcpExtTCPBacklogDrop            13                 0.0
TcpExtTCPRcvCoalesce            437826621          0.0
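
(Note the ratio: 13 backlog drops against roughly 8.4 billion incoming
packets on this machine. It happens, but it is rare.)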

You claim that "sk_backlog.len does not give correct information about
the buffer situation", but it really does.

Your problem seems to be that you do not use an appropriate 'limit',
or that you assume very tight scheduling constraints (an incoming packet
has to be consumed immediately by the receiver, otherwise the following
packet might be dropped).

If rcvbuf_limit(sk, buf) is the limit for normal packets (sk_rmem_alloc)
in the receive queue, then you need something bigger to allow bursts.


diff --git a/net/tipc/socket.c b/net/tipc/socket.c
index 3f9912f87d0d..fe4f37d8029a 100644
--- a/net/tipc/socket.c
+++ b/net/tipc/socket.c
@@ -1457,7 +1457,7 @@ u32 tipc_sk_rcv(struct sock *sk, struct sk_buff *buf)
 	if (!sock_owned_by_user(sk)) {
 		res = filter_rcv(sk, buf);
 	} else {
-		if (sk_add_backlog(sk, buf, rcvbuf_limit(sk, buf)))
+		if (sk_add_backlog(sk, buf, 2 * rcvbuf_limit(sk, buf)))
 			res = TIPC_ERR_OVERLOAD;
 		else
 			res = TIPC_OK;
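
This way the backlog queue keeps headroom of roughly one extra
rcvbuf_limit() worth of truesize for packets queued while the socket
is owned by the user, even when sk_rmem_alloc is already close to its
limit.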



