Message-ID: <5373BD5C.9030703@donjonn.com>
Date:	Wed, 14 May 2014 15:00:44 -0400
From:	Jon Maloy <maloy@...jonn.com>
To:	Eric Dumazet <eric.dumazet@...il.com>,
	Jon Maloy <jon.maloy@...csson.com>
CC:	"davem@...emloft.net" <davem@...emloft.net>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	Paul Gortmaker <paul.gortmaker@...driver.com>,
	Erik Hugne <erik.hugne@...csson.com>,
	"ying.xue@...driver.com" <ying.xue@...driver.com>,
	"tipc-discussion@...ts.sourceforge.net" 
	<tipc-discussion@...ts.sourceforge.net>
Subject: Re: [PATCH net-next v2 2/8] tipc: compensate for double accounting
 in socket rcv buffer

On 05/14/2014 01:45 PM, Eric Dumazet wrote:
> On Wed, 2014-05-14 at 12:53 +0000, Jon Maloy wrote:
>
>> For us, the underlying problem is that sk_backlog.len does not
>> give correct information about the buffer situation.  There is a comment
>> in __release_sock() trying to explain why:
>>
>> /*
>>   * Doing the zeroing here guarantee we can not loop forever
>>   * while a wild producer attempts to flood us.
>>   */
>>
>> but I fail to understand how this scenario can happen even with TCP.
>> Yes, it can throw away packets, but not until the receive buffer is full,
>> and then sk_add_backlog() should start rejecting new messages anyway?
>> There is evidently something I have missed here.
> The following can happen :
>
> An innocent user thread does a socket system call.
> It owns the socket.
>
> Then a flood of incoming messages happens, constantly trying to push new
> packets for this socket. Note the packets can be spoofed ones.
>
> The softirq handler notices the socket is owned by 'user', so it queues packets
> into the backlog, unless the sk_rcvbuf limit is hit.
>
> If we were releasing sk_backlog.len for every dequeued skb, we could
> have a deadlock for the innocent user thread, who could never exit from
> __release_sock()
>
> http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=8eae939f1400326b06d0c9afe53d2a484a326871

This is where I don't get it.

sk_add_backlog(limit) is (via sk_rcvqueues_full) testing for

 (sk_backlog.len + sk_rmem_alloc) > limit
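
i.e., roughly this (abridged from include/net/sock.h of that era; I am
paraphrasing from memory, so the details may be slightly off):

static inline bool sk_rcvqueues_full(const struct sock *sk,
				     const struct sk_buff *skb,
				     unsigned int limit)
{
	unsigned int qsize = sk->sk_backlog.len + atomic_read(&sk->sk_rmem_alloc);

	return qsize > limit;
}

static inline __must_check int sk_add_backlog(struct sock *sk,
					      struct sk_buff *skb,
					      unsigned int limit)
{
	if (sk_rcvqueues_full(sk, skb, limit))
		return -ENOBUFS;

	__sk_add_backlog(sk, skb);
	sk->sk_backlog.len += skb->truesize;
	return 0;
}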

But if the receiving user is slow, sk_rmem_alloc will eventually fill up, even if we
reduce sk_backlog.len by the truesize of each transferred buffer, and sk_add_backlog
should then start throwing away packets. Why doesn't this happen?
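
For reference, the loop with the comment I quoted above looks roughly like
this (abridged from __release_sock() in net/core/sock.c; again paraphrased,
so details may differ):

static void __release_sock(struct sock *sk)
{
	struct sk_buff *skb = sk->sk_backlog.head;

	do {
		/* Take the current backlog private and process it with the
		 * socket spinlock released, so softirq can keep queueing.
		 */
		sk->sk_backlog.head = sk->sk_backlog.tail = NULL;
		bh_unlock_sock(sk);

		do {
			struct sk_buff *next = skb->next;

			skb->next = NULL;
			sk_backlog_rcv(sk, skb);
			skb = next;
		} while (skb != NULL);

		bh_lock_sock(sk);
	} while ((skb = sk->sk_backlog.head) != NULL);

	/* The comment quoted above sits here: sk_backlog.len is only
	 * reset once the whole backlog has been drained, never per skb.
	 */
	sk->sk_backlog.len = 0;
}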

I understand that the receiving user will be kept too busy to do any useful
work, but this shouldn't result in the whole system running out of memory,
as is stated in the link. I am still confused.

///jon

> So really you do not want to 'relax' this check.

I am ok with that as long as we can work around it.

Regards
///jon

>
> All you need to do is have a big enough sk_rcvbuf for the expected and
> reasonable amount of memory you allow to be stored in input queues for
> your socket.
>
>
>
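
For completeness, in case it helps others following the thread: from user
space that limit is normally just the ordinary SO_RCVBUF setting (the kernel
doubles the requested value and caps it at net.core.rmem_max), e.g. something
like this illustrative helper:

#include <sys/socket.h>

/* Illustrative only; fd is an already created socket,
 * e.g. socket(AF_TIPC, SOCK_RDM, 0).
 */
static int set_big_rcvbuf(int fd)
{
	int rcvbuf = 4 * 1024 * 1024;	/* example value only */

	return setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf));
}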

