Date:	Mon, 10 Aug 2015 13:29:15 -0400
From:	Jason Baron <jbaron@...mai.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
CC:	davem@...emloft.net, netdev@...r.kernel.org
Subject: Re: [RFC PATCH net-next] tcp: reduce cpu usage under tcp memory pressure when SO_SNDBUF is set

On 08/10/2015 10:47 AM, Eric Dumazet wrote:
> On Fri, 2015-08-07 at 18:31 +0000, Jason Baron wrote:
>> From: Jason Baron <jbaron@...mai.com>
>>
>> When SO_SNDBUF is set and we are under tcp memory pressure, the effective write
>> buffer space can be much lower than what was set using SO_SNDBUF. For example,
>> we may have set the buffer to 100kb, but we may only be able to write 10kb. In
>> this scenario poll()/select()/epoll() are going to continuously return POLLOUT,
>> followed by -EAGAIN from write() in a very tight loop.
>>
>> Introduce sk->sk_effective_sndbuf, such that we can track the 'effective' size
>> of the sndbuf, when we have a short write due to memory pressure. By using the
>> sk->sk_effective_sndbuf instead of the sk->sk_sndbuf when we are under memory
>> pressure, we can delay the POLLOUT until 1/3 of the buffer clears as we normally
>> do. There is no issue here when SO_SNDBUF is not set, since the tcp layer will
>> auto tune the sk->sndbuf.
>>
>> In my testing, this brought a single thread's cpu usage down from 100% to 1%
>> while maintaining the same level of throughput when under memory pressure.
>>
> 
> I am not sure we need to grow socket for something that looks like a
> flag ?
>


So I added a new field because I needed to store the new 'effective'
sndbuf somewhere and then restore the original value that was set via
SO_SNDBUF. So it's really because of SO_SNDBUF. We could perhaps use the
fact that we are in memory pressure to signal wakeups differently, but
I'm not sure exactly how.


> Also you add a race in sk_stream_wspace() as sk_effective_sndbuf value
> can change under us.
> 
> +       if (sk->sk_effective_sndbuf)
> +               return sk->sk_effective_sndbuf - sk->sk_wmem_queued;
> +
> 

thanks. better?

--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -798,8 +798,10 @@ static inline int sk_stream_min_wspace(const struct sock *sk)

 static inline int sk_stream_wspace(const struct sock *sk)
 {
-       if (sk->sk_effective_sndbuf)
-               return sk->sk_effective_sndbuf - sk->sk_wmem_queued;
+       int effective_sndbuf = sk->sk_effective_sndbuf;
+
+       if (effective_sndbuf)
+               return effective_sndbuf - sk->sk_wmem_queued;

        return sk->sk_sndbuf - sk->sk_wmem_queued;
 }


Thanks,

-Jason