Message-ID: <1439309530.1084.31.camel@edumazet-glaptop2.roam.corp.google.com>
Date:	Tue, 11 Aug 2015 09:12:10 -0700
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Jason Baron <jbaron@...mai.com>
Cc:	davem@...emloft.net, netdev@...r.kernel.org
Subject: Re: [PATCH net-next v2] tcp: reduce cpu usage under tcp memory pressure when SO_SNDBUF is set

On Tue, 2015-08-11 at 11:03 -0400, Jason Baron wrote:

> 
> Yes, so the test case I'm using to test against is somewhat contrived,
> in that I am simply allocating around 40,000 idle sockets to create
> 'permanent' memory pressure in the background. Then I have just one
> flow that sets SO_SNDBUF, which results in the poll()/write() loop.
> 
> That said, we initially encountered this issue where we had 10,000+
> flows, and whenever the system got into memory pressure we would see
> all the cpus spin at 100%.
> 
> So the test case I wrote was just a simplistic version for testing. But
> I am going to try to test against the more realistic workload where
> this issue was initially observed.
> 

Note that I am still trying to understand why we need to grow the socket
structure for something which is inherently a problem of sharing memory
with an unknown (potentially large) number of sockets.

I suggested using a flag (one bit).

If set, we should fall back to tcp_wmem[0] (each socket keeps 4096
bytes, so that we can avoid starvation).
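A minimal user-space sketch of that one-bit-flag idea is below. All of the names (`sock_sketch`, `mem_pressure_seen`, `effective_sndbuf`) are illustrative, not actual kernel identifiers, and whether the flag should be cleared once pressure relents is left open here — the point is only that, under global memory pressure, a flagged socket falls back to the tcp_wmem[0] minimum instead of spinning in the poll()/write() loop:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch; default tcp_wmem values: min, default, max. */
static int sysctl_tcp_wmem[3] = { 4096, 16384, 4194304 };
static bool tcp_memory_pressure; /* stands in for the global pressure state */

struct sock_sketch {
	int          sk_sndbuf;             /* value set via SO_SNDBUF */
	unsigned int mem_pressure_seen : 1; /* the proposed one-bit flag */
};

/* Effective send-buffer limit: once the flag is set under global memory
 * pressure, fall back to tcp_wmem[0], so each socket still keeps 4096
 * bytes and cannot be starved by the other sockets sharing the memory.
 * In this sketch the flag stays set (an assumption, not from the mail). */
static int effective_sndbuf(struct sock_sketch *sk)
{
	if (tcp_memory_pressure)
		sk->mem_pressure_seen = 1;
	if (sk->mem_pressure_seen)
		return sysctl_tcp_wmem[0];
	return sk->sk_sndbuf;
}
```

Since the flag is a single bit carved out of existing flag space, this avoids enlarging the socket structure for what is a global, not per-socket, condition.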


