Message-Id: <20170608.112639.1539826730157752352.davem@davemloft.net>
Date:   Thu, 08 Jun 2017 11:26:39 -0400 (EDT)
From:   David Miller <davem@...emloft.net>
To:     eric.dumazet@...il.com
Cc:     netdev@...r.kernel.org
Subject: Re: [PATCH v2 net-next] tcp: add TCPMemoryPressuresChrono counter

From: Eric Dumazet <eric.dumazet@...il.com>
Date: Wed, 07 Jun 2017 13:29:12 -0700

> From: Eric Dumazet <edumazet@...gle.com>
> 
> The DRAM supply shortage and poor memory pressure tracking in the TCP
> stack make any change in SO_SNDBUF/SO_RCVBUF (or equivalent autotuning
> limits) and tcp_mem[] quite hazardous.
> 
> The TCPMemoryPressures SNMP counter indicates that the tcp_mem sysctl
> limits are being hit, but it only tracks the number of transitions.
> 
> If TCP stack behavior under stress were perfect:
> 1) It would maintain memory usage close to the limit.
> 2) Memory pressure state would be entered only for short periods.
> 
> We certainly prefer 100 events lasting 10 ms each over a single event
> lasting 200 seconds.
> 
> This patch adds a new SNMP counter that tracks the cumulative duration
> of memory pressure events, in milliseconds.
> 
> $ cat /proc/sys/net/ipv4/tcp_mem
> 3088    4117    6176
> $ grep TCP /proc/net/sockstat
> TCP: inuse 180 orphan 0 tw 2 alloc 234 mem 4140
> $ nstat -n ; sleep 10 ; nstat |grep Pressure
> TcpExtTCPMemoryPressures        1700
> TcpExtTCPMemoryPressuresChrono  5209
> 
> v2: Used EXPORT_SYMBOL_GPL() instead of EXPORT_SYMBOL() as David
> instructed.
> 
> Signed-off-by: Eric Dumazet <edumazet@...gle.com>

Applied, thanks Eric.
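
For readers curious how a cumulative-duration counter like
TcpExtTCPMemoryPressuresChrono can be maintained, the sketch below shows one
plausible approach: record a jiffies timestamp when memory pressure is
entered, and on leaving add the elapsed time, converted to milliseconds, to
the chrono counter. This is an illustrative sketch only, not the code from
the patch; the names pressure_since, pressure_chrono_ms,
enter_memory_pressure_sketch() and leave_memory_pressure_sketch() are
hypothetical.

#include <linux/jiffies.h>
#include <linux/atomic.h>

/* Hypothetical state, not the kernel's real variables. */
static atomic_long_t pressure_since;     /* jiffies when pressure began, 0 if not under pressure */
static atomic_long_t pressure_chrono_ms; /* cumulative time under pressure, in ms */

static void enter_memory_pressure_sketch(void)
{
	unsigned long start = jiffies ? jiffies : 1;	/* never store 0: it means "not under pressure" */

	/* Only the first transition of a new pressure episode records the start time. */
	atomic_long_cmpxchg(&pressure_since, 0, start);
}

static void leave_memory_pressure_sketch(void)
{
	unsigned long start = atomic_long_xchg(&pressure_since, 0);

	/* If an episode was in progress, account its duration in ms. */
	if (start)
		atomic_long_add(jiffies_to_msecs(jiffies - start),
				&pressure_chrono_ms);
}

A transition counter with the existing TCPMemoryPressures semantics would be
incremented only when the cmpxchg above succeeds, i.e. once per episode,
which is why the two counters in the quoted nstat output measure different
things: one counts episodes, the other their total duration in ms.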
