Date:	Thu, 17 Jul 2014 13:51:58 +0200
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Denys Fedoryshchenko <nuclearcat@...learcat.com>
Cc:	netdev@...r.kernel.org, kaber@...sh.net, davem@...emloft.net
Subject: Re: /proc/net/sockstat invalid memory accounting or memory leak in
 latest kernels?

On Thu, 2014-07-17 at 13:52 +0300, Denys Fedoryshchenko wrote:
> Hi
> 
> I noticed a TCP transfer rate slowdown after a few days of operation on
> kernel 3.15.3; after some digging I found this:

What was the previous version you were using without this problem?

> 
> balancer-backup ~ # cat /proc/net/sockstat
> sockets: used 118236
> TCP: inuse 122958 orphan 4986 tw 108010 alloc 123179 mem 1955339
> UDP: inuse 1 mem 0
> UDPLITE: inuse 0
> RAW: inuse 0
> FRAG: inuse 1 memory 2
> 
> after shutting down the program:
> balancer-backup ~ # cat /proc/net/sockstat
> sockets: used 47
> TCP: inuse 10552 orphan 10547 tw 142645 alloc 10552 mem 1877061
> UDP: inuse 0 mem 0
> UDPLITE: inuse 0
> RAW: inuse 0
> FRAG: inuse 0 memory 0
> 
> sysctl settings:
> net.ipv4.tcp_mem = 1767103      2045612 3068412
> 
> I recently restarted the process, and the mem value didn't change (these
> are sockets, so all of that memory should have been released). It also
> looks incorrect, because at the same time:
> balancer-backup ~ # cat /proc/meminfo
> MemTotal:       32939492 kB
> MemFree:        29876564 kB
> 
> Yet 1955339 * 4096 should be around 8 GB.
> Is it just an accounting issue, or is it a real memory leak?
> What other info can I provide to troubleshoot this more properly?
> I will also upgrade to 3.15.5 now, to see if the issue persists there.

I see nothing really wrong in your report.
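
For what it is worth, the "mem" field in /proc/net/sockstat (like the
tcp_mem sysctl) is a page count, so the 1955339 * 4096 conversion you did
is the right one. A quick, untested sketch of that arithmetic, with the
value taken from the quoted report:

/* Back-of-the-envelope check: sockstat "mem" is in pages, so multiply
 * by the page size to get bytes (value copied from the report above). */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	long page_size = sysconf(_SC_PAGESIZE);	/* 4096 here */
	long tcp_mem_pages = 1955339;		/* "mem" from sockstat */

	printf("TCP mem: %ld pages = %.2f GiB\n", tcp_mem_pages,
	       (double)tcp_mem_pages * page_size / (1 << 30));
	return 0;
}

That works out to about 7.5 GiB, i.e. the ~8 GB you computed.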

It looks like you have a lot of sockets still around after the shutdown of
the program. Each FIN-WAIT socket might be holding a lot of buffers in its
write queue, unless you use/force SO_LINGER or something similar.
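
In case it is useful, here is a minimal, untested sketch of what I mean by
forcing SO_LINGER (close_with_reset() is just an illustrative helper
name): with l_onoff = 1 and l_linger = 0, close() aborts the connection
with a RST and frees the write queue right away, instead of leaving a
FIN-WAIT socket holding the unsent data.

/* Sketch only: abortive close via SO_LINGER with a zero timeout. */
#include <sys/socket.h>
#include <unistd.h>

static int close_with_reset(int fd)
{
	struct linger lin = {
		.l_onoff  = 1,	/* enable lingering on close() */
		.l_linger = 0,	/* 0s timeout => send RST, drop the queue */
	};

	if (setsockopt(fd, SOL_SOCKET, SO_LINGER, &lin, sizeof(lin)) < 0)
		return -1;
	return close(fd);
}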

If you try the following command, you might see how many sockets have
outstanding data.

ss -amn
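
If ss is not available on that box, a rough cross-check (untested sketch,
IPv4 only) is to sum the tx_queue column of /proc/net/tcp, which roughly
tracks how much data each socket still has queued or unacknowledged:

/* Rough cross-check: sum the tx_queue column of /proc/net/tcp.
 * /proc/net/tcp6 has the same layout for IPv6 sockets. */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/net/tcp", "r");
	char line[512];
	unsigned long tx, total = 0;

	if (!f) {
		perror("/proc/net/tcp");
		return 1;
	}
	fgets(line, sizeof(line), f);		/* skip the header line */
	while (fgets(line, sizeof(line), f)) {
		/* fields: sl local_address rem_address st tx_queue:rx_queue ... */
		if (sscanf(line, "%*d: %*s %*s %*x %lx", &tx) == 1)
			total += tx;
	}
	fclose(f);
	printf("total tx_queue: %lu bytes\n", total);
	return 0;
}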



