Date:	Tue, 26 Aug 2008 21:01:01 +0200
From:	Hans de Goede <>
To:	Eric Dumazet <>
CC:	Dave Jones <>,
Subject: Re: cat /proc/net/tcp takes 0.5 seconds on x86_64

Eric Dumazet wrote:
> Dave Jones wrote:
>> Just had this bug reported against our development tree..
>> > [hans@...alhost devel]$ time cat /proc/net/tcp
>> > <snip>
>> > real    0m0.520s
>> > user    0m0.000s
>> > sys     0m0.446s
>> >
>> > That's amazingly slow, esp. as I only have 8 tcp connections open.
>> >
>> > Some maybe useful info: top reports a very high load (50%) from
>> > soft IRQs.
>> >
>> > Anyways, changing this to a kernel bug.
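(As an aside, the connection count claimed above can be verified by parsing /proc/net/tcp directly; a minimal sketch, relying on state 01 being TCP_ESTABLISHED as defined in the kernel's tcp_states.h:)

```python
# Count established TCP sockets by parsing /proc/net/tcp.
# The 4th whitespace-separated field ("st") holds the socket state
# as a hex byte; 01 == TCP_ESTABLISHED.
def count_established(path="/proc/net/tcp"):
    count = 0
    with open(path) as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            if fields[3] == "01":
                count += 1
    return count

if __name__ == "__main__":
    print(count_established())
```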
> I wonder why this qualifies as a "kernel bug". This is a well known 
> problem.

No it's not; /proc/net/tcp may be slow in general, but not *this* slow ...


> Time difference between /proc/net/tcp and netlink on a 4GB x86_64 machine :
> # dmesg | grep "TCP established hash"
> TCP established hash table entries: 262144 (order: 10, 4194304 bytes)
> # time cat /proc/net/tcp >/dev/null
> real    0m0.091s
> user    0m0.001s
> sys     0m0.090s
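(The measurement above can also be reproduced from userspace without cat; a minimal Python sketch, assuming a Linux /proc filesystem, that times one full read of the file the same way `time cat ... >/dev/null` does:)

```python
import time

def time_read(path="/proc/net/tcp"):
    """Return the wall-clock seconds taken to read the whole file once."""
    start = time.monotonic()
    with open(path) as f:
        f.read()
    return time.monotonic() - start

if __name__ == "__main__":
    print(f"{time_read():.3f}s")
```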

As quoted above, my idle x86_64, using the exact same hash table size and running 
2.6.27-rc2.git1, takes 0.520 seconds for that same command; that's a difference of 
more than a factor of 50!

This is not about /proc/net/tcp not being fast; this is about it having gotten 
slower by a factor of 50!

Also notice that this slowdown does not happen on i386.

Anyway, I'll try 2.6.27-rc4 and report back with the results.

