Message-ID: <82f1bc98-df6d-2b0a-17e5-fa057563284e@gmail.com>
Date: Tue, 12 Feb 2019 16:42:04 -0800
From: Eric Dumazet <eric.dumazet@...il.com>
To: Stefano Brivio <sbrivio@...hat.com>,
Stephen Hemminger <stephen@...workplumber.org>
Cc: netdev@...r.kernel.org, Sabrina Dubroca <sd@...asysnail.net>
Subject: Re: [PATCH iproute2 net-next v2 3/4] ss: Buffer raw fields first,
then render them as a table
On 12/11/2017 04:46 PM, Stefano Brivio wrote:
> This allows us to measure the maximum field length for each
> column before printing fields and will permit us to apply
> optimal field spacing and distribution. Structure of the output
> buffer with chunked allocation is described in comments.
>
> Output is still unchanged, original spacing is used.
>
> Running over one million sockets with the -tul options (by
> modifying main() to loop 50,000 times over the *_show()
> functions with 10 UDP and 10 TCP sockets open, buffering the
> whole output, rendering it at the end and throwing it away)
> doesn't show significant changes in execution time on my
> laptop with an Intel i7-6600U CPU:
>
> - before this patch:
> $ time ./ss -tul > /dev/null
> real 0m29.899s
> user 0m2.017s
> sys 0m27.801s
>
> - after this patch:
> $ time ./ss -tul > /dev/null
> real 0m29.827s
> user 0m1.942s
> sys 0m27.812s
>
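[Editor's note: the quoted commit message describes a two-pass scheme: buffer every raw field string first, record the widest entry seen in each column, and only then render the table with per-column padding. The snippet below is a minimal, self-contained sketch of that general idea using fixed-size arrays; the names (add_field, render_table) and limits are hypothetical and are not the actual ss sources, which use chunked buffer allocation as noted above.]

/*
 * Sketch of two-pass table rendering: pass 1 buffers fields and
 * tracks per-column widths, pass 2 prints everything padded.
 * Illustrative only; not the ss(8) implementation.
 */
#include <stdio.h>
#include <string.h>

#define MAX_ROWS  4
#define MAX_COLS  3
#define MAX_FIELD 32

static char buf[MAX_ROWS][MAX_COLS][MAX_FIELD];	/* buffered field strings */
static int width[MAX_COLS];			/* widest entry per column */
static int nrows;

/* First pass: store a field and update its column width. */
static void add_field(int row, int col, const char *s)
{
	size_t len = strlen(s);

	snprintf(buf[row][col], MAX_FIELD, "%s", s);
	if ((int)len > width[col])
		width[col] = len;
	if (row + 1 > nrows)
		nrows = row + 1;
}

/* Second pass: render every row, padding each column to its maximum. */
static void render_table(void)
{
	for (int r = 0; r < nrows; r++) {
		for (int c = 0; c < MAX_COLS; c++)
			printf("%-*s ", width[c], buf[r][c]);
		printf("\n");
	}
}

int main(void)
{
	add_field(0, 0, "Netid"); add_field(0, 1, "State");  add_field(0, 2, "Local Address:Port");
	add_field(1, 0, "udp");   add_field(1, 1, "UNCONN"); add_field(1, 2, "0.0.0.0:68");
	add_field(2, 0, "tcp");   add_field(2, 1, "LISTEN"); add_field(2, 2, "127.0.0.1:631");

	render_table();
	return 0;
}

[The trade-off is that nothing is printed until every field has been collected, which is what the reply below objects to for very large socket counts.]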
I do not get it.
"ss -emoi " uses almost 1KB per socket.
10,000,000 sockets -> we need about 10GB of memory ???
This is a serious regression.
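[Editor's note: the arithmetic behind the objection, as a back-of-envelope check using the figures quoted in this message (the ~1KB-per-socket value is the one stated above, not a new measurement):]

#include <stdio.h>

int main(void)
{
	unsigned long long sockets  = 10000000ULL;	/* 10 million sockets */
	unsigned long long per_sock = 1024ULL;		/* ~1KB of buffered output each */
	unsigned long long total    = sockets * per_sock;

	/* ~10,240,000,000 bytes held in memory before anything is rendered */
	printf("%.1f GB\n", total / 1e9);		/* prints 10.2 GB */
	return 0;
}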