Message-ID: <20190213135534.01dacee5@shemminger-XPS-13-9360>
Date: Wed, 13 Feb 2019 13:55:34 -0800
From: Stephen Hemminger <stephen@...workplumber.org>
To: Stefano Brivio <sbrivio@...hat.com>
Cc: Eric Dumazet <eric.dumazet@...il.com>, netdev@...r.kernel.org,
Sabrina Dubroca <sd@...asysnail.net>,
David Ahern <dsahern@...il.com>
Subject: Re: [PATCH iproute2 net-next v2 3/4] ss: Buffer raw fields first,
then render them as a table
On Wed, 13 Feb 2019 22:17:16 +0100
Stefano Brivio <sbrivio@...hat.com> wrote:
> On Wed, 13 Feb 2019 09:31:03 -0800
> Eric Dumazet <eric.dumazet@...il.com> wrote:
>
> > On 02/13/2019 12:37 AM, Stefano Brivio wrote:
> > > On Tue, 12 Feb 2019 16:42:04 -0800
> > > Eric Dumazet <eric.dumazet@...il.com> wrote:
> > >
> > >> I do not get it.
> > >>
> > >> "ss -emoi " uses almost 1KB per socket.
> > >>
> > >> 10,000,000 sockets -> we need about 10GB of memory ???
> > >>
> > >> This is a serious regression.
> > >
> > > I guess this is rather subjective: the worst case I considered back then
> > > was the output of 'ss -tei0' (less than 500 bytes per socket) for one
> > > million sockets, which gives 500M of memory; that should in turn be fine
> > > on a machine handling one million sockets.
> > >
> > > Now, if 'ss -emoi' on 10 million sockets is an actual use case (out of
> > > curiosity: how are you going to process that output? Would JSON help?),
> > > I see two easy options to solve this:
> >
> >
> > ss -temoi | parser (written in shell or awk or whatever...)
> >
> > This is a use case; I just got bitten because running the ss command
> > actually OOMed my container while trying to debug a busy GFE.
> >
> > The host itself can have 10,000,000 TCP sockets, but usually sysadmin shells
> > run in a container with no more than 500 MB available.
> >
> > Otherwise, it would be too easy for a buggy program to OOM the whole machine
> > and have angry customers.
> >
> > >
> > > 1. flush the output every time we reach a given buffer size (1M
> > >    perhaps); a sketch of this follows below. This might make the
> > >    resulting blocks slightly unaligned, with an occasional loss of
> > >    readability on lines occurring roughly every 1k to 10k sockets,
> > >    even though after 1k sockets the column sizes won't change much
> > >    (it still looks better than the original output), and I don't
> > >    expect anybody to actually scroll that output
> > >
> > > 2. add a switch for unbuffered output, but then you need to remember to
> > > pass it manually, and the whole output would be as bad as the
> > > original in case you need the switch.
> > >
> > > I'd rather go with 1., it's easy to implement (we already have partial
> > > flushing with '--events') and it looks like a good compromise on
> > > usability. Thoughts?
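
A minimal standalone C sketch of option 1, under the assumptions above (the
buffer layout and the out_line()/render_and_flush() names are hypothetical,
not the actual ss internals): collect formatted lines in a growable buffer
and flush whenever roughly 1M is pending, so memory use stays bounded no
matter how many sockets are dumped.

/*
 * Sketch only: buffer lines, flush once ~1M is accumulated.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define FLUSH_THRESHOLD	(1024 * 1024)	/* "1M perhaps", as suggested above */

static char *buf;
static size_t buf_len, buf_cap;

static void render_and_flush(void)
{
	/* In ss this is where column widths would be computed from the
	 * buffered fields and the block rendered as a table; here we
	 * simply write the buffer out and start over.
	 */
	fwrite(buf, 1, buf_len, stdout);
	fflush(stdout);
	buf_len = 0;
}

static void out_line(const char *line)
{
	size_t len = strlen(line);

	if (buf_len + len > buf_cap) {
		buf_cap = (buf_len + len) * 2;
		buf = realloc(buf, buf_cap);
		if (!buf)
			exit(1);
	}
	memcpy(buf + buf_len, line, len);
	buf_len += len;

	/* Bound memory use: flush each time the threshold is crossed. */
	if (buf_len >= FLUSH_THRESHOLD)
		render_and_flush();
}

int main(void)
{
	char line[64];
	int i;

	/* Pretend every socket contributes one line of output. */
	for (i = 0; i < 100000; i++) {
		snprintf(line, sizeof(line), "socket %d state ESTAB\n", i);
		out_line(line);
	}
	render_and_flush();	/* flush the final, partially filled block */
	free(buf);
	return 0;
}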
> > >
> >
> > 1 seems fine, but a switch for 'please do not try to format' would also be welcome.
> >
> > I wonder why we try to 'format' when stdout is a pipe or a regular file.
>
> On second thought: what about | less, or | grep [ports],
> or > readable.log? I guess those might also be rather common use cases;
> what do you think?
>
> I'm tempted to skip this for the moment and just go with option 1.
>
What I would favor:
* use big enough columns so that, in the common case, everything lines up fine
* if a column is too wide, just print that element wider (which is what printf %Ns does)
and
* add JSON output for programs that want to parse
* use print_uint etc. for that
The buffering patch (in iproute2-next) can/will be reverted.
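
A small standalone illustration of the fixed-width column approach (plain
libc printf, not the print_uint etc. helpers mentioned above for JSON; the
widths and addresses are made up): give each column a generous width and
let printf simply exceed it when a value is wider, which only costs
alignment on that one line.

/* Field width %-35s pads short values and grows past 35 characters
 * for the rare oversized value, which is what "%Ns" does.
 */
#include <stdio.h>

int main(void)
{
	printf("%-6s %-35s %-35s\n", "State", "Local Address:Port", "Peer Address:Port");
	printf("%-6s %-35s %-35s\n", "ESTAB", "192.0.2.1:22", "198.51.100.7:50422");
	/* Wider than 35 characters: the column just expands on this line. */
	printf("%-6s %-35s %-35s\n", "ESTAB",
	       "[2001:db8:1234:5678:9abc:def0:1111:2222]:443",
	       "[2001:db8::1]:39074");
	return 0;
}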