Message-ID: <3d1c67c8-7dfe-905b-4548-dae23592edc5@web.de>
Date: Sat, 1 Jun 2019 10:36:40 +0200
From: Markus Elfring <Markus.Elfring@....de>
To: Josh Hunt <johunt@...mai.com>, David Ahern <dsahern@...il.com>,
Stephen Hemminger <stephen@...workplumber.org>,
netdev@...r.kernel.org
Cc: LKML <linux-kernel@...r.kernel.org>
Subject: Re: ss: Checking efficient analysis for network connections
> Multi-line output in ss makes it difficult to search for things with grep.
I am more concerned about processing the provided information
efficiently. There are further software development possibilities
to consider, aren't there?
The chosen output formats considerably influence how the data can be consumed.
* The information is exported with runs of space characters as separators.
  Other field delimiters would occasionally be nicer.
* Regular expressions are useful for extracting data from text lines.
  But other programming interfaces are safer for working with structured data.
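Both points above can be illustrated with a small sketch. The sample line
below mimics typical `ss -t` columns; the exact layout is an assumption and
varies between ss versions and options, which is exactly the fragility at
issue.

```shell
# Sketch: extracting fields from one ss-like text line with awk.
# awk splits on runs of whitespace, so the extra padding spaces collapse,
# but any future column containing a space would silently shift the fields.
echo 'ESTAB  0      0      192.168.0.2:54321   93.184.216.34:443' |
awk '{
    n = split($5, peer, ":")   # the last colon-separated part is the port
    print $1, peer[n]          # connection state and peer port
}'
```

A structured interface would hand the caller these values directly instead
of requiring such text surgery.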
The mentioned program can be used to filter the provided input.
But if you would like to work with information from the filtered records,
the calling process has to repeat some of the parsing so that text items
are converted into structured elements again.
Would you like to avoid such duplicate work?
I imagine then that it would also be nicer to perform filtering based on
configurable constraints directly at the data source.
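ss already accepts filter expressions of this kind, so some selection can
happen at the data source instead of in a later grep pass; the state and
port below are illustrative values only.

```shell
# Let ss itself select established TCP connections to port 443;
# -H suppresses the header line, which simplifies further processing.
ss -H -t state established '( dport = :443 )'
```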
How much more can Linux help in this software area?
What do you think about such ideas?
Regards,
Markus