Message-ID: <alpine.DEB.2.21.2010130850410.14590@felia>
Date: Tue, 13 Oct 2020 09:16:27 +0200 (CEST)
From: Lukas Bulwahn <lukas.bulwahn@...il.com>
To: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
cc: Lukas Bulwahn <lukas.bulwahn@...il.com>,
Alan Stern <stern@...land.harvard.edu>,
Sudip Mukherjee <sudipm.mukherjee@...il.com>,
linux-kernel@...r.kernel.org, linux-safety@...ts.elisa.tech,
linux-usb@...r.kernel.org
Subject: Re: [linux-safety] [PATCH] usb: host: ehci-sched: add comment about
find_tt() not returning error

On Tue, 13 Oct 2020, Greg Kroah-Hartman wrote:
> On Tue, Oct 13, 2020 at 07:37:34AM +0200, Lukas Bulwahn wrote:
> >
> >
> > On Tue, 13 Oct 2020, Greg Kroah-Hartman wrote:
> >
> > > On Mon, Oct 12, 2020 at 08:25:30PM +0200, Lukas Bulwahn wrote:
> > > >
> > > >
> > > > On Mon, 12 Oct 2020, Greg Kroah-Hartman wrote:
> > > >
> > > > > On Mon, Oct 12, 2020 at 05:10:21PM +0200, Lukas Bulwahn wrote:
> > > > > > And for the static analysis finding, we need a way to ignore this
> > > > > > finding without simply ignoring all findings, or ignoring new findings
> > > > > > that merely look similar to the original one but are actually valid.
> > > > >
> > > > > Then I suggest you fix the tool that "flagged" this; surely this is not
> > > > > the only thing it detected with a test like this, right?
> > > > >
> > > > > What tool reported this?
> > > > >
> > > >
> > > > Sudip and I are following up on clang-analyzer findings.
> > > >
> > > > On linux-next, there is a new build target, 'make clang-analyzer', which
> > > > outputs a bunch of warnings, just as you would expect from such static
> > > > analysis tools.
> > >
> > > Why not fix the things that it finds that are actually issues? If there
> > > are no actual issues found, then perhaps you should use a better tool? :)
> > >
> >
> > Completely agree. That is why I was against adding comments here and
> > elsewhere just to have the "good feeling of doing something" after the
> > tool reported a warning and we spent some time understanding the code,
> > only to conclude that we now understand the code better than the tool.
> >
> > If you know a better tool, we will use it :) Unfortunately, there is no
> > easy way of finding out whether a tool reports only false positives and
> > not a single true positive among 1000 reports...
>
> Who is "forcing" you to use any tool? What is your goal here?
>

No force involved.

For some of us, it is 'just for fun', out of an interest in understanding
the capabilities of the existing static analysis tools. To understand
their capabilities and limits, we simply go through the warnings and try
to determine whether they are true positives (which deserve a patch) or
false positives (which we at least try to document sensibly, for later
statistics and for learning about systematic tool weaknesses).
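
For reference, running that target on a linux-next tree looks roughly
like this (just a sketch: it assumes a working clang toolchain, and the
exact invocation may vary between trees and configurations):

    # configure and build once, so the .cmd files needed for the
    # compilation database exist
    make LLVM=1 defconfig
    make LLVM=1 -j$(nproc)
    # run the clang static analyzer over the tree and print its warnings
    make LLVM=1 clang-analyzer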

Some others actually believe that the use of static analysis tools
increases software quality, and that ONLY IF a static analysis tool is
used can a specific level of software quality be achieved; they want to
prove that the software reaches a certain level that way. (I do not
understand that argument, but some have been repeating it quite often
around me. It seems to come from a specific interpretation of safety
standards that claim to offer methods for predicting the absence of bugs
up to a certain confidence.)

I am doing it for the fun and for learning about the tools, and I am not
such a believer; but those others are bound by their beliefs until they
understand what static analysis tools and their janitors really already
contribute to kernel development, and where the real gaps might be.

I hope that helps convey a bit of the motivation. Consider us
kernel newbies :)

Lukas