Message-ID: <Y72casXMXhedox8r@bombadil.infradead.org>
Date:   Tue, 10 Jan 2023 09:12:10 -0800
From:   Luis Chamberlain <mcgrof@...nel.org>
To:     Petr Mladek <pmladek@...e.com>
Cc:     Zhen Lei <thunder.leizhen@...wei.com>,
        linux-kernel@...r.kernel.org, linux-modules@...r.kernel.org,
        Anders Roxell <anders.roxell@...aro.org>
Subject: Re: [PATCH v2] kallsyms: Fix sleeping function called from invalid
 context when CONFIG_KALLSYMS_SELFTEST=y

On Tue, Jan 10, 2023 at 10:57:21AM +0100, Petr Mladek wrote:
> On Mon 2023-01-09 16:12:53, Luis Chamberlain wrote:
> > On Mon, Jan 09, 2023 at 02:40:27PM +0100, Petr Mladek wrote:
> > > Why are they hardly comparable?
> > > 
> > > 1. The speed depends on the number of loaded modules
> > >    and the number of symbols. It depends heavily on the
> > >    configuration that was used to build the kernel.
> > > 
> > > 2. The test runs only once. As a result it is hard to judge
> > >    how big the noise is.
> > > 
> > > 3. The noise might depend on the size and state of CPU caches.
> > > 
> > > 
> > > I personally vote for removing this selftest!
> > 
> > Even so, just as with testing a filesystem with different types of
> > configurations, at least testing a few configs helps, and it's what
> > we do. Then, if anyone ever wanted to try to improve symbol lookup
> > performance, today they have no easy way to measure things. How
> > would they go about comparing performance without this selftest?
> 
> How many people care about kallsyms performance, please?
> Is it worth spending time on implementing and maintaining such a
> selftest?

If someone is willing to *try* to put in the effort as they are
optimizing it, then by all means it is a welcome effort.

> Yes, Zhen wanted to make it faster. But how likely is it that anyone
> else will try to make it even better? Do we need to spend time on
> this in the meantime?

I can't say.

> > This selftest helps generically with that *and* lets us attach to it
> > any sanity checks we may wish to add for those APIs which we just
> > don't want to carry upstream.
> 
> From my POV, it would be much more important to check if the API
> works as expected. I mean that it gives the right results.

Sure, but that's just one aspect of it. And before the selftests we
didn't have that either.
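
FWIW a functional sanity check would not need to be much code. A
minimal sketch of what I mean (not taken from kallsyms_selftest.c;
the symbol name and the helper are just illustrative):

	#include <linux/errno.h>
	#include <linux/kallsyms.h>
	#include <linux/string.h>

	/* Resolve a symbol we know exists, then verify that
	 * sprint_symbol() on the returned address round-trips
	 * back to the same name.
	 */
	static int check_kallsyms_roundtrip(void)
	{
		char buf[KSYM_SYMBOL_LEN];
		unsigned long addr;

		addr = kallsyms_lookup_name("schedule"); /* example symbol */
		if (!addr)
			return -ENOENT;

		sprint_symbol(buf, addr);
		if (strncmp(buf, "schedule", strlen("schedule")))
			return -EINVAL;

		return 0;
	}

Something like that would give the selftest framework an actual
pass/fail result to report.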

> I am not sure that performance is that important to spend more time
> on this one.
> 
> Also I am not sure that selftests are the right location for
> performance tests. My understanding is that it is a framework for
> functional testing: it shows whether the tests passed or not. But
> performance tests do not give "pass or not" results.

Selftests have no rules; you do what you want, for your own use. It
is up to you for your own subsystem.

But I do agree that if we want performance data, it should be
reliable, and so if that cannot be done today then it is best to just
remove the test until it can be done.
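
For reference, the kind of thing I'd consider "reliable": run the same
lookup loop several times and report the spread, not a single number.
A rough sketch only (the run and iteration counts and the symbol are
made up, this is not what the selftest currently does):

	#include <linux/kallsyms.h>
	#include <linux/kernel.h>
	#include <linux/ktime.h>
	#include <linux/limits.h>
	#include <linux/printk.h>

	/* Time several runs of the same lookup so the gap between the
	 * fastest and slowest run exposes the noise a single run hides.
	 */
	static void bench_lookup(const char *name)
	{
		s64 best = S64_MAX, worst = 0, ns;
		ktime_t t0;
		int run, i;

		for (run = 0; run < 10; run++) {
			t0 = ktime_get();
			for (i = 0; i < 1000; i++)
				kallsyms_lookup_name(name);
			ns = ktime_to_ns(ktime_sub(ktime_get(), t0)) / 1000;
			if (ns < best)
				best = ns;
			if (ns > worst)
				worst = ns;
		}

		pr_info("%s: best %lld ns, worst %lld ns per lookup\n",
			name, best, worst);
	}

If best and worst disagree wildly, the number is telling you about the
machine, not about kallsyms.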

  Luis
