Message-ID: <20241115211523.GB599524.vipinsh@google.com>
Date: Fri, 15 Nov 2024 13:15:23 -0800
From: Vipin Sharma <vipinsh@...gle.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Andrew Jones <ajones@...tanamicro.com>, kvm@...r.kernel.org,
	kvmarm@...ts.linux.dev, kvm-riscv@...ts.infradead.org,
	linux-arm-kernel@...ts.infradead.org,
	Paolo Bonzini <pbonzini@...hat.com>,
	Anup Patel <anup@...infault.org>,
	Christian Borntraeger <borntraeger@...ux.ibm.com>,
	Janosch Frank <frankja@...ux.ibm.com>,
	Claudio Imbrenda <imbrenda@...ux.ibm.com>,
	Marc Zyngier <maz@...nel.org>,
	Oliver Upton <oliver.upton@...ux.dev>, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 0/1] KVM selftests runner for running more than just
 default

On 2024-11-14 09:42:32, Sean Christopherson wrote:
> On Fri, Nov 08, 2024, Andrew Jones wrote:
> > On Wed, Nov 06, 2024 at 09:06:39AM -0800, Sean Christopherson wrote:
> > > On Fri, Nov 01, 2024, Vipin Sharma wrote:
> > > > Phase 3: Provide collection of interesting configurations
> > > > 
> > > > Specific individual constructs can be combined in a meaningful way to
> > > > provide interesting configurations to run on a platform. For example,
> > > > user doesn't need to specify each individual configuration instead,
> > > > some prebuilt configurations can be exposed like
> > > > --stress_test_shadow_mmu, --test_basic_nested
> > > 
> > > IMO, this shouldn't be baked into the runner, i.e. should not surface as dedicated
> > > command line options.  Users shouldn't need to modify the runner just to bring
> > > their own configuration.  I also think configurations should be discoverable,
> > > e.g. not hardcoded like KUT's unittest.cfg.  A very real problem with KUT's
> > > approach is that testing different combinations is frustratingly difficult,
> > > because running a testcase with different configuration requires modifying a file
> > > that is tracked by git.

I was thinking of folks who send upstream patches; they might not have
interesting configurations of their own to test with. If we don't
provide anything, they might not be able to exercise different scenarios.

I do agree that command line options might not be a great choice for
this; we should keep the options granular.

What if we provide a shell script which has some runner commands with
different combinations? There should also be a default configuration to
ease patch testing for folks who might not be aware of the
configurations which maintainers generally use.
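
As a rough sketch (script layout and option names invented for
illustration here; the real ones would come from the runner), it could
just be a list of runner invocations:

    #!/bin/sh
    # Hypothetical combinations: a default everyone can run, plus a
    # few of the configurations maintainers typically exercise.
    ./runner --config configs/default.json
    ./runner --config configs/shadow_mmu_stress.json
    ./runner --config configs/nested_basic.json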

The end goal is to give patch submitters good confidence that they have
done adequate testing.

> > 
> > We have support in KUT for environment variables (which are stored in an
> > initrd). The feature hasn't been used too much, but x86 applies it to
> > configuration parameters needed to execute tests from grub, arm uses it
> > for an errata framework allowing tests to run on kernels which may not
> > include fixes to host-crashing bugs, and riscv is using them quite a bit
> > for providing test parameters and test expected results in order to allow
> > SBI tests to be run on a variety of SBI implementations. The environment
> > variables are provided in a text file which is not tracked by git. kvm
> > selftests can obviously also use environment variables by simply sourcing
> > them first in wrapper scripts for the tests.
> 
> Oh hell no! :-)
> 
> For reproducibility, transparency, determinism, environment variables are pure
> evil.  I don't want to discover that I wasn't actually testing what I thought I
> was testing because I forgot to set/purge an environment variable.  Ditto for
> trying to reproduce a failure reported by someone.
> 
> KUT's usage to adjust to the *system* environment is somewhat understandable.
> But for KVM selftests, there should be absolutely zero reason to need to fall
> back to environment variables.  Unlike KUT, which can run in a fairly large variety
> of environments, e.g. bare metal vs. virtual, different VMMs, different firmware,
> etc., KVM selftests effectively support exactly one environment.
> 
> And unlike KUT, KVM selftests are tightly coupled to the kernel.  Yes, it's very
> possible to run selftests against different kernels, but I don't think we should
> go out of our way to support such usage.  And if an environment needs to skip a
> test, it should be super easy to do so if we decouple the test configuration
> inputs from the test runner.

Also, keeping things out of tree won't help other developers much. I
want the majority of the configurations that maintainers/regular
contributors keep locally to be upstreamed and consolidated.

> 
> > > There are underlying issues with KUT that essentially necessitate that approach,
> > > e.g. x86 has several testcases that fail if run without the exact right config.
> > > But that's just another reason to NOT follow KUT's pattern, e.g. to force us to
> > > write robust tests.
> > > 
> > > E.g. instead of per-config command line options, let the user specify a file,
> > > and/or a directory (using a well known filename pattern to detect configs).
> > 
> > Could also use an environment variable to specify a file which contains
> > a config in a test-specific format if parsing environment variables is
> > insufficient or awkward for configuring a test.
> 
> There's no reason to use an environment variable for this.  If we want to support
> "advanced" setup via a test configuration, then that can simply go in configuration
> file that's passed to the runner.

Can you specify what this test configuration file/directory will look
like? Also, is it going to be one file per test? That might become ugly
quickly.
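
Just to make the question concrete, is it something like this? (Purely
illustrative; the layout, filename pattern, and "requires" syntax below
are guesses on my part, though dirty_log_perf_test's -b/-i options are
real.)

    configs/
        dirty_log_perf_test.cfg
        x86/vmx_pmu_caps_test.cfg

    # configs/dirty_log_perf_test.cfg
    args: -b 4G -i 10
    requires: kvm.tdp_mmu=Y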

This also raises the question of how to handle test execution when we
are using different command line parameters for individual tests which
need some specific environment.

Some parameters will need a very specific module or sysfs setting which
might conflict with other tests. This is why I had "test_suite" in my
json, which can provide module, sysfs, or other host settings. But this
also added the cost of duplicating tests across suites.
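
For reference, the shape was roughly like this (simplified; the field
names here are illustrative, not the exact RFC format):

    {
        "test_suite": {
            "name": "shadow_mmu",
            "module_params": { "kvm": "tdp_mmu=N" },
            "sysfs": { "/sys/kernel/mm/transparent_hugepage/enabled": "never" },
            "tests": [ "dirty_log_perf_test" ]
        }
    }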

I guess the shell script I mentioned a few paragraphs above can have
some specific runner invocations which set up a test's specific
requirements and then execute just that test (the RFC runner has the
capability to execute a specific test).

Open to suggestions on a better approach.

