Message-ID: <20250313091714.115978b6@pumpkin>
Date: Thu, 13 Mar 2025 09:17:14 +0000
From: David Laight <david.laight.linux@...il.com>
To: Andy Shevchenko <andriy.shevchenko@...ux.intel.com>
Cc: linux-kernel@...r.kernel.org, Andrew Morton <akpm@...ux-foundation.org>,
 Arnd Bergmann <arnd@...db.de>, Linus Torvalds
 <torvalds@...ux-foundation.org>, Christophe Leroy
 <christophe.leroy@....fr>, Rasmus Villemoes <linux@...musvillemoes.dk>,
 nnac123@...ux.ibm.com, horms@...nel.org
Subject: Re: [PATCH next 2/8] test_hexdump: Create test output data from the
 binary input data buffer

On Wed, 12 Mar 2025 21:36:04 +0200
Andy Shevchenko <andriy.shevchenko@...ux.intel.com> wrote:

> On Wed, Mar 12, 2025 at 07:28:11PM +0000, David Laight wrote:
> > On Mon, 10 Mar 2025 10:31:10 +0200
> > Andy Shevchenko <andriy.shevchenko@...ux.intel.com> wrote:  
> > > On Sat, Mar 08, 2025 at 09:34:46AM +0000, David Laight wrote:  
> > > > Using the same data that is passed to hex_dump_to_buffer() lets
> > > > the tests be expanded to test different input data bytes.    
> > > 
> > > Do we really need to kill the static test cases?
> > > Are they anyhow harmful?  
> > 
> > I was asked to add some extra tests for other byte values.  
> 
> Right and thanks for doing that!
> 
> > The static result buffers just get in the way.
> > 
> > They are also not necessary since the tests are comparing the output
> > of two (hopefully) different implementations and checking they are
> > the same.  
> 
> Not necessary doesn't mean harmful or working wrong. I would leave them as is
> and just add dynamic test cases on top. Static data is kinda randomised, but
> at the same time it's always the same through the test runs. IIRC your dynamic
> case generates the expected output and hence uses the code that also needs to
> be tested strictly speaking.
> 

The old data wasn't really randomised at all.
It tended to run a selected set of tests lots of times.
As an example, there is no point randomly selecting between running the
'rowsize == 16' and 'rowsize == 32' tests several times when you can just
run each test once.

While that can make sense for some kinds of tests, it is pointless here.
Much better to always run the same tests and to always cover the 'interesting'
cases - there are lots of very boring cases that all go through the same code
paths pretty much regardless of the implementation.
It is the corner cases that matter.
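
Roughly what I mean - just a sketch, where check_one() is a made-up helper
that runs hex_dump_to_buffer() with the given parameters and compares the
output against the independently built expected string:

	static void __init test_hexdump_params(const void *data, size_t len)
	{
		/* Cover each interesting rowsize/groupsize pair exactly once. */
		static const int rowsizes[] = { 16, 32 };
		static const int groupsizes[] = { 1, 2, 4, 8 };
		int r, g;

		for (r = 0; r < ARRAY_SIZE(rowsizes); r++)
			for (g = 0; g < ARRAY_SIZE(groupsizes); g++)
				check_one(data, len, rowsizes[r], groupsizes[g]);
	}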

At one point I was testing all the 'bufsize * len * rowsize * groupsize'
combinations.
That doesn't actually take long even though there are a lot of tests.
But there is just no point testing all the cases where the buffer is long
enough for the data. The boundary conditions are enough - and there are
actually under a dozen of them. The proposed tests do about 1000 just to
save working out which ones matter.
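
For the truncation side it would be something like this - again only a
sketch, with check_trunc() being a made-up helper that checks both the
(possibly truncated) output text and the return value:

	/*
	 * Only probe linebuf lengths around the boundary: 'needed' is the
	 * length a full, untruncated dump of this line requires.
	 */
	for (buflen = needed > 8 ? needed - 8 : 0; buflen <= needed + 1; buflen++)
		check_trunc(data, len, rowsize, groupsize, buflen);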

	David

