Message-ID: <73a1d5d0-8077-450c-a38f-c1b027088258@arm.com>
Date: Thu, 16 Oct 2025 18:17:35 +0100
From: Robin Murphy <robin.murphy@....com>
To: Jason Gunthorpe <jgg@...pe.ca>
Cc: Mostafa Saleh <smostafa@...gle.com>, iommu@...ts.linux.dev,
 linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
 will@...nel.org, joro@...tes.org, praan@...gle.com
Subject: Re: [PATCH v5 4/4] iommu/io-pgtable-arm-selftests: Use KUnit

On 2025-10-15 4:10 pm, Jason Gunthorpe wrote:
> On Wed, Oct 15, 2025 at 02:51:15PM +0100, Robin Murphy wrote:
>> On 2025-10-15 10:53 am, Mostafa Saleh wrote:
>>>> Conversely, this is infrastructure, not an actual test of expected
>>>> io-pgtable behaviour, so I think just:
>>>>
>>>> 	cfg.iommu_dev = kunit_device_register(test, "io-pgtable-test");
>>>> 	if (IS_ERR(cfg.iommu_dev))
>>>> 		return;
>>>>
>>>> (it doesn't return NULLs either)
>>>>
>>>
> >>> Yes, I was not sure about this one. When checking the code base, every test
> >>> handles kunit_device_register() failure differently; this seemed the
> >>> strictest one, so I used it. I will update that in the next version.
>>
>> Yeah, my impression is that those have likely been copied from the
>> lib/kunit/ code where it is actually testing its own API. I've now sent a
>> patch for the example in the docs...
> 
> I think any failure to run the test should be reported, either with an
> err or a skip. Tests that didn't do anything and silently report
> success are not a good design.

Sure, a test that hasn't completed hasn't succeeded, but it hasn't 
failed either, in the sense of there being no fault found in the actual 
code being tested. Hence it seems intuitive to me that reporting a 
failure in the test infrastructure itself as if it were incorrect 
behaviour of the target code is not right.

In this case AFAICS kunit_device_register() can only fail due to OOM or 
something unexpectedly messed up in the kobject/sysfs hierarchy, all of 
which should already scream (and represent the system being sufficiently 
hosed that any actual test results probably no longer matter anyway) - 
otherwise I would have suggested a kunit_err() message too.

> Looking at the existing users I see a lot are in init functions, so
> they propagate an error code and fail the init.

Indeed I think this one ultimately wants to end up in an init function 
too once everything is unpicked, but for now...
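As a sketch of what I mean (assuming the eventual refactor keeps the
"io-pgtable-test" device name; names here are illustrative, not the
actual patch), a per-test init function could let the registration
failure propagate as an error code rather than be handled in each case:

	static int io_pgtable_test_init(struct kunit *test)
	{
		struct device *dev;

		dev = kunit_device_register(test, "io-pgtable-test");
		if (IS_ERR(dev))
			return PTR_ERR(dev);

		/* Stash the device for the test cases to retrieve. */
		test->priv = dev;
		return 0;
	}

with each test case then fetching the device from test->priv.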

> And the rest inside tests do something like this:
> 
> 	dev = kunit_device_register(test, dev_name);
> 	KUNIT_ASSERT_FALSE_MSG(test, IS_ERR(dev),
> 			       "Cannot register test device\n");

The confusion is that most of those KUNIT_ASSERTs are where the KUnit 
API *is* the thing under test, or have clearly copy-pasted the same 
example soon after, and some are in init functions which could fail 
gracefully, so it's far from clear how much considered intent they 
actually represent. In fact AFAICS it's only the two cases of that exact 
pattern in overflow and fortify that hint at being an older notion.

However, on further inspection, I see that there are also about as many 
examples of kunit_skip() being used when failing to allocate per-test 
resources, so I guess that's the pattern I was after, hurrah! I wouldn't 
consider the difference between "test skipped because it unexpectedly 
couldn't run" and "test skipped because it knowingly wouldn't have been 
meaningful or possible to run" significant enough to be worth trying to 
split further.
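For the record, the shape I have in mind is roughly (sketch only, not
the actual patch; the config setup is elided):

	static void io_pgtable_test(struct kunit *test)
	{
		struct io_pgtable_cfg cfg = {};

		cfg.iommu_dev = kunit_device_register(test, "io-pgtable-test");
		if (IS_ERR(cfg.iommu_dev))
			kunit_skip(test, "failed to register test device");

		/* ... actual io-pgtable exercises ... */
	}

i.e. kunit_skip() both marks the test as skipped with a message and
aborts it, so nothing downstream runs against a bogus device, yet the
result isn't conflated with a genuine failure of the code under test.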

Thanks,
Robin.
