Date:   Tue, 4 Apr 2023 14:48:47 +0200
From:   David Hildenbrand <david@...hat.com>
To:     Peter Xu <peterx@...hat.com>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Andrea Arcangeli <aarcange@...hat.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Nadav Amit <nadav.amit@...il.com>,
        Mike Kravetz <mike.kravetz@...cle.com>,
        Axel Rasmussen <axelrasmussen@...gle.com>,
        Leonardo Bras Soares Passos <lsoaresp@...hat.com>,
        Mike Rapoport <rppt@...ux.vnet.ibm.com>
Subject: Re: [PATCH 16/29] selftests/mm: UFFDIO_API test

On 03.04.23 22:24, Peter Xu wrote:
> On Mon, Apr 03, 2023 at 09:06:26PM +0200, David Hildenbrand wrote:
>> On 03.04.23 18:43, Peter Xu wrote:
>>> On Mon, Apr 03, 2023 at 09:59:50AM +0200, David Hildenbrand wrote:
>>>> There is ksft_print_msg, ksft_test_result, ksft_test_result_fail, ... do we
>>>> maybe want to convert properly to ksft while already at it?
>>>
>>> Yes, I started by trying to use that but found that there isn't a lot
>>> that I can leverage.
>>>
>>> Starting with ksft_set_plan() - I think this is something we call first. I
>>> want the current unit test to skip everything if the UFFD API test fails
>>> here, so I'd need to feed a dynamic number of planned tests into
>>> ksft_set_plan().  But I only know that number after running the 1st test..
>>
>> In cow.c I did that. Getting the number of tests right can be challenging
>> indeed.
> 
> IMHO the major thing is not that it's hard to set; it's that there's
> hardly any benefit I can see in having that calculated at the start of a
> test.

Thinking about it, I believe I figured out why it makes sense. By 
specifying upfront how many tests you intend to run, the framework can 
check whether the number of pass/fail/skip results reported during the 
test run actually matches that plan.

This requires a different way of writing tests: each test case is 
supposed to trigger the exact same number of pass/fail/skip on each 
possible path.

If the numbers don't add up, that can indicate a bug in your tests, for 
example, not triggering a fail on some exit path. While the test 
execution might report "success", there is actually a hidden issue in 
your test.
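
For example, roughly (untested sketch; test_a()/test_b() and the include 
path are made up, just to illustrate the idea):

#include "../kselftest.h"

/* Dummy checks, just for illustration. */
static int test_a(void) { return 0; }
static int test_b(void) { return 0; }

int main(void)
{
	ksft_print_header();
	/* Announce upfront that exactly 2 results will be reported. */
	ksft_set_plan(2);

	ksft_test_result(test_a() == 0, "test A\n");

	/*
	 * If an error path returned here without reporting a
	 * pass/fail/skip for test B, only 1 of the planned 2 results
	 * would show up, and the mismatch becomes visible in the TAP
	 * output and in the exit code of ksft_finished().
	 */
	ksft_test_result(test_b() == 0, "test B\n");

	ksft_finished();
}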

I started using the framework for all new tests because I think it's 
quite nice; at least things will be a bit more consistent and the tests 
possibly easier to maintain.


Having said that, I can understand why one might not want to use it, and 
that there are some things in there that could be improved.

> 
> There's one thing it can do: when calling ksft_finished() it can be used
> to know whether all tests were run. But sadly here we're calculating
> everything just to make it match.. so it loses its last purpose.. IMHO.
> 
>>
>> Basic "feature availability" checks would go first (is uffd even around?),
>> and depending on that you can set the plan.
>>
>> For everything else, you can report a skip instead of running the test, so
>> it will still be counted towards the plan.
>>
>>>
>>> I can call ksft_set_plan() later than this, but then it misses a few tests,
>>> which also looks weird.
>>
>> Yeah, it would be nice to simply make ksft_set_plan() optional. For example,
>> make ksft_print_cnts() skip the comparison if ksft_plan == 0. At least
>> ksft_exit_skip() handles that already in a decent way (below).
>>
>>>
>>> It also doesn't really seem to help anything at all and isn't obvious to
>>> use.  E.g. ksft_finished() will reference ksft_plan and then trigger
>>> ksft_exit_fail(), but here I want to make it SKIP if the 1st test failed
>>> simply because the kernel probably doesn't have CONFIG_USERFAULTFD.
>>
>> You'd simply do that availability check first and then use ksft_exit_skip()
>> in case it's not available, I guess.
>>
>>>
>>> Another example: I never figured out what x{fail|pass|skip} means in the
>>> header..  e.g. ksft_inc_xfail_cnt() is used nowhere, so I cannot use that
>>> as a reference either.  So I don't know when I should increase them.
>>
>> In cow.c I have the following flow:
>>
>> ksft_print_header();
>> ksft_set_plan();
>> ... tests ...
>> err = ksft_get_fail_cnt();
>> if (err)
>> 	ksft_exit_fail_msg();
>> return ksft_exit_pass();
>>
>> That gives me:
>>
>> # [INFO] detected THP size: 2048 KiB
>> # [INFO] detected hugetlb size: 2048 KiB
>> # [INFO] detected hugetlb size: 1048576 KiB
>> # [INFO] huge zeropage is enabled
>> TAP version 13
>> 1..190
>> ...
>> # Totals: pass:87 fail:0 xfail:0 xpass:0 skip:103 error:0
>>
>>
>> I didn't use xfail or xpass so far, but what I understood is that these are
>> "expected failures" and "expected passes". fail/pass/skip are straight
> 
> Yes, xfail can be expressed that way, but maybe not xpass?  Otherwise it's
> hard to tell the difference between xpass and pass, because IIUC pass also
> means "expected to pass".
> 

I'm a simple man; I use pass/fail/skip in my tests. But let's try to 
figure that out; a quick internet search (no idea how trustworthy) tells 
me that I was wrong about xpass:

xfail: expected failure
xpass: unexpected pass

So essentially it maps to:

xfail -> pass
xpass -> fail

but with a slight semantic difference when thinking about a test case:

/* A length of 0 is invalid, so this mmap() is expected to fail. */
ret = mmap(NULL, 0, PROT_READ, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
if (ret == MAP_FAILED) {
	XFAIL();
} else {
	XPASS();
}

vs.

/* Same call; the expected failure is reported as a plain pass. */
ret = mmap(NULL, 0, PROT_READ, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
if (ret == MAP_FAILED) {
	PASS();
} else {
	FAIL();
}

It's all inherited from other testing frameworks, I think. xpass seems 
to be completely unused and xfail mostly unused.

So I wouldn't worry about that and simply use pass/fail/skip.
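
IOW, the second variant above expressed with the ksft helpers would look 
roughly like this (untested sketch; assumes <sys/mman.h> and kselftest.h 
are included):

static void test_zero_length_mmap(void)
{
	void *ret;

	/* A length of 0 is invalid, so this mmap() is expected to fail. */
	ret = mmap(NULL, 0, PROT_READ, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (ret == MAP_FAILED)
		ksft_test_result_pass("mmap() of length 0 rejected\n");
	else
		ksft_test_result_fail("mmap() of length 0 unexpectedly succeeded\n");
}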

>> forward.
>> ksft_test_result_fail()/ksft_test_result_pass()/ksft_test_result_skip() are
>> used to set them.
>>
>> You'd do availability checks before ksft_set_plan() and fail with a
>> ksft_exit_skip() if the kernel doesn't support it. Then, you'd just use
>> ksft_test_result_fail()/ksft_test_result_pass()/ksft_test_result_skip().
>>
>>>
>>> In short, to make the unit test behave as expected, I figured I'd just
>>> write these few helpers, and that's good enough for this unit test.  That
>>> takes perhaps 5 min anyway and isn't hugely bad for a unit test.
>>>
>>> Then I keep the exit code matching kselftests (KSFT_SKIP, etc.).
>>>
>>> What I can do here, though, is at least reuse the counters, e.g.:
>>>
>>>    ksft_inc_pass_cnt() / ksft_inc_fail_cnt()
>>>
>>> There's no ksft_inc_skip_cnt() so, maybe, I can just reuse
>>> ksft_inc_xskip_cnt() assuming that counts "skip"s?
>>>
>>> Let me know if you have better ideas, I'll be happy to switch in that case.
>>
>> I guess once you start manually increasing/decreasing the cnt, you might be
>> abusing the ksft framework indeed and are better off handling it differently
>> :D
> 
> I'm seriously considering that to address your comment here, to show that
> I'm trying my best to use whatever can help in this test case. :) Here reusing

:) appreciated, but don't let my comments distract you. If you don't 
think ksft is a good fit (or any good), then don't use it.

-- 
Thanks,

David / dhildenb
