Message-ID: <b282519e-d43d-026c-3f96-4d7fb287d96b@redhat.com>
Date:   Mon, 3 Apr 2023 21:06:26 +0200
From:   David Hildenbrand <david@...hat.com>
To:     Peter Xu <peterx@...hat.com>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Andrea Arcangeli <aarcange@...hat.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Nadav Amit <nadav.amit@...il.com>,
        Mike Kravetz <mike.kravetz@...cle.com>,
        Axel Rasmussen <axelrasmussen@...gle.com>,
        Leonardo Bras Soares Passos <lsoaresp@...hat.com>,
        Mike Rapoport <rppt@...ux.vnet.ibm.com>
Subject: Re: [PATCH 16/29] selftests/mm: UFFDIO_API test

On 03.04.23 18:43, Peter Xu wrote:
> On Mon, Apr 03, 2023 at 09:59:50AM +0200, David Hildenbrand wrote:
>> There is ksft_print_msg, ksft_test_result, ksft_test_result_fail, ... do we
>> maybe want to convert properly to ksft while we're already at it?
> 
> Yes, I started by trying to use that, but found that there isn't a lot
> that I can leverage.
> 
> Starting with ksft_set_plan() - I think this is something we call first.  I
> want the current unit test to skip everything if the UFFD API test fails
> here, so I need to feed a dynamic "plan" count into ksft_set_plan().  But I
> won't know that number until after I've run the 1st test..

In cow.c I did that. Getting the number of tests right can be 
challenging indeed.

Basic "feature availability" checks would go first (is uffd even 
around?), and depending on that you can set the plan.

For everything else, you can skip instead of running the test, so it will
still be counted towards the plan.
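
Roughly something like this (just a sketch -- uffd_api_works(), NR_TESTS,
tests[] and run_test() are made-up names for illustration):

ksft_print_header();

if (!uffd_api_works())			/* availability check first */
	ksft_exit_skip("userfaultfd not available\n");

ksft_set_plan(NR_TESTS);		/* plan known once the probe passed */

for (i = 0; i < NR_TESTS; i++) {
	if (!tests[i].supported)
		/* skipped, but still counted towards the plan */
		ksft_test_result_skip("%s\n", tests[i].name);
	else
		run_test(&tests[i]);	/* reports via ksft_test_result_pass()/fail() */
}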

> 
> I can call ksft_set_plan() later than this, but then it misses a few tests,
> which also looks weird.

Yeah, it would be nice to simply make ksft_set_plan() optional. For
example, make ksft_print_cnts() skip the comparison if ksft_plan == 0.
At least ksft_exit_skip() handles that already in a decent way (below).
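
I.e., something like this in kselftest.h (just a sketch from memory, not
the actual code):

static inline void ksft_print_cnts(void)
{
	/* Treat ksft_plan == 0 as "no plan was declared". */
	if (ksft_plan && ksft_plan != ksft_test_num())
		printf("# Planned tests != run tests (%u != %u)\n",
		       ksft_plan, ksft_test_num());
	printf("# Totals: pass:%d fail:%d xfail:%d xpass:%d skip:%d error:%d\n",
	       ksft_cnt.ksft_pass, ksft_cnt.ksft_fail, ksft_cnt.ksft_xfail,
	       ksft_cnt.ksft_xpass, ksft_cnt.ksft_xskip, ksft_cnt.ksft_error);
}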

> 
> It also doesn't seem to really help at all, and it isn't obvious to use.
> E.g. ksft_finished() will reference ksft_plan and then trigger
> ksft_exit_fail(), but here I want it to SKIP if the 1st test failed simply
> because the kernel probably doesn't have CONFIG_USERFAULTFD.

You'd simply do that availability check first and then use
ksft_exit_skip() if it's not available, I guess.

> 
> Another example: I never figured out what x{fail|pass|skip} mean in the
> header..  e.g. ksft_inc_xfail_cnt() is used nowhere, so I have nothing to
> reference either.  So I don't know when I should increment them.

In cow.c I have the following flow:

ksft_print_header();
ksft_set_plan();
... tests ...
err = ksft_get_fail_cnt();
if (err)
	ksft_exit_fail_msg();
return ksft_exit_pass();

That gives me:

# [INFO] detected THP size: 2048 KiB
# [INFO] detected hugetlb size: 2048 KiB
# [INFO] detected hugetlb size: 1048576 KiB
# [INFO] huge zeropage is enabled
TAP version 13
1..190
...
# Totals: pass:87 fail:0 xfail:0 xpass:0 skip:103 error:0


I haven't used xfail or xpass so far, but what I understood is that these
are "expected failures" and "expected passes". fail/pass/skip are
straightforward:
ksft_test_result_fail()/ksft_test_result_pass()/ksft_test_result_skip()
are used to set them.

You'd do the availability checks before ksft_set_plan() and bail out with
ksft_exit_skip() if the kernel doesn't support it. Then, you'd just use
ksft_test_result_fail()/ksft_test_result_pass()/ksft_test_result_skip().
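
For a single test that would look something like (all names made up, just
to illustrate the shape):

static void test_uffdio_zeropage(bool supported)
{
	if (!supported) {
		/* skipped, but still counted towards the plan */
		ksft_test_result_skip("UFFDIO_ZEROPAGE\n");
		return;
	}

	if (do_uffdio_zeropage() == 0)
		ksft_test_result_pass("UFFDIO_ZEROPAGE\n");
	else
		ksft_test_result_fail("UFFDIO_ZEROPAGE\n");
}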

> 
> In short, to make the unit test behave as expected, I figured I'd just
> write these few helpers, and that's good enough for this unit test.  That
> takes perhaps 5 min anyway and isn't hugely bad for a unit test.
> 
> Then I keep the exit code matching kselftests (KSFT_SKIP, etc.).
> 
> What I can do here, though, is at least reuse the counters, e.g:
> 
>   ksft_inc_pass_cnt() / ksft_inc_fail_cnt()
> 
> There's no ksft_inc_skip_cnt(), so maybe I can just reuse
> ksft_inc_xskip_cnt(), assuming that counts "skip"s?
> 
> Let me know if you have better ideas, I'll be happy to switch in that case.

I guess once you start manually increasing/decreasing the cnt, you might 
be abusing the ksft framework indeed and are better off handling it 
differently :D

-- 
Thanks,

David / dhildenb
