Message-ID: <6180198e-5086-40a2-bd0a-305009342020@linuxfoundation.org>
Date: Fri, 2 Aug 2024 17:10:52 -0600
From: Shuah Khan <skhan@...uxfoundation.org>
To: Muhammad Usama Anjum <usama.anjum@...labora.com>,
Shuah Khan <shuah@...nel.org>
Cc: Aleksa Sarai <cyphar@...har.com>, kernel@...labora.com,
linux-kselftest@...r.kernel.org, linux-kernel@...r.kernel.org,
Shuah Khan <skhan@...uxfoundation.org>
Subject: Re: [PATCH v2] selftests: openat2: don't print total number of tests
and then skip
On 8/1/24 23:38, Muhammad Usama Anjum wrote:
> On 8/1/24 9:27 PM, Shuah Khan wrote:
>> On 8/1/24 02:42, Muhammad Usama Anjum wrote:
>>> On 7/31/24 9:57 PM, Shuah Khan wrote:
>>>> On 7/31/24 07:39, Muhammad Usama Anjum wrote:
>>>>> Don't print that 88 sub-tests are going to be executed, but then skip.
>>>>> This is against TAP compliance. Instead check pre-requisites first
>>>>> before printing total number of tests.
>>>>
>>>> Does TAP clearly mention this?
>>> Yes from https://testanything.org/tap-version-13-specification.html
>>>
>>> Skipping everything
>>> This listing shows that the entire listing is a skip. No tests were run.
>>>
>>> TAP version 13
>>> 1..0 # skip because English-to-French translator isn't installed
>>
>> I don't see how this is applicable to the current scenario. The user
>> needs to have root privilege to run the test.
>>
>> It is important to mention how many tests could have been run.
>> As mentioned before, this information is important for users and testers.
>>
>> I would like to see this information in the output.
>>
>>>
>>> We can see above that we need to print 1..0 and skip without printing the
>>> total number of tests to be executed as they are going to be skipped.
>>>
>>>>
>>>>>
>>>>> Old non-tap compliant output:
>>>>> TAP version 13
>>>>> 1..88
>>>>> ok 2 # SKIP all tests require euid == 0
>>>>> # Planned tests != run tests (88 != 1)
>>>>> # Totals: pass:0 fail:0 xfail:0 xpass:0 skip:1 error:0
>>>>>
>>>>> New and correct output:
>>>>> TAP version 13
>>>>> 1..0 # SKIP all tests require euid == 0
>>>>
>>>> The problem is that this new output doesn't show how many tests
>>>> are in this test suite that could be run.
>>>>
>>>> I am not sure if this is better for communicating coverage information
>>>> even if it meets TAP compliance.
>>> I think the number of tests represents the number of planned tests. If we
>>> don't plan to run X number of tests, we shouldn't print it.
>>
>> 88 tests are planned to be run except for the fact the first check
>> failed.
>>
>> Planned tests could not be run because of user privileges. So these
>> tests are all skips because of unmet dependencies.
> Agreed.
>
>>
>> So a good report would show that 88 tests could have been run. You
>> can meet the specification and still make it work for us. When we
>> adopted TAP 13 we didn't require 100% compliance.
>>
>> There are cases where you can comply and still provide how many test
>> could be run.
>>
>> I think you are applying the spec strictly thereby removing useful
>> information from the report.
>>
>> Can you tell me what would fail because of this "non-compliance"?
> Some months ago, someone reported that one of my tests said it was
> going to execute X number of tests, but then it just skipped, saying it
> couldn't run the X tests, and the final footer also didn't have the
> correct number of tests in it.
>
>> TAP version 13
>> 1..88
> This gives information that 88 tests are going to be executed.
>> ok 2 # SKIP all tests require euid == 0
> Why not ok 1 here?
I agree this should be 1 if we ran just one test.
>> # Planned tests != run tests (88 != 1)
> This gives an error-occurred signal instead of telling us that
> preconditions failed.
This is correct. We report skip and don't fail when dependencies aren't
met. If we failed instead, we would be reporting false failures for
tests that never ran.
I have asked Laura to see if we can add a message telling the user
that tests which could have run didn't, so they can look at the
config and make changes as needed to increase coverage of testing.
>> # Totals: pass:0 fail:0 xfail:0 xpass:0 skip:1 error:0
This is correct - it shows 1 skip and rest are zero.
> The tests exit with KSFT_FAIL instead of KSFT_SKIP. This was the biggest
> concern from the report.
>
Exiting with KSFT_FAIL is the real problem that needs to be fixed.
Please take a look at why this happens and send me the fix.
thanks,
-- Shuah