Message-ID: <CAEf4BzaE-KiW1Xt049A4s25YiaLeTH3yhgahwLUdpXNjF1sVpA@mail.gmail.com>
Date: Wed, 14 Aug 2019 12:30:31 -0700
From: Andrii Nakryiko <andrii.nakryiko@...il.com>
To: Stanislav Fomichev <sdf@...gle.com>
Cc: Networking <netdev@...r.kernel.org>, bpf <bpf@...r.kernel.org>,
"David S. Miller" <davem@...emloft.net>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andriin@...com>
Subject: Re: [PATCH bpf-next 2/4] selftests/bpf: test_progs: test__skip
On Wed, Aug 14, 2019 at 12:22 PM Andrii Nakryiko
<andrii.nakryiko@...il.com> wrote:
>
> On Wed, Aug 14, 2019 at 9:48 AM Stanislav Fomichev <sdf@...gle.com> wrote:
> >
> > Export test__skip() to indicate skipped tests and use it in
> > test_send_signal_nmi().
> >
> > Cc: Andrii Nakryiko <andriin@...com>
> > Signed-off-by: Stanislav Fomichev <sdf@...gle.com>
> > ---
>
> For completeness, we should probably also support test__skip_subtest()
> eventually, but it's fine to defer that until we have a use case.
Hm.. on second thought, I don't think we need a separate
test__skip_subtest(). test__skip() should skip either the test or the
sub-test, depending on which context we are running in. So maybe just
make sure this is handled correctly?
>
> Acked-by: Andrii Nakryiko <andriin@...com>
>
> >  tools/testing/selftests/bpf/prog_tests/send_signal.c | 1 +
> >  tools/testing/selftests/bpf/test_progs.c             | 9 +++++++--
> >  tools/testing/selftests/bpf/test_progs.h             | 2 ++
> >  3 files changed, 10 insertions(+), 2 deletions(-)
> >
> > diff --git a/tools/testing/selftests/bpf/prog_tests/send_signal.c b/tools/testing/selftests/bpf/prog_tests/send_signal.c
> > index 1575f0a1f586..40c2c5efdd3e 100644
> > --- a/tools/testing/selftests/bpf/prog_tests/send_signal.c
> > +++ b/tools/testing/selftests/bpf/prog_tests/send_signal.c
> > @@ -204,6 +204,7 @@ static int test_send_signal_nmi(void)
> >  		if (errno == ENOENT) {
> >  			printf("%s:SKIP:no PERF_COUNT_HW_CPU_CYCLES\n",
> >  			       __func__);
> > +			test__skip();
> >  			return 0;
> >  		}
> >  		/* Let the test fail with a more informative message */
> > diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c
> > index 1a7a2a0c0a11..1993f2ce0d23 100644
> > --- a/tools/testing/selftests/bpf/test_progs.c
> > +++ b/tools/testing/selftests/bpf/test_progs.c
> > @@ -121,6 +121,11 @@ void test__force_log() {
> >  	env.test->force_log = true;
> >  }
> >  
> > +void test__skip(void)
> > +{
> > +	env.skip_cnt++;
> > +}
> > +
> >  struct ipv4_packet pkt_v4 = {
> >  	.eth.h_proto = __bpf_constant_htons(ETH_P_IP),
> >  	.iph.ihl = 5,
> > @@ -535,8 +540,8 @@ int main(int argc, char **argv)
> >  			test->test_name);
> >  	}
> >  	stdio_restore();
> > -	printf("Summary: %d/%d PASSED, %d FAILED\n",
> > -	       env.succ_cnt, env.sub_succ_cnt, env.fail_cnt);
> > +	printf("Summary: %d/%d PASSED, %d SKIPPED, %d FAILED\n",
Since some sub-tests might be skipped while others run, let's keep the
output consistent with PASSED and use the <tests>/<subtests> format for
SKIPPED as well?
> > +	       env.succ_cnt, env.sub_succ_cnt, env.skip_cnt, env.fail_cnt);
> >  
> >  	free(env.test_selector.num_set);
> >  	free(env.subtest_selector.num_set);
> > diff --git a/tools/testing/selftests/bpf/test_progs.h b/tools/testing/selftests/bpf/test_progs.h
> > index 37d427f5a1e5..9defd35cb6c0 100644
> > --- a/tools/testing/selftests/bpf/test_progs.h
> > +++ b/tools/testing/selftests/bpf/test_progs.h
> > @@ -64,6 +64,7 @@ struct test_env {
> >  	int succ_cnt; /* successful tests */
> >  	int sub_succ_cnt; /* successful sub-tests */
> >  	int fail_cnt; /* total failed tests + sub-tests */
> > +	int skip_cnt; /* skipped tests */
> >  };
> >  
> >  extern int error_cnt;
> > @@ -72,6 +73,7 @@ extern struct test_env env;
> >  
> >  extern void test__force_log();
> >  extern bool test__start_subtest(const char *name);
> > +extern void test__skip(void);
> >  
> >  #define MAGIC_BYTES 123
> > 
> > --
> > 2.23.0.rc1.153.gdeed80330f-goog
> >