Message-ID: <87o8t0xl37.fsf@cloudflare.com>
Date: Fri, 13 Mar 2020 17:42:36 +0100
From: Jakub Sitnicki <jakub@...udflare.com>
To: Andrii Nakryiko <andrii.nakryiko@...il.com>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>
Cc: bpf <bpf@...r.kernel.org>, Networking <netdev@...r.kernel.org>,
kernel-team@...udflare.com
Subject: Re: [PATCH bpf-next] selftests/bpf: Fix spurious failures in accept due to EAGAIN
On Thu, Mar 12, 2020 at 06:57 PM CET, Andrii Nakryiko wrote:
> Thanks for looking into this. Can you please verify that test
> successfully fails (not hangs) when, say, network is down (do `ip link
> set lo down` before running test?). The reason I'm asking is that I
> just fixed a problem in tcp_rtt selftest, in which accept() would
> block forever, even if listening socket was closed.
While we're on the topic of writing network tests with test_progs:
There are a couple of pain points because all tests run as one process:
1) resource cleanup on failure
Tests can't simply exit(), abort(), or error() on failure. Instead
they need to clean up all resources, like opened file descriptors and
memory allocations, and propagate the error up to the main test
function so it can return to the test runner.
2) terminating in a timely fashion
We don't have the option of simply setting alarm() and no longer worrying
about I/O syscalls in blocking mode getting stuck: the signal would take
down the whole runner after the timeout, not just the offending test.
(The sketch after this list touches on both pain points.)
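To make it concrete, here is roughly what both points force on a test
today. This is only a minimal sketch; the helper and test names are
invented for illustration and the 3 second receive timeout is arbitrary:

/* Hypothetical test body illustrating the cleanup/timeout boilerplate. */
#include <errno.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

static int socket_with_timeout(int family, int type)
{
	struct timeval tv = { .tv_sec = 3 };	/* arbitrary upper bound */
	int fd;

	fd = socket(family, type, 0);
	if (fd < 0)
		return -errno;

	/* Blocking accept()/recv() must not hang the single test process,
	 * so every blocking socket gets an explicit receive timeout.
	 */
	if (setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv))) {
		close(fd);
		return -errno;
	}
	return fd;
}

void test_example(void)
{
	int srv = -1, cli = -1;

	srv = socket_with_timeout(AF_INET, SOCK_STREAM);
	if (srv < 0)
		goto cleanup;	/* can't exit(); unwind and return instead */

	cli = socket_with_timeout(AF_INET, SOCK_STREAM);
	if (cli < 0)
		goto cleanup;

	/* ... actual test steps, each one checked and unwound on error ... */

cleanup:
	if (cli >= 0)
		close(cli);
	if (srv >= 0)
		close(srv);
}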
Careful error and timeout handling makes test code more complicated than
it really needs to be, IMHO, which in turn makes tests harder to both
write and maintain.
What if we extended the test_progs runner to support a process-per-test
execution model? Perhaps as an opt-in for selected tests.
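Roughly what I have in mind (only a sketch with invented names, not how
the runner loop in test_progs.c actually looks): fork() before each
opted-in test, run it in the child, and derive pass/fail from the exit
status, so the test is free to exit(), abort(), or get killed by an
alarm() set in the child:

/* Sketch only: run one test in a child process. */
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static int run_test_in_child(void (*test_fn)(void))
{
	int status;
	pid_t pid = fork();

	if (pid < 0)
		return -1;	/* treat fork failure as a test error */
	if (pid == 0) {
		alarm(60);	/* hard timeout; SIGALRM kills only the child */
		test_fn();
		exit(0);	/* returning normally means success here */
	}
	if (waitpid(pid, &status, 0) < 0)
		return -1;
	/* Non-zero exit or death by signal (abort, alarm) is a failure. */
	return (WIFEXITED(status) && !WEXITSTATUS(status)) ? 0 : -1;
}

Tests that rely on sharing state with the runner process would keep the
current in-process mode, hence the opt-in.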
Is that in line with the plans/vision for BPF selftests?