Message-ID: <78EFC9DD-48A2-49BB-8C76-1E6FDE808067@redhat.com>
Date: Tue, 28 Apr 2020 12:47:53 +0200
From: "Eelco Chaudron" <echaudro@...hat.com>
To: "Alexei Starovoitov" <alexei.starovoitov@...il.com>
Cc: "Yonghong Song" <yhs@...com>, bpf <bpf@...r.kernel.org>,
"David S. Miller" <davem@...emloft.net>,
"Network Development" <netdev@...r.kernel.org>,
"Alexei Starovoitov" <ast@...nel.org>,
"Daniel Borkmann" <daniel@...earbox.net>,
"Martin KaFai Lau" <kafai@...com>,
"Song Liu" <songliubraving@...com>,
"Andrii Nakryiko" <andriin@...com>
Subject: Re: [RFC PATCH bpf-next 0/3] bpf: add tracing for XDP programs using
the BPF_PROG_TEST_RUN API
On 28 Apr 2020, at 6:04, Alexei Starovoitov wrote:
> On Fri, Apr 24, 2020 at 02:29:56PM +0200, Eelco Chaudron wrote:
>>
>>> Not working with JIT-ed code is imo a red flag for the approach as
>>> well.
>>
>> How would this be an issue? This is for the debug path only, and if
>> the JITed code behaves differently than the interpreter there is a
>> bigger issue.
>
> They are different already. tail_calls, for example, cannot mix and
> match the interpreter and JITed code. Similar with bpf2bpf calls.
> And that difference will keep growing.
> At the time of doing the bpf trampoline I considered dropping support
> for the interpreter, but then figured out a relatively cheap way of
> keeping it alive.
> I expect the next feature to not support the interpreter.
If the goal is to phase out the interpreter, then I have to agree it
does not make sense to add this facility on top of it…
>>> When every insn is spamming the logs, the only use case I can see
>>> is to feed the test program one packet and read a thousand-line
>>> dump. Even that is quite user unfriendly.
>>
>> The log was for the POC only; the idea is to dump this into a user
>> buffer, and with the right tooling (bpftool prog run ... {trace}?) it
>> can be stored in an ELF file together with the program and its
>> input/output. Then it would be easy to dump the C and eBPF program
>> interleaved, as bpftool does. If GDB supported eBPF, the format I
>> envision would be good enough to support the GDB record/replay
>> functionality.
>
> For the case you have in mind no kernel changes are necessary.
> Just run the interpreter in user space.
> It can be embedded in gdb binary, for example.
I do not believe a user-space approach would work: you would need
support for all the helpers (and make sure they behave exactly like
those in the specific kernel version), and you would also need all the
maps/memory to be available.
> Especially if you don't want to affect a production server you
> definitely don't want to run anything on that machine.
With "affecting the production server" I was not hinting at performance
degradation or CPU/memory usage, but at disturbing the live traffic
streams by inserting another packet into the network.
> A support person can just grab the prog, capture the traffic and
> debug on their own server.
>
>>
>>> How about enabling kprobe in JITed code instead?
>>> Then if you really need to trap and print regs for every
>>> instruction you can still do so by placing a kprobe on every JITed
>>> insn.
>>
>> This would be even harder, as you need to understand the native
>> assembly (PPC/ARM/x86) to eBPF mapping (registers/code), while all
>> you are interested in is eBPF (to C).
>
> Not really. gdb-like tool will hide all that from users.
Potentially yes if we get support for this in any gdb-like tool.
>> This kprobe would also affect all instances of the program running
>> in the system; for XDP, the program could be attached to all
>> interfaces in the system.
>
> There are plenty of ways to solve that.
> Such kprobe in a prog can be gated by test_run cmd only.
> Or the prog .text can be cloned into new one and kprobed there.
Ack
>> And for this purpose, you are only interested in the results of a
>> run for a specific packet (in the XDP use case) using the
>> BPF_PROG_TEST_RUN API, so you are not affecting any live traffic.
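For reference, a minimal sketch of what such a single-packet run looks
like from user space via the raw bpf(2) syscall (no libbpf). The
`prog_fd` is assumed to come from an already-loaded XDP program; the
wrapper name and error handling are illustrative, not an existing API.

```c
#include <linux/bpf.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>

/* Hypothetical helper: run an already-loaded XDP prog (prog_fd) once
 * against a crafted packet using BPF_PROG_TEST_RUN, without touching
 * live traffic. On success, *retval holds the XDP verdict (XDP_PASS,
 * XDP_DROP, ...) and *out_len the size of the possibly-modified
 * packet written to 'out'. Returns the raw syscall result. */
static int xdp_test_run(int prog_fd, void *pkt, __u32 pkt_len,
			void *out, __u32 *out_len, __u32 *retval)
{
	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.test.prog_fd = prog_fd;
	attr.test.data_in = (__u64)(unsigned long)pkt;
	attr.test.data_size_in = pkt_len;
	attr.test.data_out = (__u64)(unsigned long)out;
	attr.test.data_size_out = *out_len;
	attr.test.repeat = 1;	/* one run, one packet */

	int err = syscall(__NR_bpf, BPF_PROG_TEST_RUN, &attr, sizeof(attr));
	if (!err) {
		*retval = attr.test.retval;
		*out_len = attr.test.data_size_out;
	}
	return err;
}
```

The trace facility discussed in this thread would hang off exactly this
command, so the dump is tied to one synthetic packet rather than to the
datapath.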
>
> The only way to not affect live traffic is to provide support on
> a different machine.
See above
>>> But in reality I think a few kprobes in the prog will be enough to
>>> debug the program, and the XDP prog may still process millions of
>>> packets, because your kprobe could be in an error path and the user
>>> may want to capture only specific things when it triggers.
>>> The kprobe bpf prog will execute in such a case, and it can capture
>>> the necessary state from the xdp prog, from the packet, or from
>>> maps the xdp prog is using.
>>> Some sort of bpf-gdb would be needed in user space.
>>> Obviously people shouldn't be writing such kprobe-bpf progs that
>>> debug other bpf progs by hand; bpf-gdb should be able to generate
>>> them automatically.
>>
>> See my opening comment. What you're describing here applies more
>> when the right developer has access to the specific system, but this
>> might not even be possible in some environments.
>
> All I'm saying is that kprobe is a way to trace the kernel.
> The same facility should be used to trace bpf progs.
perf doesn’t support tracing bpf programs. Do you know of any tools
that can, or do you have any examples that would do this?
>>
>> Let me know if your opinion on this idea changes after reading this,
>> or what else is needed to convince you of the need ;)
>
> I'm very much against hacking the in-kernel interpreter into a
> register dumping facility.
If the goal is to eventually remove the interpreter, and to not even
add new features to it, then I agree it does not make sense to continue
this way.
> Either use kprobe+bpf for programmatic tracing or intel's pt for pure
> instruction trace.