Message-ID: <20190214003523.zjbiwdgcvy7yrauo@ast-mbp>
Date: Wed, 13 Feb 2019 16:35:25 -0800
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Joe Stringer <joe@...d.net.nz>
Cc: bpf@...r.kernel.org, netdev <netdev@...r.kernel.org>,
Daniel Borkmann <daniel@...earbox.net>, ast@...nel.org
Subject: Re: [PATCH bpf-next 4/4] selftests/bpf: Test static data relocation
On Tue, Feb 12, 2019 at 12:43:21PM -0800, Joe Stringer wrote:
>
> Do you see any value in having incremental support in libbpf that
> could be used as a fallback for older kernels like in patch #2 of this
> series? I could imagine libbpf probing kernel support for
> global/static variables and attempting to handle references to .data
> via some more comprehensive mechanism in-kernel, or falling back to
> this approach if it is not available.
I don't think we have to view this as an older-vs-newer-kernel fallback discussion.
I think access to static vars can be implemented in libbpf today without
changing llvm or the kernel.
For the following code:
static volatile __u32 static_data = 42;
SEC("anything")
int load_static_data(struct __sk_buff *skb)
{
	__u32 value = static_data;

	return 0;
}
llvm will generate asm:
r1 = static_data ll
r1 = *(u32 *)(r1 + 0)
libbpf can replace the first insn with r1 = 0 (or remove it altogether)
and the second insn with r1 = 42 _when it is safe_.
If there were no volatile keyword, llvm would have optimized
these two instructions into an operation with an immediate constant.
libbpf can do this optimization instead of llvm.
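A minimal sketch of what that rewrite could look like on the raw
instruction array (the helper below is hypothetical, not an existing
libbpf function; note that the 'r1 = static_data ll' ld_imm64 occupies
two struct bpf_insn slots):

#include <linux/bpf.h>

/* Hypothetical helper, not part of libbpf today: insn[0]/insn[1] hold
 * the ld_imm64 llvm emitted for 'r1 = static_data ll' and insn[2] holds
 * the dereferencing load.  Rewrite the sequence into immediate moves
 * using the value read out of the .data section at ELF-parsing time.
 */
static void rewrite_static_load(struct bpf_insn *insn, __u32 data_value)
{
	__u8 addr_reg = insn[0].dst_reg;	/* r1 in the example above */
	__u8 dst_reg = insn[2].dst_reg;		/* register the value lands in */

	/* 'r1 = static_data ll' -> 'r1 = 0' (or drop it altogether) */
	insn[0] = (struct bpf_insn) {
		.code = BPF_ALU64 | BPF_MOV | BPF_K,
		.dst_reg = addr_reg,
		.imm = 0,
	};
	/* second slot of the ld_imm64 pair -> harmless 'rX = rX' */
	insn[1] = (struct bpf_insn) {
		.code = BPF_ALU64 | BPF_MOV | BPF_X,
		.dst_reg = addr_reg,
		.src_reg = addr_reg,
	};
	/* 'r1 = *(u32 *)(r1 + 0)' -> 'r1 = 42' */
	insn[2] = (struct bpf_insn) {
		.code = BPF_ALU64 | BPF_MOV | BPF_K,
		.dst_reg = dst_reg,
		.imm = data_value,
	};
}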
libbpf can check that 'static_data' is indeed not global in the ELF file
and that there are no store operations to it in any program in that ELF file.
Then every load from that address can be replaced with rX = imm
of the value from the data section.
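As a rough sketch of those two checks (libelf-based, with hypothetical
function names; the store check is deliberately over-conservative,
since proving that a store can't target .data is exactly the data flow
analysis mentioned next):

#include <stdbool.h>
#include <stddef.h>
#include <gelf.h>
#include <linux/bpf.h>

/* Hypothetical check #1: the symbol is 'static', i.e. has local binding
 * in the object file's symbol table.
 */
static bool symbol_is_local(Elf_Data *symtab, int sym_idx)
{
	GElf_Sym sym;

	if (!gelf_getsym(symtab, sym_idx, &sym))
		return false;
	return GELF_ST_BIND(sym.st_info) == STB_LOCAL;
}

/* Hypothetical check #2, over-conservative on purpose: bail out if the
 * program contains any store at all (BPF_ST/BPF_STX class), because
 * without data flow analysis we can't prove a store doesn't go through
 * a pointer into .data.
 */
static bool prog_has_no_stores(const struct bpf_insn *insn, size_t insn_cnt)
{
	size_t i;

	for (i = 0; i < insn_cnt; i++) {
		__u8 class = BPF_CLASS(insn[i].code);

		if (class == BPF_ST || class == BPF_STX)
			return false;
	}
	return true;
}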
libbpf would need to do data flow analysis, which is a substantial
feature addition. I think it's an inevitable next step anyway.
The key point is that this approach will be compatible with future
global variables and modifiable static variables.
In that case load/store instructions will stay as-is
and kernel support will be needed to replace 'r1 = static_data ll'
with a properly marked ld_imm64 insn.