Message-ID: <877e9kgd39.fsf@netronome.com>
Date: Mon, 17 Jun 2019 21:40:26 +0100
From: Jiong Wang <jiong.wang@...ronome.com>
To: Edward Cree <ecree@...arflare.com>,
Alexei Starovoitov <ast@...nel.org>,
Andrii Nakryiko <andriin@...com>
Cc: Jiong Wang <jiong.wang@...ronome.com>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
"Naveen N. Rao" <naveen.n.rao@...ux.vnet.ibm.com>,
Daniel Borkmann <daniel@...earbox.net>,
bpf <bpf@...r.kernel.org>,
Network Development <netdev@...r.kernel.org>,
Michael Ellerman <mpe@...erman.id.au>,
Jakub Kicinski <jakub.kicinski@...ronome.com>
Subject: Re: [PATCH] bpf: optimize constant blinding
Edward Cree writes:
> On 17/06/2019 20:59, Jiong Wang wrote:
>> Edward Cree writes:
>>
>>> On 14/06/2019 16:13, Jiong Wang wrote:
>>>> Just an update and keep people posted.
>>>>
>>>> Working on linked list based approach, the implementation looks like the
>>>> following, mostly a combine of discussions happened and Naveen's patch,
>>>> please feel free to comment.
>>>>
>>>> - Use the reserved opcode 0xf0 with BPF_ALU as new pseudo insn code
>>>> BPF_LIST_INSN. (0xf0 is also used with BPF_JMP class for tail call).
>>>>
>>>> - Introduce patch pool into bpf_prog->aux to keep all patched insns.
>>> It's not clear to me what the point of the patch pool is, rather than just
>>> doing the patch straight away.
>> I used pool because I was thinking insn to be patched could be high
>> percentage, so doing lots of alloc call is going to be less efficient? so
>> allocate a big pool, and each time when creating new patch node, allocate
>> it from the pool directly. Node is addressed using pool_base + offset, each
>> node only need to keep offset.
> Good idea; but in that case it doesn't need to be a pool of patches (storing
> their prev and next), just a pool of insns. I.e. struct bpf_insn pool[many];
> then in orig prog when patching an insn replace it with BPF_LIST_INSN. If
> we later decide to patch an insn within a patch, we can replace it (i.e. the
> entry in bpf_insn_pool) with another BPF_LIST_INSN pointing to some later bit
> of the pool, then we just have a little bit of recursion at linearise time.
> Does that work?
I feel it is not going to work well. What I proposed initially was
something similar, except that when we patch an insn within a patch (this
does happen; for example, zext insertion applies to some patched ALU insns
inserted during the ctx-convert pass), we split the patch. That leaves the
data structures in two shapes:

  1. the original prog->insnsi is still maintained as an array.
  2. if one insn is a BPF_LIST_INSN, it is a list, and could be traversed
     using pool_base + insn.imm as the list head.

Because of this inconsistency, insn traversal then needs something like:
  for (idx = 0; idx < insn_cnt; idx++) {
          if (insns[idx] is not BPF_LIST_INSN) {
                  do_insn(...)
          } else { /* insns[idx] is BPF_LIST_INSN */
                  list = pool_base + insn.imm;
                  while (list) {
                          insn = list->insn;
                          do_insn(...)
                          list = list->next ? pool_base + list->next : NULL;
                  }
          }
  }
The code inside convert_ctx_accesses/fixup_bpf_call etc. would need to be
refactored out into separate do_NNN functions so they could be called from
the loop above.

Now, if we don't split the patch when patching an insn inside a patch, and
instead replace the patched insn as you suggested, the logic looks to me
to become even more complex, something like:
  for (idx = 0; idx < insn_cnt; idx++) {
          if (insns[idx] is not BPF_LIST_INSN) {
                  do_insn(...)
          } else { /* insns[idx] is BPF_LIST_INSN */
                  list = pool_base + insn.imm;
                  while (list) {
                          insn = list->insn;
                          if (insn is BPF_LIST_INSN) {
                                  sub_list = ...
                                  while ()
                                          do_insn()
                                  continue;
                          }
                          do_insn(...)
                          list = list->next ? pool_base + list->next : NULL;
                  }
          }
  }
So I am thinking what Alexei and Andrii suggested makes sense: just use a
single data structure (a singly linked list) to represent everything, so
that insn traversal etc. stays simple. But I believe it is better to still
base the list on top of the pool infrastructure mentioned above. I have
something like the following:
+struct bpf_list_insn {
+ struct bpf_insn insn;
+ u32 next;
+};
struct bpf_prog_aux {
+ struct {
+ struct bpf_list_insn *pool;
+ u32 size;
+ u32 used;
+ };
Whenever you want to do intensive patching work, you call bpf_patch_init
to set up the pool. The setup includes:

  1. converting the original prog->insnsi into a list of bpf_list_insn.
     "next" is an index into the pool, so for the initial insnsi the
     values are 1, 2, 3, 4, 5, ...

Then when doing the patching, just allocate new slots from the pool and
link them into the list.
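To make that concrete, here is a rough userspace sketch of what
bpf_patch_init and the pool allocation could look like. Everything here is
hypothetical (simplified bpf_insn, invented function names, slot 0
reserved as the "no next" sentinel), not actual kernel code:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical, simplified stand-in for the kernel's struct bpf_insn. */
struct bpf_insn {
	uint8_t code;
	int32_t imm;
};

/* One pool slot: the insn plus the pool index of the next node.
 * Offset 0 is reserved as the "no next" sentinel, so slot 0 is unused.
 */
struct bpf_list_insn {
	struct bpf_insn insn;
	uint32_t next;
};

struct insn_pool {
	struct bpf_list_insn *pool;
	uint32_t size;
	uint32_t used;
};

/* Seed the pool with the original insn array as one chain:
 * node i links to node i + 1, the last node links to 0.
 */
static int patch_init(struct insn_pool *p, const struct bpf_insn *insns,
		      uint32_t cnt)
{
	uint32_t i;

	p->size = cnt * 2 + 1;	/* room to grow; slot 0 is the sentinel */
	p->pool = calloc(p->size, sizeof(*p->pool));
	if (!p->pool)
		return -1;
	for (i = 0; i < cnt; i++) {
		p->pool[i + 1].insn = insns[i];
		p->pool[i + 1].next = i + 2 <= cnt ? i + 2 : 0;
	}
	p->used = cnt + 1;
	return 0;
}

/* Allocate a node; grow the pool if needed.  realloc may move the
 * backing array, but every link is an offset, so nothing dangles.
 */
static uint32_t patch_alloc(struct insn_pool *p)
{
	if (p->used == p->size) {
		struct bpf_list_insn *np;

		np = realloc(p->pool, 2 * p->size * sizeof(*np));
		if (!np)
			return 0;	/* 0 is never a valid slot */
		p->pool = np;
		p->size *= 2;
	}
	memset(&p->pool[p->used], 0, sizeof(p->pool[0]));
	return p->used++;
}

/* Patch: insert one new insn into the chain after node @at. */
static uint32_t patch_insert_after(struct insn_pool *p, uint32_t at,
				   struct bpf_insn insn)
{
	uint32_t n = patch_alloc(p);

	if (!n)
		return 0;
	p->pool[n].insn = insn;
	p->pool[n].next = p->pool[at].next;
	p->pool[at].next = n;
	return n;
}
```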
When patching has finished, call bpf_patch_fini to do the linearization;
it could use the same algorithm as in Naveen's patch.
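The linearization itself is then just a walk of the chain; a hypothetical
userspace sketch (names invented; the branch-offset fixup that Naveen's
patch handles is omitted here):

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical, simplified stand-ins for the kernel structures. */
struct bpf_insn {
	uint8_t code;
	int32_t imm;
};

struct bpf_list_insn {
	struct bpf_insn insn;
	uint32_t next;	/* pool offset of the next node, 0 == end */
};

/* Copy the chain starting at slot 1 back into a flat array.  A real
 * bpf_patch_fini would also fix up branch offsets here, as Naveen's
 * patch does; that part is left out of this sketch.
 */
static struct bpf_insn *linearize(const struct bpf_list_insn *pool,
				  uint32_t *out_cnt)
{
	struct bpf_insn *out;
	uint32_t n, cnt = 0;

	for (n = 1; n; n = pool[n].next)
		cnt++;
	out = calloc(cnt, sizeof(*out));
	if (!out)
		return NULL;
	cnt = 0;
	for (n = 1; n; n = pool[n].next)
		out[cnt++] = pool[n].insn;
	*out_cnt = cnt;
	return out;
}
```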
After digesting Alexei and Andrii's replies, I still don't see the need to
turn branch targets into lists, and I am not sure whether the pool-based
list sounds good. It saves size, and resizing the pool doesn't invalidate
allocated nodes (their offsets don't change), but it requires one extra
addition to calculate each pointer.
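To illustrate that trade-off with a hypothetical toy example (names
invented, not kernel code): because the links are offsets rather than
pointers, a realloc that moves the backing array leaves every link intact,
at the cost of one addition per node access:

```c
#include <stdint.h>
#include <stdlib.h>

struct node {
	int32_t val;
	uint32_t next;	/* pool offset, 0 == end of list */
};

/* Build a one-node list, grow the pool with realloc (which may move
 * the backing array), then extend the list purely via offsets and sum
 * it.  The offset links survive the move; only the final
 * "pool + offset" addition is needed to turn a link into a pointer.
 */
static int32_t offset_demo(void)
{
	uint32_t size = 2;
	struct node *pool = calloc(size, sizeof(*pool));
	int32_t sum = 0;
	uint32_t n;

	if (!pool)
		return -1;
	pool[1].val = 10;	/* slot 0 stays the sentinel */
	pool[1].next = 0;

	size = 1024;		/* grow: base pointer may change... */
	pool = realloc(pool, size * sizeof(*pool));
	if (!pool)
		return -1;

	pool[2].val = 20;	/* ...but appending still works, */
	pool[2].next = 0;	/* updating offsets only         */
	pool[1].next = 2;

	for (n = 1; n; n = pool[n].next)
		sum += pool[n].val;	/* node = pool + n */
	free(pool);
	return sum;
}
```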
>
> -Ed