Date:   Mon, 17 Jun 2019 20:47:24 +0100
From:   Edward Cree <ecree@...arflare.com>
To:     Jiong Wang <jiong.wang@...ronome.com>,
        Alexei Starovoitov <alexei.starovoitov@...il.com>
CC:     "Naveen N. Rao" <naveen.n.rao@...ux.vnet.ibm.com>,
        Daniel Borkmann <daniel@...earbox.net>,
        bpf <bpf@...r.kernel.org>,
        Network Development <netdev@...r.kernel.org>,
        Michael Ellerman <mpe@...erman.id.au>,
        "Jakub Kicinski" <jakub.kicinski@...ronome.com>
Subject: Re: [PATCH] bpf: optimize constant blinding

On 14/06/2019 16:13, Jiong Wang wrote:
> Just an update and keep people posted.
>
> Working on linked list based approach, the implementation looks like the
> following, mostly a combine of discussions happened and Naveen's patch,
> please feel free to comment.
>
>   - Use the reserved opcode 0xf0 with BPF_ALU as new pseudo insn code
>     BPF_LIST_INSN. (0xf0 is also used with BPF_JMP class for tail call).
>
>   - Introduce patch pool into bpf_prog->aux to keep all patched insns.
It's not clear to me what the point of the patch pool is, rather than just
 doing the patch straight away.  The idea is that while the prog is
 half-patched, insn idxs / jump offsets / etc. all still refer to the
 original locations; only some of those locations might now be a list
 rather than a single insn.  Concretely:

struct bpf_insn_list { struct bpf_insn insn; struct list_head list; };
orig prog: array bpf_insn[3] = {cond jump +1, insn_foo, exit};
start of day the verifier converts this into array bpf_insn_list[3],
 each with new.insn = orig.insn; INIT_LIST_HEAD(&new.list)

verifier decides to patch insn_foo into {insn_bar, insn_baz}.
so we allocate struct bpf_insn_list *x;
insn_foo.insn = insn_bar;
x->insn = insn_baz;
list_add_tail(&x->list, &insn_foo.list);

the cond jump +1 is _not_ changed at this point, because it counts
 bpf_insn_lists and not insns.

Then at end of day, to linearise, we just have int new_idx[3];
 populate it by iterating over the array of bpf_insn_list and adding on
 the list length each time (so we get {0, 1, 3}),
 then allocate an output prog of the total length (here 4, calculated by
 that same pass as effectively the off-the-end entry of new_idx),
 then iterate again to write out the output prog; when we see that 'cond
 jump +1' at old_idx 0, we see that (new_idx[2] - new_idx[0] - 1) = 2, so
 it becomes 'cond jump +2'.
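To sanity-check the arithmetic, the whole walkthrough can be sketched as a
self-contained userspace C program.  This is not kernel code: the names
(insn_list, patch_append, linearise) are made up for illustration, a plain
'next' pointer stands in for the kernel's struct list_head, and struct insn
is a stand-in for struct bpf_insn with just an opcode and a jump offset.

```c
#include <stdlib.h>

/* Stand-in for struct bpf_insn: an opcode plus a jump offset. */
struct insn { int op; int off; };

enum { OP_JMP, OP_FOO, OP_BAR, OP_BAZ, OP_EXIT };

/* One original slot: its insn plus a chain of patched-in insns
 * (a plain 'next' pointer here, where the kernel sketch would use
 * struct list_head). */
struct insn_list {
    struct insn insn;
    struct insn_list *next;
};

/* Append a patched insn to slot 'pos'. */
static void patch_append(struct insn_list *pos, struct insn ins)
{
    struct insn_list *x = malloc(sizeof(*x));
    x->insn = ins;
    x->next = pos->next;
    pos->next = x;
}

/* Length of one slot's chain (original insn + patched tail). */
static int slot_len(const struct insn_list *s)
{
    int n = 0;
    for (; s; s = s->next)
        n++;
    return n;
}

/* Linearise into 'out': first pass computes new_idx[] (with an
 * off-the-end entry giving the total length), second pass emits,
 * fixing up each jump as
 *   new_off = new_idx[old_target] - new_idx[old_pos] - 1.  */
static int linearise(struct insn_list *prog, int n, struct insn *out)
{
    int *new_idx = malloc((n + 1) * sizeof(int));
    int i, total = 0;

    for (i = 0; i < n; i++) {
        new_idx[i] = total;
        total += slot_len(&prog[i]);
    }
    new_idx[n] = total;             /* off-the-end entry */

    for (i = 0; i < n; i++) {
        struct insn_list *s;
        int pos = new_idx[i];

        for (s = &prog[i]; s; s = s->next) {
            struct insn ins = s->insn;

            /* Only original (head) insns can be jumps here, and
             * their offsets still count old slots, not insns. */
            if (ins.op == OP_JMP && s == &prog[i]) {
                int old_target = i + ins.off + 1;
                ins.off = new_idx[old_target] - new_idx[i] - 1;
            }
            out[pos++] = ins;
        }
    }
    free(new_idx);
    return total;
}
```

Running the mail's example through it (patch insn_foo into {insn_bar,
insn_baz}) yields a 4-insn output with the jump rewritten from +1 to +2.
The off-the-end new_idx entry is what makes jumps that target the slot
just past the last insn (or the program length calculation) fall out of
the same pass for free.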


This seems to me like it'd be easier to work with than making the whole
 program one big linked list (where you have to separately keep track of
 what each insn's idx was before patching).  But I haven't tried to code
 it yet, so anything could happen ;-)  It does rely on the assumption
 that only original insns (or the first insn of a list they get patched
 with) will ever be jump targets or otherwise need their insn_idx taken.

-Ed
