Message-ID: <2a6aba4c-e5df-20ec-8742-dffe0c645201@solarflare.com>
Date: Mon, 29 Apr 2019 11:43:40 +0100
From: Edward Cree <ecree@...arflare.com>
To: Alexei Starovoitov <alexei.starovoitov@...il.com>
CC: Jiong Wang <jiong.wang@...ronome.com>, Alexei Starovoitov <ast@...nel.org>, <daniel@...earbox.net>, <netdev@...r.kernel.org>, <bpf@...r.kernel.org>, Jakub Kicinski <jakub.kicinski@...ronome.com>, "oss-drivers@...ronome.com" <oss-drivers@...ronome.com>
Subject: Re: 32-bit zext time complexity (Was Re: [PATCH bpf-next] selftests/bpf: two scale tests)

On 27/04/2019 04:11, Alexei Starovoitov wrote:
> instead of converting all insns into lists of 1 before all patching
> it can be done on demand:
> convert from insn to list only when patching is needed.
Makes sense.

> Patched insn becomes a pointer to a block of new insns.
> We have reserved opcodes to recognize such situation.
It's not clear to me where you can fit everything, though.  The pointer is 64 bits, which is the same size as struct bpf_insn.  Are you suggesting relying on kernel pointers always starting with 0xff?

> The question is how to linearise it once at the end?
Walk the old prog once to calculate out_insn_idx for each in_insn (since we will only ever be jumping to the first insn of a list, or to a non-list insn, that's all we need), as well as out_len.
Allocate enough pages for out_len (let's not try to do any of this in place; that would be painful), then walk the old prog again to copy it insn-by-insn into the new one, recalculating any jump offsets by looking up the destination insn's out_insn_idx and subtracting our own out_insn_idx (plus an offset if we're not the first insn in our list, of course).
While we're at it, we can also fix up e.g. linfo[].insn_off: if in_insn_idx matches linfo[li_idx].insn_off, set linfo[li_idx++].insn_off = out_insn_idx.  If we still need aux_data at this point, we can copy that across too.
Runtime is O(out_len), and it gets rid of all the adjustments on patch_insn_single: branches, linfo, subprog_starts, aux_data.  Have I missed anything?
If I have time I'll put together an RFC patch in the next few days.
-Ed