Date:	Sat, 17 Mar 2012 21:14:55 +1100
From:	"Indan Zupancic" <>
To:	"Eric Dumazet" <>
Cc:	"Will Drewry" <>,,,,,,,,,,,,,,,,,,,,,,,,,,
Subject: Re: [PATCH v14 01/13] sk_run_filter: add BPF_S_ANC_SECCOMP_LD_W

On Wed, March 14, 2012 19:05, Eric Dumazet wrote:
> On Wednesday, 14 March 2012 at 08:59 +0100, Indan Zupancic wrote:
>> The only remaining question is, is it worth the extra code to release
>> up to 32kB of unused memory? It seems a waste to not free it, but if
>> people think it's not worth it then let's just leave it around.
> Quite frankly it's not an issue, given JIT BPF is not yet enabled
> by default.

And what if JIT BPF were enabled by default?

> I am not sure all bugs were found and fixed. I would warn users before
> considering using it in production.
> If you have time, I would appreciate if you could double check and find
> last bugs in it.

I'll do my best.

The current JIT doesn't handle negative offsets: the stuff that's handled
by __load_pointer(). The easiest solution would be to make that function
non-static and call it instead of doing bpf_error. I guess __load_pointer()
was added later and the JIT code never got updated.

But gcc refuses to inline load_pointer; instead it inlines __load_pointer
and does the important checks first. Considering the current assembly code
does a call too, it could just as well call load_pointer() directly. That
would save a lot of assembly code, handle all the negative cases too, and
be pretty much the same speed. The only question is whether this would
slow down archs other than x86. What do you think?
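To illustrate the split being discussed, here is a simplified user-space
sketch of how load_pointer() hands negative offsets off to __load_pointer():
the struct, the my_ prefixes and the single SKF_LL_OFF range handled here
are illustrative stand-ins, not the actual kernel code.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define SKF_LL_OFF (-0x200000)	/* base for link-layer header access */

struct fake_skb {		/* stand-in for struct sk_buff */
	unsigned char data[64];	/* "linear" packet data */
	unsigned int len;
	unsigned char mac[14];	/* stand-in for the link-layer header */
};

/* Slow path: only reached for negative offsets, i.e. the part the
 * JIT currently punts on with bpf_error. */
static void *my__load_pointer(const struct fake_skb *skb, int k,
			      unsigned int size)
{
	if (k >= SKF_LL_OFF) {
		unsigned int off = (unsigned int)(k - SKF_LL_OFF);

		if (off + size <= sizeof(skb->mac))
			return (void *)(skb->mac + off);
	}
	return NULL;
}

/* Fast path first, then fall back: mirrors the load_pointer()
 * structure, where the sign check is the only cost for normal loads. */
static void *my_load_pointer(const struct fake_skb *skb, int k,
			     unsigned int size)
{
	if (k >= 0) {
		if ((unsigned int)k + size <= skb->len)
			return (void *)(skb->data + k);
		return NULL;
	}
	return my__load_pointer(skb, k, size);
}
```

The point of the sketch is just that a single sign test separates the two
paths, so emitted code calling the combined function instead of the helper
pair shouldn't cost much.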

The EMIT_COND_JMP(f_op, f_offset); should be in an else case, otherwise
it's superfluous. It's a harmless bug though. I haven't spotted anything
else yet.

You can get rid of all the "if (is_imm8(offsetof(struct sk_buff, len)))"
code by making sure everything is near: somewhere at the start, just
add 127 to %rdi and add a BUILD_BUG_ON(sizeof(struct sk_buff) > 255).
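As a standalone sanity check of that trick (with a stand-in struct rather
than the real struct sk_buff): x86 addressing modes take a signed 8-bit
displacement in [-128, 127], so biasing the base register by +127 maps
every field offset in [0, 254] into that range.

```c
#include <assert.h>
#include <stddef.h>

/* Same check the JIT uses for choosing between the imm8 and imm32
 * encodings of a memory displacement. */
static int is_imm8(int value)
{
	return value <= 127 && value >= -128;
}

struct demo {			/* stand-in for struct sk_buff */
	char pad[200];
	unsigned int len;	/* some field near the end */
};

/* Returns 1 if every offset below struct_size is reachable via an
 * imm8 displacement once the base pointer has been biased by +127. */
static int all_offsets_near(size_t struct_size)
{
	size_t off;

	for (off = 0; off < struct_size; off++)
		if (!is_imm8((int)off - 127))
			return 0;
	return 1;
}
```

The 255-byte bound is exactly what makes this work: offset 254 biases to
127 (the top of the imm8 range), while a 512-byte struct would need
offset 255 - 127 = 128, which no longer fits.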

I'll write some more patches tomorrow.



diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 7c1b765..7e0f575 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -581,8 +581,9 @@ cond_branch:			f_offset = addrs[i + filter[i].jf] - addrs[i];
 					}
 					if (filter[i].jf)
 						EMIT_JMP(f_offset);
+				} else {
+					EMIT_COND_JMP(f_op, f_offset);
 				}
-				EMIT_COND_JMP(f_op, f_offset);
 				break;
 			default:
 				/* hmm, too complex filter, give up with jit compiler */
