Date:   Tue, 31 Oct 2017 12:37:14 +0000
From:   Edward Cree <ecree@...arflare.com>
To:     Alexei Starovoitov <ast@...com>,
        "David S . Miller" <davem@...emloft.net>
CC:     Daniel Borkmann <daniel@...earbox.net>,
        Martin KaFai Lau <kafai@...com>,
        Jakub Kicinski <jakub.kicinski@...ronome.com>,
        <netdev@...r.kernel.org>, <kernel-team@...com>
Subject: Re: [PATCH net-next] bpf: reduce verifier memory consumption

On 30/10/17 21:51, Alexei Starovoitov wrote:
> the verifier got progressively smarter over time and size of its internal
> state grew as well. Time to reduce the memory consumption.
> 
> Before:
> sizeof(struct bpf_verifier_state) = 6520
> After:
> sizeof(struct bpf_verifier_state) = 896
Nice!

> It's done by observing that majority of BPF programs use little to
> no stack whereas verifier kept all of 512 stack slots ready always.
> Instead dynamically reallocate struct verifier state when stack
> access is detected.
> Besides memory savings such approach gives few % runtime perf
> improvement as well and small reduction in number of processed insns:
                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This worries me.  Not that improving pruning performance is a bad thing,
 but I don't see anything in this patch that should affect it, which
 suggests maybe a bug has been introduced that prunes too much?

> +static void copy_verifier_state(struct bpf_verifier_state *dst,
> +				const struct bpf_verifier_state *src)
> +{
> +	int stack = dst->allocated_stack;
> +
> +	if (stack < src->allocated_stack) {
> +		WARN_ON_ONCE(stack < src->allocated_stack);
Why not just
    if (WARN_ON_ONCE(stack < src->allocated_stack)) {
?

> +		/* internal bug, make state invalid to reject the program */
> +		memset(dst, 0, sizeof(*dst));
> +		return;
Any reason this function couldn't return int instead?

> +	}
> +	memcpy(dst, src, offsetof(struct bpf_verifier_state,
> +				  stack[src->allocated_stack / BPF_REG_SIZE]));
> +	dst->allocated_stack = stack;
> +}

Then again, I'm not entirely sure I like the idea of an array[0] at the
 end of the struct anyway (which is what makes this copy_verifier_state()
 necessary).  I'd be happier with a *array member pointing to a separate
 allocation; though that would still need separate copying in
 is_state_visited().  I also worry about a stale *parent somewhere
 pointing to the old state; I don't _think_ that can happen, but I'm not
 as sure of that as I'd like to be.  Using a *array instead would mean
 that &state doesn't change when we grow it.
Also, why kzalloc() a new state (in realloc_verifier_state()) rather than
 krealloc()ing the existing one and memset()ing the extra space?  That way
 you wouldn't need to then copy across all the old data.

>  static int pop_stack(struct bpf_verifier_env *env, int *prev_insn_idx)
>  {
> -	struct bpf_verifier_stack_elem *elem;
> +	struct bpf_verifier_state *cur = env->cur_state;
> +	struct bpf_verifier_stack_elem *elem, *head = env->head;
>  	int insn_idx;
>  
>  	if (env->head == NULL)
>  		return -1;
>  
> -	memcpy(&env->cur_state, &env->head->st, sizeof(env->cur_state));
> -	insn_idx = env->head->insn_idx;
> +	if (cur && cur->allocated_stack < head->st.allocated_stack) {
> +		cur = realloc_verifier_state(cur, head->st.allocated_stack);
> +		if (!cur)
> +			return -ENOMEM;
I don't think this does what you want.  Calling code in do_check() will
 get insn_idx = -ENOMEM, see that it's < 0, and break out of the loop.
 Prog will then be accepted without further checking.

-Ed
