Date:   Tue, 3 Mar 2020 10:03:19 -0800
From:   Alexei Starovoitov <alexei.starovoitov@...il.com>
To:     Jiri Olsa <jolsa@...nel.org>
Cc:     Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Song Liu <songliubraving@...com>, netdev@...r.kernel.org,
        bpf@...r.kernel.org, Andrii Nakryiko <andriin@...com>,
        Yonghong Song <yhs@...com>, Martin KaFai Lau <kafai@...com>,
        Jakub Kicinski <kuba@...nel.org>,
        David Miller <davem@...hat.com>,
        Björn Töpel <bjorn.topel@...el.com>,
        John Fastabend <john.fastabend@...il.com>,
        Jesper Dangaard Brouer <hawk@...nel.org>,
        Arnaldo Carvalho de Melo <acme@...hat.com>,
        Song Liu <song@...nel.org>
Subject: Re: [PATCH 06/15] bpf: Add bpf_ksym_tree tree

On Mon, Mar 02, 2020 at 03:31:45PM +0100, Jiri Olsa wrote:
> The bpf_tree is used both for kallsyms iteration and for searching
> the exception tables of bpf programs, which are needed only for
> bpf programs.
> 
> Adding bpf_ksym_tree that will hold symbols for all bpf_prog,
> bpf_trampoline and bpf_dispatcher objects, and keeping bpf_tree
> only for bpf_prog objects to keep it fast.

...

>  static void bpf_prog_ksym_node_add(struct bpf_prog_aux *aux)
> @@ -616,6 +650,7 @@ static void bpf_prog_ksym_node_add(struct bpf_prog_aux *aux)
>  	WARN_ON_ONCE(!list_empty(&aux->ksym.lnode));
>  	list_add_tail_rcu(&aux->ksym.lnode, &bpf_kallsyms);
>  	latch_tree_insert(&aux->ksym_tnode, &bpf_tree, &bpf_tree_ops);
> +	latch_tree_insert(&aux->ksym.tnode, &bpf_ksym_tree, &bpf_ksym_tree_ops);
>  }
>  
>  static void bpf_prog_ksym_node_del(struct bpf_prog_aux *aux)
> @@ -624,6 +659,7 @@ static void bpf_prog_ksym_node_del(struct bpf_prog_aux *aux)
>  		return;
>  
>  	latch_tree_erase(&aux->ksym_tnode, &bpf_tree, &bpf_tree_ops);
> +	latch_tree_erase(&aux->ksym.tnode, &bpf_ksym_tree, &bpf_ksym_tree_ops);

I have to agree with Daniel here.
Having a bpf prog in two latch trees is unnecessary,
especially looking at patch 7, which moves the update to the other tree.
The whole thing becomes asymmetrical and harder to follow.
Consider that walking the extable is slow anyway; it happens on a page fault.
Having trampolines and dispatchers in the same tree will have no
measurable effect on the speed of search_bpf_extables->bpf_prog_kallsyms_find.
So please consolidate.
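I.e. something along these lines (sketch only; assumed helper names
like bpf_ksym_add/del and bpf_prog_ksym_find may not match what the
final series ends up with):

/* single tree/list add for any bpf ksym: prog, trampoline, dispatcher */
void bpf_ksym_add(struct bpf_ksym *ksym)
{
	spin_lock_bh(&bpf_lock);
	WARN_ON_ONCE(!list_empty(&ksym->lnode));
	list_add_tail_rcu(&ksym->lnode, &bpf_kallsyms);
	latch_tree_insert(&ksym->tnode, &bpf_ksym_tree, &bpf_ksym_tree_ops);
	spin_unlock_bh(&bpf_lock);
}

void bpf_ksym_del(struct bpf_ksym *ksym)
{
	spin_lock_bh(&bpf_lock);
	latch_tree_erase(&ksym->tnode, &bpf_ksym_tree, &bpf_ksym_tree_ops);
	list_del_rcu(&ksym->lnode);
	spin_unlock_bh(&bpf_lock);
}

/* extable lookup goes through the same tree; trampoline and dispatcher
 * entries simply resolve to no prog and fall out early */
const struct exception_table_entry *search_bpf_extables(unsigned long addr)
{
	const struct exception_table_entry *e = NULL;
	struct bpf_prog *prog;

	rcu_read_lock();
	prog = bpf_prog_ksym_find(addr);	/* lookup in bpf_ksym_tree */
	if (!prog || !prog->aux->num_exentries)
		goto out;
	e = search_extable(prog->aux->extable,
			   prog->aux->num_exentries, addr);
out:
	rcu_read_unlock();
	return e;
}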

Also, I don't see a hunk that deletes tnode from 'struct bpf_image'.
These patches are supposed to generalize it too, no?
And in the end kernel_text_address() is supposed to call
is_bpf_text_address() only, right?
Instead of is_bpf_text_address() || is_bpf_image_address()?
That _will_ actually speed up backtrace collection.
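
I.e. roughly this end state in kernel/extable.c (sketch, keeping the
existing checks; only the bpf part changes):

int kernel_text_address(unsigned long addr)
{
	bool no_rcu;
	int ret = 1;

	if (core_kernel_text(addr))
		return 1;

	/* the bpf ksym tree walk (like the module list) needs RCU */
	no_rcu = !rcu_is_watching();
	if (no_rcu)
		rcu_nmi_enter();

	if (is_module_text_address(addr))
		goto out;
	if (is_ftrace_trampoline(addr))
		goto out;
	if (is_kprobe_optinsn_slot(addr) || is_kprobe_insn_slot(addr))
		goto out;
	/* one lookup covers progs, trampolines and dispatchers;
	 * no separate is_bpf_image_address() check needed */
	if (is_bpf_text_address(addr))
		goto out;
	ret = 0;
out:
	if (no_rcu)
		rcu_nmi_exit();
	return ret;
}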
