Date:   Tue, 7 Jan 2020 20:30:40 +0100
From:   Daniel Borkmann <daniel@...earbox.net>
To:     Jiri Olsa <jolsa@...hat.com>
Cc:     Alexei Starovoitov <alexei.starovoitov@...il.com>,
        Jiri Olsa <jolsa@...nel.org>,
        Alexei Starovoitov <ast@...nel.org>, netdev@...r.kernel.org,
        bpf@...r.kernel.org, Andrii Nakryiko <andriin@...com>,
        Yonghong Song <yhs@...com>, Martin KaFai Lau <kafai@...com>,
        Jakub Kicinski <jakub.kicinski@...ronome.com>,
        David Miller <davem@...hat.com>, bjorn.topel@...el.com
Subject: Re: [PATCH 5/5] bpf: Allow to resolve bpf trampoline in unwind

On 1/7/20 2:15 PM, Jiri Olsa wrote:
> On Tue, Jan 07, 2020 at 09:30:12AM +0100, Daniel Borkmann wrote:
>> On 1/7/20 12:46 AM, Alexei Starovoitov wrote:
>>> On Sun, Dec 29, 2019 at 03:37:40PM +0100, Jiri Olsa wrote:
>>>> When unwinding the stack we need to identify each
>>>> address to successfully continue. Add a latch tree
>>>> keeping the trampolines for quick lookup during the
>>>> unwind.
>>>>
>>>> Signed-off-by: Jiri Olsa <jolsa@...nel.org>
>>> ...
>>>> +bool is_bpf_trampoline(void *addr)
>>>> +{
>>>> +	return latch_tree_find(addr, &tree, &tree_ops) != NULL;
>>>> +}
>>>> +
>>>>    struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
>>>>    {
>>>>    	struct bpf_trampoline *tr;
>>>> @@ -65,6 +98,7 @@ struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
>>>>    	for (i = 0; i < BPF_TRAMP_MAX; i++)
>>>>    		INIT_HLIST_HEAD(&tr->progs_hlist[i]);
>>>>    	tr->image = image;
>>>> +	latch_tree_insert(&tr->tnode, &tree, &tree_ops);
>>>
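
The hunks quoted above only add the lookup itself; the rest of the patch
(snipped as "..." above) presumably hooks it into the unwinder's
text-address check. A simplified sketch of such a consumer, assuming it
lands in kernel_text_address() -- the real function handles more cases
than shown here:

int kernel_text_address(unsigned long addr)
{
        if (core_kernel_text(addr))
                return 1;
        if (is_module_text_address(addr))
                return 1;
        if (is_bpf_text_address(addr))
                return 1;
        /* New: an address inside a BPF trampoline image is valid
         * kernel text, so the unwinder can keep walking the stack. */
        if (is_bpf_trampoline((void *)addr))
                return 1;
        return 0;
}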
>>> Thanks for the fix. I was thinking of applying it, but then realized that
>>> the bpf dispatcher logic has the same issue.
>>> Could you generalize the fix to cover both?
>>> Maybe bpf_jit_alloc_exec_page() can do the latch_tree_insert(), and a new
>>> variant of bpf_jit_free_exec() is needed that will do the latch_tree_erase().
>>> Wdyt?
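
A rough sketch of that direction, purely illustrative: bpf_image_alloc(),
is_bpf_image_address() and struct bpf_image are invented names here, not
settled API. The idea is that every executable page handed out by
bpf_jit_alloc_exec_page() starts with a latch-tree node, so one tree covers
both the trampoline and the dispatcher images:

#include <linux/bpf.h>
#include <linux/rbtree_latch.h>
#include <linux/mutex.h>
#include <linux/rcupdate.h>

/* Placeholder header placed at the start of each executable page. */
struct bpf_image {
        struct latch_tree_node tnode;
        u8 data[];
};

static struct latch_tree_root image_tree;
static DEFINE_MUTEX(image_mutex);

static bool image_tree_less(struct latch_tree_node *a,
                            struct latch_tree_node *b)
{
        return (void *)a < (void *)b;
}

static int image_tree_comp(void *addr, struct latch_tree_node *n)
{
        /* tnode is the first member, so the node address is the page start. */
        void *start = (void *)n, *end = start + PAGE_SIZE;

        if (addr < start)
                return -1;
        if (addr >= end)
                return 1;
        return 0;
}

static const struct latch_tree_ops image_tree_ops = {
        .less = image_tree_less,
        .comp = image_tree_comp,
};

/* Allocate one executable page and make it findable by the unwinder. */
void *bpf_image_alloc(void)
{
        struct bpf_image *image = bpf_jit_alloc_exec_page();

        if (!image)
                return NULL;

        mutex_lock(&image_mutex);
        latch_tree_insert(&image->tnode, &image_tree, &image_tree_ops);
        mutex_unlock(&image_mutex);
        return image->data;
}

/* Lookup used from the unwind path; latch_tree_find() wants RCU. */
bool is_bpf_image_address(unsigned long addr)
{
        bool ret;

        rcu_read_lock();
        ret = latch_tree_find((void *)addr, &image_tree, &image_tree_ops) != NULL;
        rcu_read_unlock();
        return ret;
}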
>>
>> Also, this patch is buggy since your latch lookup happens under RCU, but
>> I don't see anything that waits for a grace period once you remove from
>> the tree. Instead you free the trampoline right away.
> 
> thanks, did not think of that.. will (try to) fix ;-)
> 
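For the teardown side, one way to honour the grace period (sketch only;
bpf_trampoline_free() is a made-up helper name, and an async variant would
hand the freeing to a call_rcu() callback instead of synchronize_rcu()):

static void bpf_trampoline_free(struct bpf_trampoline *tr)
{
        latch_tree_erase(&tr->tnode, &tree, &tree_ops);
        /*
         * is_bpf_trampoline() walks the tree under rcu_read_lock(),
         * so the node and the image must stay around for a full
         * grace period after the erase.
         */
        synchronize_rcu();
        bpf_jit_free_exec(tr->image);
        kfree(tr);
}
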
>> On a different question, given we have all the kallsyms infrastructure
>> for BPF already in place, did you look into whether it's feasible to
>> make it a bit more generic to also cover JITed buffers from trampolines?
> 
> hum, it did not occur to me that we'd want to see it in kallsyms,
> but sure.. how about: bpf_trampoline_<key> ?
> 
> key would be taken from bpf_trampoline::key, which is the function's BTF id

Yeap, I think bpf_trampoline_<btf_id> would make sense here.
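
Something along these lines could do the naming, purely as a sketch:
tr->ksym, struct bpf_ksym and bpf_ksym_add() are assumed helpers for
publishing a named text range to kallsyms, nothing the quoted patch
provides.

static void bpf_trampoline_ksym_add(struct bpf_trampoline *tr)
{
        struct bpf_ksym *ksym = &tr->ksym;        /* assumed field */

        snprintf(ksym->name, KSYM_NAME_LEN, "bpf_trampoline_%llu",
                 (unsigned long long)tr->key);
        ksym->start = (unsigned long)tr->image;
        ksym->end   = ksym->start + PAGE_SIZE;
        bpf_ksym_add(ksym);                       /* assumed helper */
}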

Thanks,
Daniel
