Message-ID: <20200121095614.GB707582@krava>
Date: Tue, 21 Jan 2020 10:56:14 +0100
From: Jiri Olsa <jolsa@...hat.com>
To: Daniel Borkmann <daniel@...earbox.net>
Cc: Jiri Olsa <jolsa@...nel.org>, Alexei Starovoitov <ast@...nel.org>,
netdev@...r.kernel.org, bpf@...r.kernel.org,
Andrii Nakryiko <andriin@...com>, Yonghong Song <yhs@...com>,
Martin KaFai Lau <kafai@...com>,
Jakub Kicinski <jakub.kicinski@...ronome.com>,
David Miller <davem@...hat.com>,
Björn Töpel <bjorn.topel@...el.com>
Subject: Re: [PATCH 5/6] bpf: Allow to resolve bpf trampoline and dispatcher
in unwind
On Tue, Jan 21, 2020 at 12:55:10AM +0100, Daniel Borkmann wrote:
> On 1/18/20 2:49 PM, Jiri Olsa wrote:
> > When unwinding the stack we need to identify each address
> > to successfully continue. Adding a latch tree to keep trampolines
> > for quick lookup during the unwind.
> >
> > The patch uses the first 48 bytes for the latch tree node, leaving
> > the remaining 4048 bytes of the page for trampoline or dispatcher
> > generated code.
> >
> > It's still enough not to affect trampoline and dispatcher progs
> > maximum counts.
> >
> > Signed-off-by: Jiri Olsa <jolsa@...nel.org>
> > ---
> > include/linux/bpf.h | 12 ++++++-
> > kernel/bpf/core.c | 2 ++
> > kernel/bpf/dispatcher.c | 4 +--
> > kernel/bpf/trampoline.c | 76 +++++++++++++++++++++++++++++++++++++----
> > 4 files changed, 84 insertions(+), 10 deletions(-)
> >
> > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > index 8e3b8f4ad183..41eb0cf663e8 100644
> > --- a/include/linux/bpf.h
> > +++ b/include/linux/bpf.h
> > @@ -519,7 +519,6 @@ struct bpf_trampoline *bpf_trampoline_lookup(u64 key);
> > int bpf_trampoline_link_prog(struct bpf_prog *prog);
> > int bpf_trampoline_unlink_prog(struct bpf_prog *prog);
> > void bpf_trampoline_put(struct bpf_trampoline *tr);
> > -void *bpf_jit_alloc_exec_page(void);
> > #define BPF_DISPATCHER_INIT(name) { \
> > .mutex = __MUTEX_INITIALIZER(name.mutex), \
> > .func = &name##func, \
> > @@ -551,6 +550,13 @@ void *bpf_jit_alloc_exec_page(void);
> > #define BPF_DISPATCHER_PTR(name) (&name)
> > void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from,
> > struct bpf_prog *to);
> > +struct bpf_image {
> > + struct latch_tree_node tnode;
> > + unsigned char data[];
> > +};
> > +#define BPF_IMAGE_SIZE (PAGE_SIZE - sizeof(struct bpf_image))
> > +bool is_bpf_image(void *addr);
> > +void *bpf_image_alloc(void);
> > #else
> > static inline struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
> > {
> > @@ -572,6 +578,10 @@ static inline void bpf_trampoline_put(struct bpf_trampoline *tr) {}
> > static inline void bpf_dispatcher_change_prog(struct bpf_dispatcher *d,
> > struct bpf_prog *from,
> > struct bpf_prog *to) {}
> > +static inline bool is_bpf_image(void *addr)
> > +{
> > + return false;
> > +}
> > #endif
> > struct bpf_func_info_aux {
> > diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> > index 29d47aae0dd1..b3299dc9adda 100644
> > --- a/kernel/bpf/core.c
> > +++ b/kernel/bpf/core.c
> > @@ -704,6 +704,8 @@ bool is_bpf_text_address(unsigned long addr)
> > rcu_read_lock();
> > ret = bpf_prog_kallsyms_find(addr) != NULL;
> > + if (!ret)
> > + ret = is_bpf_image((void *) addr);
> > rcu_read_unlock();
>
> Btw, shouldn't this be a separate entity entirely to avoid unnecessary inclusion
> in bpf_arch_text_poke() for the is_bpf_text_address() check there?
right, we don't want poking in trampolines/dispatchers.. I'll change that
>
> Did you drop the bpf_{trampoline,dispatcher}_<...> entry addition in kallsyms?
working on that, will send it separately
jirka