Message-ID: <20200904012909.c7cx5adhy5f23ovo@ast-mbp.dhcp.thefacebook.com>
Date: Thu, 3 Sep 2020 18:29:09 -0700
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Andrii Nakryiko <andrii.nakryiko@...il.com>
Cc: Stanislav Fomichev <sdf@...gle.com>,
Networking <netdev@...r.kernel.org>, bpf <bpf@...r.kernel.org>,
"David S. Miller" <davem@...emloft.net>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
YiFei Zhu <zhuyifei@...gle.com>,
YiFei Zhu <zhuyifei1999@...il.com>
Subject: Re: [PATCH bpf-next v3 3/8] libbpf: Add BPF_PROG_BIND_MAP syscall
and use it on .metadata section
On Wed, Sep 02, 2020 at 07:31:33PM -0700, Andrii Nakryiko wrote:
> On Fri, Aug 28, 2020 at 12:37 PM Stanislav Fomichev <sdf@...gle.com> wrote:
> >
> > From: YiFei Zhu <zhuyifei@...gle.com>
> >
> > The patch adds a simple wrapper, bpf_prog_bind_map, around the syscall.
> > When loading a program, libbpf will probe the kernel for support of
> > this syscall, scan for the .metadata ELF section, and load it as an
> > internal map, like a .data section.
> >
> > If the kernel supports the BPF_PROG_BIND_MAP syscall and a .metadata
> > section exists, the map will be explicitly bound to the program via
> > the syscall immediately after the program is loaded.
> > -EEXIST is ignored for this syscall.
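The userspace flow being added boils down to roughly the sketch below
(prog_fd/map_fd are illustrative placeholders, and the NULL opts
argument assumes the wrapper signature from this patch):

    /* sketch; prog_fd/map_fd obtained elsewhere */
    if (bpf_prog_bind_map(prog_fd, map_fd, NULL /* opts */) &&
        errno != EEXIST)                /* -EEXIST is ignored, as above */
            return -errno;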
>
> Here is the question I have. How important is it that all this
> metadata is in a separate map? What if libbpf just PROG_BIND_MAP all
> the maps inside a single BPF .o file to all BPF programs in that file?
> Including ARRAY maps created for .data, .rodata and .bss, even if the
> BPF program doesn't use any of the global variables? If it's too
> extreme, we could do it only for global data maps, leaving explicit
> map definitions in SEC(".maps") alone. Would that be terrible?
> Conceptually it makes sense, because when you program in user-space,
> you expect global variables to be there, even if you don't reference
> them directly, right? The only downside is that you won't have a special
> ".metadata" map; it will be part of the ".rodata" one instead.
That's an interesting idea. Indeed: if we have a BPF_PROG_BIND_MAP
command, why do we need to create another map that behaves exactly
like .rodata but has a different name? Wouldn't it be better to
identify metadata elements some other way, such as by a common
prefix/suffix in the variable names, or by grouping them under one
structure with a standard prefix? Like:
struct bpf_prog_metadata_blahblah {
        char compiler_name[32];
        int my_internal_prog_version;
} my_md = {
        .compiler_name = "clang v.12",
        /* ... */
};
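With BTF, the type name alone would be enough of a marker: tooling
could look for any struct whose name starts with bpf_prog_metadata_
and dump its members, without a dedicated map or section.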
In the past we did this hack for 'version' and for 'license',
but we did it because we didn't have BTF and there was no other way
to determine the boundaries.
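That old convention looked roughly like this (from memory; the loader
found these by section name, since there was no BTF to describe them):

    #include <linux/types.h>
    #include <linux/version.h>
    #include <bpf/bpf_helpers.h>

    char _license[] SEC("license") = "GPL";
    __u32 _version SEC("version") = LINUX_VERSION_CODE;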
I think libbpf can and should support multiple rodata sections with
arbitrary names, but hardcoding one specific ".metadata" name?
Hmm. Let's think through the implications.
Multiple .o support and static linking are coming soon. When two .o-s
with multiple bpf progs are statically linked, libbpf won't have any
choice but to merge them under a single ".metadata" section and a
single map that will be BPF_PROG_BIND_MAP-ed to different progs.
That means the metadata applies to the final elf file after linking;
it's _not_ per-program metadata.
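Concretely (hypothetical files and names, assuming metadata variables
are declared via SEC(".metadata") as in this series):

    /* a.bpf.c */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    const char note_a[] SEC(".metadata") = "built by team A";

    SEC("xdp") int prog_a(struct xdp_md *ctx) { return XDP_PASS; }

    /* b.bpf.c: same includes */
    const char note_b[] SEC(".metadata") = "built by team B";

    SEC("xdp") int prog_b(struct xdp_md *ctx) { return XDP_DROP; }

After linking, note_a and note_b end up in one merged ".metadata" map,
and that one map gets bound to both prog_a and prog_b.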
Maybe we should talk about the problem statement and goals.
Do we actually need metadata per program, metadata per single .o,
or metadata per final .o with multiple .o files linked together?
What is this metadata?
If it's just read-only data that the program doesn't reference, then no
special names or prefixes are needed. We can introduce BPF_PROG_BIND_MAP
to bind any map to any program, and it would be up to tooling to decide
the meaning of the data in the map. For example, bpftool can choose to
print all variables from all read-only maps whose names match the
"bpf_metadata_" prefix, but that would be a bpftool convention only,
not hard-coded in libbpf.
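On the bpf program side that convention would be nothing more than
(names are whatever prefix the tooling settles on):

    /* plain read-only globals; they land in the prog's .rodata map
     * and tooling can find them via BTF by the name prefix
     */
    const char bpf_metadata_compiler[] = "clang v.12";
    const int bpf_metadata_prog_version = 42;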