Date:   Mon, 7 Oct 2019 10:47:19 -0700
From:   Andrii Nakryiko <andrii.nakryiko@...il.com>
To:     Daniel Borkmann <daniel@...earbox.net>
Cc:     Andrii Nakryiko <andriin@...com>, bpf <bpf@...r.kernel.org>,
        Networking <netdev@...r.kernel.org>,
        Alexei Starovoitov <ast@...com>,
        Quentin Monnet <quentin.monnet@...ronome.com>,
        Kernel Team <kernel-team@...com>
Subject: Re: [PATCH v4 bpf-next 1/3] uapi/bpf: fix helper docs

On Mon, Oct 7, 2019 at 2:43 AM Daniel Borkmann <daniel@...earbox.net> wrote:
>
> On Sun, Oct 06, 2019 at 08:07:36PM -0700, Andrii Nakryiko wrote:
> > Various small fixes to BPF helper documentation comments, enabling
> > automatic header generation with a list of BPF helpers.
> >
> > Signed-off-by: Andrii Nakryiko <andriin@...com>
> > ---
> >  include/uapi/linux/bpf.h       | 32 ++++++++++++++++----------------
> >  tools/include/uapi/linux/bpf.h | 32 ++++++++++++++++----------------
> >  2 files changed, 32 insertions(+), 32 deletions(-)
> >
> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > index 77c6be96d676..a65c3b0c6935 100644
> > --- a/include/uapi/linux/bpf.h
> > +++ b/include/uapi/linux/bpf.h
> > @@ -794,7 +794,7 @@ union bpf_attr {
> >   *           A 64-bit integer containing the current GID and UID, and
> >   *           created as such: *current_gid* **<< 32 \|** *current_uid*.
>
> Overall, I do like the approach that we keep generating the BPF helpers header
> file from this documentation as it really enforces that the signatures here
> must be 100% correct, and given this also lands in the man page it is /always/
> in sync.
>
> > - * int bpf_get_current_comm(char *buf, u32 size_of_buf)
> > + * int bpf_get_current_comm(void *buf, u32 size_of_buf)
>
> You did not elaborate why this needs to change from char * to void *. What is
> the reason? Those rules should probably be documented somewhere, otherwise
> people might keep adding them.

So here and below for __u8 *, the compiler is much stricter about the
**exact** type of pointer a program passes into helpers. E.g., in one
selftest we had a struct like this:

struct s {
    char a[16];
};

struct s s = {};
bpf_get_current_comm(&s.a, sizeof(s.a));

and the compiler was complaining that the program passes char (*)[16]
(a pointer to an array) instead of char *. So instead of forcing all
correct programs to do extra casts, I think it's better to stick to
void * for all the "pointer to a chunk of memory" use cases. With
void *, usability is much better.
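
For illustration, here is a minimal standalone sketch of the problem
(not the actual selftest source; includes, section name and license
boilerplate are assumed):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct s {
    char a[16];
};

SEC("tracepoint/syscalls/sys_enter_getpid")
int comm_example(void *ctx)
{
    struct s s = {};

    /* &s.a has type char (*)[16]; with the old char *buf prototype the
     * compiler warns about an incompatible pointer type, while with
     * void *buf it compiles without a cast */
    bpf_get_current_comm(&s.a, sizeof(s.a));
    return 0;
}

char _license[] SEC("license") = "GPL";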

>
> >   *   Description
> >   *           Copy the **comm** attribute of the current task into *buf* of
> >   *           *size_of_buf*. The **comm** attribute contains the name of
> > @@ -1023,7 +1023,7 @@ union bpf_attr {
> >   *           The realm of the route for the packet associated to *skb*, or 0
> >   *           if none was found.
> >   *
> > - * int bpf_perf_event_output(struct pt_regs *ctx, struct bpf_map *map, u64 flags, void *data, u64 size)
> > + * int bpf_perf_event_output(void *ctx, struct bpf_map *map, u64 flags, void *data, u64 size)
>
> This one here is because we have multiple program types with different input context.

Yes.

>
> >   *   Description
> >   *           Write raw *data* blob into a special BPF perf event held by
> >   *           *map* of type **BPF_MAP_TYPE_PERF_EVENT_ARRAY**. This perf
> > @@ -1068,7 +1068,7 @@ union bpf_attr {
> >   *   Return
> >   *           0 on success, or a negative error in case of failure.
> >   *
> > - * int bpf_skb_load_bytes(const struct sk_buff *skb, u32 offset, void *to, u32 len)
> > + * int bpf_skb_load_bytes(const void *skb, u32 offset, void *to, u32 len)
>
> Changing from struct sk_buff * to void * here, again due to struct sk_reuseport_kern *?

Yep.

>
> I'm wondering whether it would simply be much better to always just use 'void *ctx'
> for everything that is BPF context, as it may just be confusing to people why different
> types are chosen, sometimes leading to buggy drive-by attempts to 'fix' them back into
> struct sk_buff * et al.

I'm neutral on this. In some cases it might be helpful to specify the
expected type of the context, if it's only ever one type, but there
are lots of helpers that accept various contexts, so for consistency
it's better to just have "void *context".
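
To illustrate (a rough sketch; the map definition and section names
are made up, not taken from any selftest): with a void *ctx prototype
the same helper can be called from programs with completely different
context types, no casts needed.

#include <linux/bpf.h>
#include <linux/ptrace.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
    __uint(key_size, sizeof(int));
    __uint(value_size, sizeof(int));
} events SEC(".maps");

SEC("kprobe/do_sys_open")
int from_kprobe(struct pt_regs *ctx)
{
    int v = 1;

    /* context here is struct pt_regs * ... */
    bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU, &v, sizeof(v));
    return 0;
}

SEC("tracepoint/syscalls/sys_enter_openat")
int from_tracepoint(void *ctx)
{
    int v = 2;

    /* ... and a completely different type here; void *ctx accepts both */
    bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU, &v, sizeof(v));
    return 0;
}

char _license[] SEC("license") = "GPL";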

>
> >   *   Description
> >   *           This helper was provided as an easy way to load data from a
> >   *           packet. It can be used to load *len* bytes from *offset* from
> > @@ -1085,7 +1085,7 @@ union bpf_attr {
> >   *   Return
> >   *           0 on success, or a negative error in case of failure.
> >   *
> > - * int bpf_get_stackid(struct pt_regs *ctx, struct bpf_map *map, u64 flags)
> > + * int bpf_get_stackid(void *ctx, struct bpf_map *map, u64 flags)
> >   *   Description
> >   *           Walk a user or a kernel stack and return its id. To achieve
> >   *           this, the helper needs *ctx*, which is a pointer to the context
> > @@ -1154,7 +1154,7 @@ union bpf_attr {
> >   *           The checksum result, or a negative error code in case of
> >   *           failure.
> >   *
> > - * int bpf_skb_get_tunnel_opt(struct sk_buff *skb, u8 *opt, u32 size)
> > + * int bpf_skb_get_tunnel_opt(struct sk_buff *skb, void *opt, u32 size)
>
> Same here and in more places in this patch, why u8 * -> void * and the like?

See above: the goal is to make the compiler less picky about pointers
to a memory buffer.
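
As a concrete (made-up) example for the tunnel-opt case: option data
is usually read into a program-defined struct, so a u8 * prototype
would force a cast at every such call site, while void * accepts the
struct pointer directly.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* hypothetical option layout, just for illustration */
struct tun_opts {
    __u8 data[8];
};

SEC("classifier")
int read_tun_opts(struct __sk_buff *skb)
{
    struct tun_opts opts = {};

    /* with u8 *opt this call would need an explicit (__u8 *)&opts cast */
    bpf_skb_get_tunnel_opt(skb, &opts, sizeof(opts));
    return 0;
}

char _license[] SEC("license") = "GPL";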

>
> >   *   Description
> >   *           Retrieve tunnel options metadata for the packet associated to
> >   *           *skb*, and store the raw tunnel option data to the buffer *opt*
> [...]
