Message-ID: <CAPhsuW5mZQ-N7RCndxP0RNi669RU5Tbu-Uu0M-KW2-mPYZbbng@mail.gmail.com>
Date:   Mon, 7 Mar 2022 16:56:57 -0800
From:   Song Liu <song@...nel.org>
To:     Benjamin Tissoires <benjamin.tissoires@...hat.com>
Cc:     Greg KH <gregkh@...uxfoundation.org>,
        Jiri Kosina <jikos@...nel.org>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Andrii Nakryiko <andrii@...nel.org>,
        Martin KaFai Lau <kafai@...com>,
        Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
        John Fastabend <john.fastabend@...il.com>,
        KP Singh <kpsingh@...nel.org>, Shuah Khan <shuah@...nel.org>,
        Dave Marchevsky <davemarchevsky@...com>,
        Joe Stringer <joe@...ium.io>,
        Tero Kristo <tero.kristo@...ux.intel.com>,
        open list <linux-kernel@...r.kernel.org>,
        "open list:HID CORE LAYER" <linux-input@...r.kernel.org>,
        Networking <netdev@...r.kernel.org>, bpf <bpf@...r.kernel.org>,
        linux-kselftest@...r.kernel.org
Subject: Re: [PATCH bpf-next v2 02/28] bpf: introduce hid program type

On Mon, Mar 7, 2022 at 10:39 AM Benjamin Tissoires
<benjamin.tissoires@...hat.com> wrote:
>
> On Sat, Mar 5, 2022 at 1:03 AM Song Liu <song@...nel.org> wrote:
> >
> > On Fri, Mar 4, 2022 at 9:31 AM Benjamin Tissoires
> > <benjamin.tissoires@...hat.com> wrote:
> > >
> > > HID is a protocol that could benefit from using BPF too.
> >
> > [...]
> >
> > > +#include <linux/list.h>
> > > +#include <linux/slab.h>
> > > +
> > > +struct bpf_prog;
> > > +struct bpf_prog_array;
> > > +struct hid_device;
> > > +
> > > +enum bpf_hid_attach_type {
> > > +       BPF_HID_ATTACH_INVALID = -1,
> > > +       BPF_HID_ATTACH_DEVICE_EVENT = 0,
> > > +       MAX_BPF_HID_ATTACH_TYPE
> >
> > Is it typical to have different BPF programs for different attach types?
> > Otherwise, if different types may run similar BPF programs, maybe
> > we can pass the type as an argument to the program, so one program
> > can be shared among different types?
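> > Something like this, for example (hypothetical, assuming the attach
> > type is exposed in the context):
> >
> >     SEC("hid")
> >     int hid_prog(struct hid_bpf_ctx *ctx)
> >     {
> >             switch (ctx->type) {
> >             case HID_BPF_DEVICE_EVENT:
> >                     /* handle the device IRQ data */
> >                     break;
> >             default:
> >                     break;
> >             }
> >             return 0;
> >     }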
>
> Not sure I am entirely following you, but I consider the various
> attach types to be quite different, so you cannot really reuse the
> same BPF program with two different attach types.
>
> In my view, we have 4 attach types:
> - BPF_HID_ATTACH_DEVICE_EVENT: called whenever we receive an IRQ from
> the given device (so this is a net-like event stream)
> - BPF_HID_ATTACH_RDESC_FIXUP: there can be only one program of this
> type, and it is called to change the device capabilities, so you
> cannot reuse the other programs for this one
> - BPF_HID_ATTACH_USER_EVENT: called explicitly by the userspace
> process owning the program. There we can use functions that sleep (we
> are not in IRQ context), so this is also fundamentally different from
> the three others.
> - BPF_HID_ATTACH_DRIVER_EVENT: whenever the driver gets called into,
> a BPF program is run. This can be suspend/resume, or even a specific
> request to the device (changing a feature on the device or getting
> its current state). Again, IMO fundamentally different from the
> others.
>
> So I'm open to suggestions, but if we can keep the userspace API
> defined through different SEC() names in libbpf, that would be best.
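> As an illustration only (the SEC() names are not a final uapi), I am
> thinking of something like:
>
>     SEC("hid/device_event")
>     int handle_device_event(struct hid_bpf_ctx *ctx)
>     {
>             /* runs on each incoming report, in IRQ context */
>             return 0;
>     }
>
>     SEC("hid/user_event")
>     int handle_user_event(struct hid_bpf_ctx *ctx)
>     {
>             /* called from the owning userspace process; may sleep */
>             return 0;
>     }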

Thanks for this information. Different attach_types sound right for the use
case.

>
> >
> > [...]
> >
> > > +struct hid_device;
> > > +
> > > +enum hid_bpf_event {
> > > +       HID_BPF_UNDEF = 0,
> > > +       HID_BPF_DEVICE_EVENT,           /* when attach type is BPF_HID_DEVICE_EVENT */
> > > +};
> > > +
> > > +struct hid_bpf_ctx {
> > > +       enum hid_bpf_event type;        /* read-only */
> > > +       __u16 allocated_size;           /* the allocated size of data below (RO) */
> >
> > There is a (2-byte?) hole here.
> >
> > > +       struct hid_device *hdev;        /* read-only */
> > > +
> > > +       __u16 size;                     /* used size in data (RW) */
> > > +       __u8 data[];                    /* data buffer (RW) */
> > > +};
> >
> > Do we really need hid_bpf_ctx in uapi? Maybe we can just use it
> > from vmlinux.h?
>
> I gave this context some more thought today, and I think I am getting
> to the limit of what I understand.
>
> My first worry is that the way I wrote it there, with a
> variable-length data field, is not forward compatible. Unless BTF and
> CO-RE are doing some magic, this will bite me in the long run IMO.
>
> But then, you are talking about not using uapi, and I am starting to
> wonder: am I doing things correctly?
>
> To solve my first issue (and the weird API I had to introduce in
> bpf_hid_get/set_data), I came up with the following:
> instead of exporting the data directly in the context, I could create
> a helper bpf_hid_get_data_buf(ctx, const uint size) that returns a
> RET_PTR_TO_ALLOC_MEM_OR_NULL in the same way bpf_ringbuf_reserve()
> does.
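> Roughly like this (the helper name and semantics are still
> hypothetical):
>
>     __u8 *buf;
>
>     buf = bpf_hid_get_data_buf(ctx, 64);
>     if (!buf)
>             return 0; /* not enough room, like bpf_ringbuf_reserve() */
>
>     buf[0] = <something>; /* accesses verified against the requested size */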
>
> This way, I can directly access the fields within the bpf program
> without having to worry about the size.
>
> But now, I am wondering whether the uapi I defined here is correct
> with respect to the way CO-RE works.
>
> My goal is for HID-BPF programs to be CO-RE compatible, so they do
> not have to be recompiled for each underlying kernel.
>
> I cannot work out right now whether I need to add some other BTF
> helpers, in the same way accesses to struct xdp_md are converted into
> accesses to struct xdp_buff, or whether defining a forward-compatible
> struct hid_bpf_ctx is enough.
> As far as I understand, .convert_ctx_access allows exporting a stable
> uapi to the bpf prog users, with the verifier doing the conversion
> between the structs for me. But is this really required for all the
> BPF programs if we want them to be CO-RE?
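> If I read xdp_convert_ctx_access() correctly, that would mean
> something like this (struct hid_bpf_kern being a hypothetical
> kernel-side context, with only a pointer-like .data field shown):
>
>     static u32 hid_convert_ctx_access(enum bpf_access_type type,
>                                       const struct bpf_insn *si,
>                                       struct bpf_insn *insn_buf,
>                                       struct bpf_prog *prog,
>                                       u32 *target_size)
>     {
>             struct bpf_insn *insn = insn_buf;
>
>             switch (si->off) {
>             case offsetof(struct hid_bpf_ctx, data):
>                     /* rewrite the uapi field access into a load
>                      * from the kernel-side struct
>                      */
>                     *insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct hid_bpf_kern, data),
>                                           si->dst_reg, si->src_reg,
>                                           offsetof(struct hid_bpf_kern, data));
>                     break;
>             }
>
>             return insn - insn_buf;
>     }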
>
> Also, I am starting to wonder if I should not hide some fields in the
> context from the users. The .data field could be a pointer, only
> accessed through the helper I mentioned above. This would be forward
> compatible, and would also allow forwarding whatever memory is
> available in the kernel to the BPF program. This way I can skip the
> memcpy part and work directly with the incoming DMA data buffer from
> the IRQ.
>
> But is it best practice to do such a thing?

I think .convert_ctx_access is the way to go if we want to access the data
buffer without memcpy. I am not sure how much work is needed to make
it compatible with CO-RE though.

To make sure I understand the case, do we want something like

int bpf_prog(struct hid_bpf_ctx *ctx)
{
    /* bounds check: make sure n < ctx->size */
    x = ctx->data[n];             /* read data */
    ctx->data[n] = <something>;   /* write data */
    ctx->size = <something <= n>; /* change data size */
    return 0;
}

We also need it to be CO-RE compatible, so that we can later modify
hid_bpf_ctx by inserting more members before data.
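(If the context ends up being defined via vmlinux.h rather than a
fixed uapi header, CO-RE would relocate the field offsets for us,
e.g.:

    /* the offset of 'size' is relocated at load time via BTF, so
     * the program survives members being inserted before it
     */
    __u16 sz = ctx->size;

but I am not sure how that interacts with .convert_ctx_access.)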

Is this accurate?

Song
