Message-ID: <CAJ8uoz3_N4JZZtJpWAsRBSLHv0tm4vtC4RT-r-USN0WhudMbig@mail.gmail.com>
Date:   Fri, 8 Nov 2019 20:17:53 +0100
From:   Magnus Karlsson <magnus.karlsson@...il.com>
To:     William Tu <u9012063@...il.com>
Cc:     Magnus Karlsson <magnus.karlsson@...el.com>,
        Björn Töpel <bjorn.topel@...el.com>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Network Development <netdev@...r.kernel.org>,
        Jonathan Lemon <jonathan.lemon@...il.com>,
        bpf <bpf@...r.kernel.org>
Subject: Re: [PATCH bpf-next 1/5] libbpf: support XDP_SHARED_UMEM with
 external XDP program

On Fri, Nov 8, 2019 at 7:43 PM William Tu <u9012063@...il.com> wrote:
>
> On Fri, Nov 08, 2019 at 07:19:18PM +0100, Magnus Karlsson wrote:
> > On Fri, Nov 8, 2019 at 7:03 PM William Tu <u9012063@...il.com> wrote:
> > >
> > > Hi Magnus,
> > >
> > > Thanks for the patch.
> > >
> > > On Thu, Nov 07, 2019 at 06:47:36PM +0100, Magnus Karlsson wrote:
> > > > Add support in libbpf to create multiple sockets that share a single
> > > > umem. Note that an external XDP program needs to be supplied that
> > > > routes the incoming traffic to the desired sockets. So you need to
> > > > supply the libbpf_flag XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD and load
> > > > your own XDP program.
> > > >
> > > > Signed-off-by: Magnus Karlsson <magnus.karlsson@...el.com>
> > > > ---
> > > >  tools/lib/bpf/xsk.c | 27 +++++++++++++++++----------
> > > >  1 file changed, 17 insertions(+), 10 deletions(-)
> > > >
> > > > diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
> > > > index 86c1b61..8ebd810 100644
> > > > --- a/tools/lib/bpf/xsk.c
> > > > +++ b/tools/lib/bpf/xsk.c
> > > > @@ -586,15 +586,21 @@ int xsk_socket__create(struct xsk_socket **xsk_ptr, const char *ifname,
> > > >       if (!umem || !xsk_ptr || !rx || !tx)
> > > >               return -EFAULT;
> > > >
> > > > -     if (umem->refcount) {
> > > > -             pr_warn("Error: shared umems not supported by libbpf.\n");
> > > > -             return -EBUSY;
> > > > -     }
> > > > -
> > > >       xsk = calloc(1, sizeof(*xsk));
> > > >       if (!xsk)
> > > >               return -ENOMEM;
> > > >
> > > > +     err = xsk_set_xdp_socket_config(&xsk->config, usr_config);
> > > > +     if (err)
> > > > +             goto out_xsk_alloc;
> > > > +
> > > > +     if (umem->refcount &&
> > > > +         !(xsk->config.libbpf_flags & XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD)) {
> > > > +             pr_warn("Error: shared umems not supported by libbpf supplied XDP program.\n");
> > >
> > > Why can't we use the existing default one in libbpf?
> > > If users don't want to redistribute packet to different queue,
> > > then they can still use the libbpf default one.
> >
> > Is there any point in creating two or more sockets tied to the same
> > umem and directing all traffic to just one socket? IMHO, I believe
>
> When using built-in XDP, isn't the traffic being directed to its
> own xsk on its queue? (so not just one xsk socket)
>
> So using built-in XDP, for example queue1/xsk1 and queue2/xsk2
> sharing one umem, both xsk1 and xsk2 receive packets from their own queue.

With the XDP_SHARED_UMEM flag this is not allowed. In your example,
queue1/xsk1 and queue1/xsk2 would be allowed. All sockets need to be
tied to the same queue id if they share a umem. In this case an XDP
program has to decide how to distribute the packets since they all
arrive on the same queue.
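
To illustrate (this is not part of the patch, and the map, section and
function names below are made up), a user-supplied XDP program for this
case could look roughly like the following. It sprays packets across
the sockets registered in an XSKMAP; a real program would more likely
hash on packet headers so that flows stick to one socket:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#define MAX_SOCKS 4

struct {
	__uint(type, BPF_MAP_TYPE_XSKMAP);
	__uint(max_entries, MAX_SOCKS);
	__uint(key_size, sizeof(int));
	__uint(value_size, sizeof(int));
} xsks_map SEC(".maps");

SEC("xdp")
int xdp_shared_umem_prog(struct xdp_md *ctx)
{
	/* Pick one of the sockets sharing the umem on this queue.
	 * Random spray keeps the example short; hashing on the packet
	 * contents would normally be preferable. */
	__u32 index = bpf_get_prandom_u32() % MAX_SOCKS;

	/* Redirect to the AF_XDP socket at @index; with flags 0 this
	 * returns XDP_ABORTED if the map slot is empty. */
	return bpf_redirect_map(&xsks_map, index, 0);
}

char _license[] SEC("license") = "GPL";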

If you want queue1/xsk1 and queue2/xsk2 you need separate umems, since
it would otherwise violate the SPSC requirement of the rings. Or we
would have to implement MPSC and SPMC queues for this configuration.

> > that most users in this case would want to distribute the packets over
> > the sockets in some way. I also think that users might be unpleasantly
> > surprised if they create multiple sockets and all packets only get to
> > a single socket because libbpf loaded an XDP program that makes little
> > sense in the XDP_SHARED_UMEM case. If we force them to supply an XDP
>
> Do I misunderstand the code?
> I looked at xsk_setup_xdp_prog, xsk_load_xdp_prog, and xsk_set_bpf_maps.
> The built-in prog will distribute packets to different xsk sockets,
> not a single socket.

True, but only for the case above (queue1/xsk1 and queue2/xsk2) where
they have separate umems. For the queue1/xsk1 and queue1/xsk2 case, it
would send everything to xsk1.
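
And for reference, the user space side could look something like the
sketch below (not from the patch; only xsk_socket__create(),
xsk_socket__fd() and XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD are real
libbpf names, the rest is illustrative). Both sockets are created
against the same queue id and the same umem, and the second create is
bound with XDP_SHARED_UMEM internally since umem->refcount is then
greater than one. The application then loads its own XDP program and
inserts the sockets' fds (from xsk_socket__fd()) into its XSKMAP:

#include <bpf/xsk.h>

/* Sketch only: create two AF_XDP sockets on queue 1 of "eth0" that
 * share @umem. Error handling is minimal. */
static int create_shared_socks(struct xsk_umem *umem,
			       struct xsk_socket **xsk1,
			       struct xsk_socket **xsk2,
			       struct xsk_ring_cons *rx1,
			       struct xsk_ring_cons *rx2,
			       struct xsk_ring_prod *tx1,
			       struct xsk_ring_prod *tx2)
{
	struct xsk_socket_config cfg = {
		.rx_size = XSK_RING_CONS__DEFAULT_NUM_DESCS,
		.tx_size = XSK_RING_PROD__DEFAULT_NUM_DESCS,
		/* Inhibit the built-in program; we load our own. */
		.libbpf_flags = XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD,
	};
	int err;

	err = xsk_socket__create(xsk1, "eth0", 1, umem, rx1, tx1, &cfg);
	if (err)
		return err;

	/* Same ifname/queue_id and the same umem as the first socket. */
	return xsk_socket__create(xsk2, "eth0", 1, umem, rx2, tx2, &cfg);
}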

/Magnus

> > program, they need to make this decision. I also wanted to extend the
> > sample with an explicit user loaded XDP program as an example of how
> > to do this. What do you think?
>
> Yes, I like it. Like the previous version, having xdpsock_kern.c as an
> example for people to follow.
>
> William
>
> >
> > /Magnus
> >
> > > William
> > > > +             err = -EBUSY;
> > > > +             goto out_xsk_alloc;
> > > > +     }
> > > > +
> > > >       if (umem->refcount++ > 0) {
> > > >               xsk->fd = socket(AF_XDP, SOCK_RAW, 0);
> > > >               if (xsk->fd < 0) {
> > > > @@ -616,10 +622,6 @@ int xsk_socket__create(struct xsk_socket **xsk_ptr, const char *ifname,
> > > >       memcpy(xsk->ifname, ifname, IFNAMSIZ - 1);
> > > >       xsk->ifname[IFNAMSIZ - 1] = '\0';
> > > >
> > > > -     err = xsk_set_xdp_socket_config(&xsk->config, usr_config);
> > > > -     if (err)
> > > > -             goto out_socket;
> > > > -
> > > >       if (rx) {
> > > >               err = setsockopt(xsk->fd, SOL_XDP, XDP_RX_RING,
> > > >                                &xsk->config.rx_size,
> > > > @@ -687,7 +689,12 @@ int xsk_socket__create(struct xsk_socket **xsk_ptr, const char *ifname,
> > > >       sxdp.sxdp_family = PF_XDP;
> > > >       sxdp.sxdp_ifindex = xsk->ifindex;
> > > >       sxdp.sxdp_queue_id = xsk->queue_id;
> > > > -     sxdp.sxdp_flags = xsk->config.bind_flags;
> > > > +     if (umem->refcount > 1) {
> > > > +             sxdp.sxdp_flags = XDP_SHARED_UMEM;
> > > > +             sxdp.sxdp_shared_umem_fd = umem->fd;
> > > > +     } else {
> > > > +             sxdp.sxdp_flags = xsk->config.bind_flags;
> > > > +     }
> > > >
> > > >       err = bind(xsk->fd, (struct sockaddr *)&sxdp, sizeof(sxdp));
> > > >       if (err) {
> > > > --
> > > > 2.7.4
> > > >
