Date:   Tue, 24 Apr 2018 10:08:23 +0200
From:   Magnus Karlsson <magnus.karlsson@...il.com>
To:     Willem de Bruijn <willemdebruijn.kernel@...il.com>
Cc:     "Michael S. Tsirkin" <mst@...hat.com>,
        Björn Töpel <bjorn.topel@...il.com>,
        "Karlsson, Magnus" <magnus.karlsson@...el.com>,
        Alexander Duyck <alexander.h.duyck@...el.com>,
        Alexander Duyck <alexander.duyck@...il.com>,
        John Fastabend <john.fastabend@...il.com>,
        Alexei Starovoitov <ast@...com>,
        Jesper Dangaard Brouer <brouer@...hat.com>,
        Daniel Borkmann <daniel@...earbox.net>,
        Network Development <netdev@...r.kernel.org>,
        michael.lundkvist@...csson.com,
        "Brandeburg, Jesse" <jesse.brandeburg@...el.com>,
        "Singhai, Anjali" <anjali.singhai@...el.com>,
        "Zhang, Qi Z" <qi.z.zhang@...el.com>
Subject: Re: [PATCH bpf-next 03/15] xsk: add umem fill queue support and mmap

On Tue, Apr 24, 2018 at 1:59 AM, Willem de Bruijn
<willemdebruijn.kernel@...il.com> wrote:
> On Mon, Apr 23, 2018 at 7:21 PM, Michael S. Tsirkin <mst@...hat.com> wrote:
>> On Mon, Apr 23, 2018 at 03:56:07PM +0200, Björn Töpel wrote:
>>> From: Magnus Karlsson <magnus.karlsson@...el.com>
>>>
>>> Here, we add another setsockopt for registered user memory (umem)
>>> called XDP_UMEM_FILL_RING. Using this socket option, the process can
>>> ask the kernel to allocate a queue (ring buffer) and also mmap it
>>> (XDP_UMEM_PGOFF_FILL_RING) into the process.
>>>
>>> The queue is used to explicitly pass ownership of umem frames from the
>>> user process to the kernel. These frames will in a later patch be
>>> filled in with Rx packet data by the kernel.
>>>
>>> Signed-off-by: Magnus Karlsson <magnus.karlsson@...el.com>
>>> ---
>>>  include/uapi/linux/if_xdp.h | 15 +++++++++++
>>>  net/xdp/Makefile            |  2 +-
>>>  net/xdp/xdp_umem.c          |  5 ++++
>>>  net/xdp/xdp_umem.h          |  2 ++
>>>  net/xdp/xsk.c               | 62 ++++++++++++++++++++++++++++++++++++++++++++-
>>>  net/xdp/xsk_queue.c         | 58 ++++++++++++++++++++++++++++++++++++++++++
>>>  net/xdp/xsk_queue.h         | 38 +++++++++++++++++++++++++++
>>>  7 files changed, 180 insertions(+), 2 deletions(-)
>>>  create mode 100644 net/xdp/xsk_queue.c
>>>  create mode 100644 net/xdp/xsk_queue.h
>>>
>>> diff --git a/include/uapi/linux/if_xdp.h b/include/uapi/linux/if_xdp.h
>>> index 41252135a0fe..975661e1baca 100644
>>> --- a/include/uapi/linux/if_xdp.h
>>> +++ b/include/uapi/linux/if_xdp.h
>>> @@ -23,6 +23,7 @@
>>>
>>>  /* XDP socket options */
>>>  #define XDP_UMEM_REG                 3
>>> +#define XDP_UMEM_FILL_RING           4
>>>
>>>  struct xdp_umem_reg {
>>>       __u64 addr; /* Start of packet data area */
>>> @@ -31,4 +32,18 @@ struct xdp_umem_reg {
>>>       __u32 frame_headroom; /* Frame head room */
>>>  };
>>>
>>> +/* Pgoff for mmaping the rings */
>>> +#define XDP_UMEM_PGOFF_FILL_RING     0x100000000
>>> +
>>> +struct xdp_ring {
>>> +     __u32 producer __attribute__((aligned(64)));
>>> +     __u32 consumer __attribute__((aligned(64)));
>>> +};
>>
>> Why 64? And do you still need these guys in uapi?
>
> I was just about to ask the same. You mean cacheline_aligned?

Yes, I would like to have these cache-line aligned. How can I
accomplish this in a uapi?
I put a note around this in the cover letter:

* How to deal with cache alignment for uapi when different
  architectures can have different cache line sizes? We have just
  aligned it to 64 bytes for now, which works for many popular
  architectures, but not all. Please advise.

>
>>> +static int xsk_mmap(struct file *file, struct socket *sock,
>>> +                 struct vm_area_struct *vma)
>>> +{
>>> +     unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
>>> +     unsigned long size = vma->vm_end - vma->vm_start;
>>> +     struct xdp_sock *xs = xdp_sk(sock->sk);
>>> +     struct xsk_queue *q;
>>> +     unsigned long pfn;
>>> +     struct page *qpg;
>>> +
>>> +     if (!xs->umem)
>>> +             return -EINVAL;
>>> +
>>> +     if (offset == XDP_UMEM_PGOFF_FILL_RING)
>>> +             q = xs->umem->fq;
>>> +     else
>>> +             return -EINVAL;
>>> +
>>> +     qpg = virt_to_head_page(q->ring);
>
> Is it assured that q is initialized with a call to setsockopt
> XDP_UMEM_FILL_RING before the call the mmap?

Unfortunately not, so this is a bug: nothing guarantees that the fill
ring exists when mmap is called. Definitely a case in point for running
syzkaller, as you suggest below.

> In general, with such an extensive new API, it might be worthwhile to
> run syzkaller locally on a kernel with these patches. It is pretty
> easy to set up (https://github.com/google/syzkaller/blob/master/docs/linux/setup.md),
> though it also needs to be taught about any new APIs.

Good idea. Will set this up and have it torture the API.

Thanks: Magnus
