Message-ID: <CAF=yD-+m5+5sKvo2Z1YOOX+zFKNYLVFqjq6+b4wpP6dTX=cyEA@mail.gmail.com>
Date: Mon, 23 Apr 2018 19:59:00 -0400
From: Willem de Bruijn <willemdebruijn.kernel@...il.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: Björn Töpel <bjorn.topel@...il.com>,
"Karlsson, Magnus" <magnus.karlsson@...el.com>,
Alexander Duyck <alexander.h.duyck@...el.com>,
Alexander Duyck <alexander.duyck@...il.com>,
John Fastabend <john.fastabend@...il.com>,
Alexei Starovoitov <ast@...com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Daniel Borkmann <daniel@...earbox.net>,
Network Development <netdev@...r.kernel.org>,
michael.lundkvist@...csson.com,
"Brandeburg, Jesse" <jesse.brandeburg@...el.com>,
"Singhai, Anjali" <anjali.singhai@...el.com>,
"Zhang, Qi Z" <qi.z.zhang@...el.com>
Subject: Re: [PATCH bpf-next 03/15] xsk: add umem fill queue support and mmap
On Mon, Apr 23, 2018 at 7:21 PM, Michael S. Tsirkin <mst@...hat.com> wrote:
> On Mon, Apr 23, 2018 at 03:56:07PM +0200, Björn Töpel wrote:
>> From: Magnus Karlsson <magnus.karlsson@...el.com>
>>
>> Here, we add another setsockopt for registered user memory (umem)
>> called XDP_UMEM_FILL_RING. Using this socket option, the process can
>> ask the kernel to allocate a queue (ring buffer) and also mmap it
>> (XDP_UMEM_PGOFF_FILL_RING) into its address space.
>>
>> The queue is used to explicitly pass ownership of umem frames from the
>> user process to the kernel. In a later patch, the kernel will fill
>> these frames with Rx packet data.
>>
>> Signed-off-by: Magnus Karlsson <magnus.karlsson@...el.com>
>> ---
>> include/uapi/linux/if_xdp.h | 15 +++++++++++
>> net/xdp/Makefile | 2 +-
>> net/xdp/xdp_umem.c | 5 ++++
>> net/xdp/xdp_umem.h | 2 ++
>> net/xdp/xsk.c | 62 ++++++++++++++++++++++++++++++++++++++++++++-
>> net/xdp/xsk_queue.c | 58 ++++++++++++++++++++++++++++++++++++++++++
>> net/xdp/xsk_queue.h | 38 +++++++++++++++++++++++++++
>> 7 files changed, 180 insertions(+), 2 deletions(-)
>> create mode 100644 net/xdp/xsk_queue.c
>> create mode 100644 net/xdp/xsk_queue.h
>>
>> diff --git a/include/uapi/linux/if_xdp.h b/include/uapi/linux/if_xdp.h
>> index 41252135a0fe..975661e1baca 100644
>> --- a/include/uapi/linux/if_xdp.h
>> +++ b/include/uapi/linux/if_xdp.h
>> @@ -23,6 +23,7 @@
>>
>> /* XDP socket options */
>> #define XDP_UMEM_REG 3
>> +#define XDP_UMEM_FILL_RING 4
>>
>> struct xdp_umem_reg {
>> __u64 addr; /* Start of packet data area */
>> @@ -31,4 +32,18 @@ struct xdp_umem_reg {
>> __u32 frame_headroom; /* Frame head room */
>> };
>>
>> +/* Pgoff for mmaping the rings */
>> +#define XDP_UMEM_PGOFF_FILL_RING 0x100000000
>> +
>> +struct xdp_ring {
>> + __u32 producer __attribute__((aligned(64)));
>> + __u32 consumer __attribute__((aligned(64)));
>> +};
>
> Why 64? And do you still need these guys in uapi?
I was just about to ask the same. You mean ____cacheline_aligned?
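For context, 64 here is the common x86 cacheline size: giving producer and consumer each their own cacheline avoids false sharing between the producing side and the consuming side of the ring. A minimal userspace sketch of the layout (the struct mirrors the patch; the check function is mine, for illustration only):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Userspace mirror of the proposed uapi struct: each index gets its
 * own 64-byte-aligned slot, so updates to producer never invalidate
 * the cacheline holding consumer, and vice versa.
 */
struct xdp_ring {
	uint32_t producer __attribute__((aligned(64)));
	uint32_t consumer __attribute__((aligned(64)));
};

/* Returns 1 if the two indices land on distinct cachelines. */
static int xdp_ring_layout_ok(void)
{
	return offsetof(struct xdp_ring, producer) == 0 &&
	       offsetof(struct xdp_ring, consumer) == 64 &&
	       sizeof(struct xdp_ring) == 128;
}
```

Note that hardcoding 64 in a uapi header bakes in an x86 assumption; the kernel-internal ____cacheline_aligned would pick the right value per-arch, but is not available to userspace.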
>> +static int xsk_mmap(struct file *file, struct socket *sock,
>> + struct vm_area_struct *vma)
>> +{
>> + unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
>> + unsigned long size = vma->vm_end - vma->vm_start;
>> + struct xdp_sock *xs = xdp_sk(sock->sk);
>> + struct xsk_queue *q;
>> + unsigned long pfn;
>> + struct page *qpg;
>> +
>> + if (!xs->umem)
>> + return -EINVAL;
>> +
>> + if (offset == XDP_UMEM_PGOFF_FILL_RING)
>> + q = xs->umem->fq;
>> + else
>> + return -EINVAL;
>> +
>> + qpg = virt_to_head_page(q->ring);
Is it assured that q is initialized with a call to setsockopt
XDP_UMEM_FILL_RING before the call to mmap? Otherwise this
dereferences q->ring through a NULL q.
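To illustrate the concern with a userspace model: if mmap() arrives before the XDP_UMEM_FILL_RING setsockopt has allocated the queue, xs->umem->fq is still NULL. A sketch of the offset-to-queue lookup with the guard one would expect (struct and function names here are stand-ins, not the patch's):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Stand-in types modelling only the fields relevant to the lookup. */
struct xsk_queue { void *ring; };
struct xdp_umem  { struct xsk_queue *fq; };

#define MODEL_PGOFF_FILL_RING 0x100000000ULL

/* Model of the lookup in xsk_mmap(): without the NULL check on q,
 * an mmap() issued before the fill-ring setsockopt would go on to
 * dereference q->ring through a NULL pointer.
 */
static int model_mmap_lookup(struct xdp_umem *umem,
			     unsigned long long offset,
			     struct xsk_queue **qp)
{
	struct xsk_queue *q;

	if (!umem)
		return -EINVAL;

	if (offset == MODEL_PGOFF_FILL_RING)
		q = umem->fq;
	else
		return -EINVAL;

	if (!q)		/* the guard in question */
		return -EINVAL;

	*qp = q;
	return 0;
}
```

With the guard, the premature mmap() fails cleanly with -EINVAL instead of crashing; after the setsockopt populates fq, the same call succeeds.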
In general, with such an extensive new API, it might be worthwhile to
run syzkaller locally on a kernel with these patches applied. It is
pretty easy to set up
(https://github.com/google/syzkaller/blob/master/docs/linux/setup.md),
though it also needs to be taught about any new APIs.