Date:   Fri, 28 Oct 2016 13:11:01 -0400 (EDT)
From:   David Miller <davem@...emloft.net>
To:     john.fastabend@...il.com
Cc:     alexander.duyck@...il.com, mst@...hat.com, brouer@...hat.com,
        shrijeet@...il.com, tom@...bertland.com, netdev@...r.kernel.org,
        shm@...ulusnetworks.com, roopa@...ulusnetworks.com,
        nikolay@...ulusnetworks.com
Subject: Re: [PATCH net-next RFC WIP] Patch for XDP support for virtio_net

From: John Fastabend <john.fastabend@...il.com>
Date: Fri, 28 Oct 2016 08:56:35 -0700

> On 16-10-27 07:10 PM, David Miller wrote:
>> From: Alexander Duyck <alexander.duyck@...il.com>
>> Date: Thu, 27 Oct 2016 18:43:59 -0700
>> 
>>> On Thu, Oct 27, 2016 at 6:35 PM, David Miller <davem@...emloft.net> wrote:
>>>> From: "Michael S. Tsirkin" <mst@...hat.com>
>>>> Date: Fri, 28 Oct 2016 01:25:48 +0300
>>>>
>>>>> On Thu, Oct 27, 2016 at 05:42:18PM -0400, David Miller wrote:
>>>>>> From: "Michael S. Tsirkin" <mst@...hat.com>
>>>>>> Date: Fri, 28 Oct 2016 00:30:35 +0300
>>>>>>
>>>>>>> Something I'd like to understand is how XDP addresses the
>>>>>>> problem that 100-byte packets now consume 4K of memory each.
>>>>>>
>>>>>> Via page pools.  We're going to make a generic one, but right now
>>>>>> each and every driver implements a quick list of pages to allocate
>>>>>> from (and thus avoid the DMA map/unmap overhead, etc.)
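A rough sketch of the per-driver recycle list being described --
simplified and hypothetical, not any one driver's actual code:

  #include <linux/gfp.h>
  #include <linux/mm.h>

  #define POOL_SIZE 256	/* arbitrary pool depth for illustration */

  /* Per-RX-ring page pool: pages are DMA-mapped once and then
   * recycled, instead of being freed and remapped per packet.
   */
  struct rx_page_pool {
	struct page	*pages[POOL_SIZE];	/* recycled pages */
	unsigned int	count;			/* pages available */
  };

  static struct page *pool_get_page(struct rx_page_pool *pool)
  {
	if (pool->count)
		return pool->pages[--pool->count];
	/* Slow path: allocate (and DMA-map, elided here) a fresh page. */
	return alloc_page(GFP_ATOMIC);
  }

  static void pool_put_page(struct rx_page_pool *pool, struct page *page)
  {
	if (pool->count < POOL_SIZE) {
		pool->pages[pool->count++] = page;	/* keep the mapping */
		return;
	}
	put_page(page);		/* pool full: unmap (elided) and free */
  }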
>>>>>
>>>>> So to clarify, ATM virtio doesn't attempt to avoid DMA map/unmap,
>>>>> so there should be no issue with that even when using sub-page
>>>>> regions, assuming the DMA APIs support sub-page map/unmap correctly.
>>>>
>>>> That's not what I said.
>>>>
>>>> The page pools are meant to address the performance degradation
>>>> from moving to one packet per page for the sake of XDP's
>>>> requirements.
>>>>
>>>> You still need to have one packet per page for correct XDP operation
>>>> whether you do page pools or not, and whether you have DMA mapping
>>>> (or its equivalent virtualization operation) or not.
>>>
>>> Maybe I am missing something here, but why do you need to limit things
>>> to one packet per page for correct XDP operation?  Most of the drivers
>>> out there now store at least 2 packets per page, and with the DMA API
>>> fixes I am working on there should be no issue with changing the
>>> contents of those pages, since we won't invalidate or overwrite the
>>> data after the DMA buffer has been synchronized for use by the CPU.
>> 
>> Because with SKBs you can share the page with other packets.
>> 
>> With XDP you simply cannot.
>> 
>> It's software semantics that are the issue.  SKB frag list pages
>> are read only, XDP packets are writable.
>> 
>> This has nothing to do with "writability" of the pages w.r.t. DMA
>> mappings or CPU mappings.
>> 
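For reference, the page sharing in question looks roughly like this
in a typical driver RX path today (a simplified, hypothetical sketch):

  /* Two 2K RX buffers carved out of one 4K page: each skb takes a
   * reference, so the page is freed only when both are consumed.
   * Once shared like this, neither packet's owner may write to the
   * page -- frag pages are effectively read-only.
   */
  page_ref_inc(page);			/* second reference */
  skb_add_rx_frag(skb_a, 0, page, 0,    len_a, 2048);
  skb_add_rx_frag(skb_b, 0, page, 2048, len_b, 2048);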
> 
> Sorry, I'm not seeing it either.  The current xdp_buff is defined
> as:
> 
>   struct xdp_buff {
> 	void *data;
> 	void *data_end;
>   };
> 
> The verifier has an xdp_is_valid_access() check to ensure we don't go
> past data_end.  The page, for now at least, never leaves the driver.
> Even for the upcoming work to get xmit to other devices working, I'm
> still not sure I see any issue.
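The check referenced above means every XDP program has to prove its
packet accesses stay below data_end before the verifier will load it;
a minimal sketch of what that looks like on the program side
(hypothetical example, simplified):

  #include <linux/bpf.h>
  #include <linux/if_ether.h>

  #define SEC(name) __attribute__((section(name), used))

  /* Minimal XDP program: the verifier only accepts packet accesses
   * it can prove stay below data_end, hence the explicit check.
   */
  SEC("xdp")
  int xdp_check_eth(struct xdp_md *ctx)
  {
	void *data     = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;
	struct ethhdr *eth = data;

	if (data + sizeof(*eth) > data_end)
		return XDP_DROP;	/* can't prove in-bounds: bail */
	if (eth->h_proto == 0)		/* now safe to read the header */
		return XDP_DROP;
	return XDP_PASS;
  }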

I guess I can say that the packets must be "writable" until I'm blue
in the face, but I'll say it again: semantically writable pages are a
requirement.  And if multiple packets share a page, this requirement
is not satisfied.

Also, we want to do several things in the future:

1) Allow push/pop of headers via eBPF code, which means we need
   headroom (see the sketch after this list).

2) Transparently pass packets into userspace with zero copy
   (sketched below).  Basically, the user will have a
   semi-permanently mapped ring of all the packet pages sitting in
   the RX queue of the device and the page pool associated with it.
   This way we avoid all of the TLB flush/map overhead for the
   user's mapping of the packets, just as we avoid the DMA map/unmap
   overhead.
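For (1), the push case might look something like the following inside
an XDP program, assuming a helper along the lines of
bpf_xdp_adjust_head() (hypothetical sketch; includes and SEC as in
the earlier example):

  SEC("xdp")
  int xdp_push_hdr(struct xdp_md *ctx)
  {
	void *data, *data_end;

	/* Grow the packet toward the front into reserved headroom. */
	if (bpf_xdp_adjust_head(ctx, -(int)sizeof(struct ethhdr)))
		return XDP_DROP;	/* not enough headroom */

	data     = (void *)(long)ctx->data;	/* pointers moved: reload */
	data_end = (void *)(long)ctx->data_end;
	if (data + sizeof(struct ethhdr) > data_end)
		return XDP_DROP;	/* re-validate for the verifier */

	__builtin_memset(data, 0, sizeof(struct ethhdr));  /* new header */
	return XDP_PASS;
  }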

And that's just the beginning.
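For (2), the userspace side of the idea might look roughly like the
following -- an entirely hypothetical interface, just to illustrate
the single long-lived mapping; no such interface exists today:

  #include <stddef.h>
  #include <stdint.h>
  #include <sys/mman.h>

  struct rx_desc {		/* hypothetical descriptor layout */
	uint64_t offset;	/* packet offset within the mapping */
	uint32_t len;
  };

  /* Map the device's RX packet pages once; afterwards packets are
   * read in place, with no per-packet map/copy or TLB flushing.
   */
  static void *map_rx_pool(int dev_fd, size_t pool_bytes)
  {
	return mmap(NULL, pool_bytes, PROT_READ | PROT_WRITE,
		    MAP_SHARED, dev_fd, 0);	/* one-time TLB cost */
  }

  static const char *packet_data(void *pool, const struct rx_desc *d)
  {
	return (const char *)pool + d->offset;	/* zero-copy access */
  }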

I'm sure others can come up with more reasons why we have this
requirement.
