Date:   Thu, 27 Oct 2016 18:43:59 -0700
From:   Alexander Duyck <alexander.duyck@...il.com>
To:     David Miller <davem@...emloft.net>
Cc:     "Michael S. Tsirkin" <mst@...hat.com>,
        John Fastabend <john.fastabend@...il.com>,
        Jesper Dangaard Brouer <brouer@...hat.com>,
        shrijeet@...il.com, Tom Herbert <tom@...bertland.com>,
        Netdev <netdev@...r.kernel.org>,
        Shrijeet Mukherjee <shm@...ulusnetworks.com>,
        roopa <roopa@...ulusnetworks.com>, nikolay@...ulusnetworks.com
Subject: Re: [PATCH net-next RFC WIP] Patch for XDP support for virtio_net

On Thu, Oct 27, 2016 at 6:35 PM, David Miller <davem@...emloft.net> wrote:
> From: "Michael S. Tsirkin" <mst@...hat.com>
> Date: Fri, 28 Oct 2016 01:25:48 +0300
>
>> On Thu, Oct 27, 2016 at 05:42:18PM -0400, David Miller wrote:
>>> From: "Michael S. Tsirkin" <mst@...hat.com>
>>> Date: Fri, 28 Oct 2016 00:30:35 +0300
>>>
>>> > Something I'd like to understand is how does XDP address the
>>> > problem that 100Byte packets are consuming 4K of memory now.
>>>
>>> Via page pools.  We're going to make a generic one, but right now
>>> each and every driver implements a quick list of pages to allocate
>>> from (and thus avoid the DMA map/unmap overhead, etc.)
>>
>> So to clarify, ATM virtio doesn't attempt to avoid dma map/unmap,
>> so there should be no issue with that even when using sub-page
>> regions, assuming the DMA API supports sub-page map/unmap correctly.
>
> That's not what I said.
>
> The page pools are meant to address the performance degradation from
> going to having one packet per page for the sake of XDP's
> requirements.
>
> You still need to have one packet per page for correct XDP operation
> whether you do page pools or not, and whether you have DMA mapping
> (or its equivalent virtualization operation) or not.
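
(For context, the per-driver page recycling Dave describes looks
roughly like the sketch below.  Names are invented for illustration,
this is not code from any particular driver: mapped pages sit on a
per-ring free list so the fast path skips the map/unmap round trip.)

#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/slab.h>

struct pooled_page {
	struct list_head node;
	struct page *page;
	dma_addr_t dma;		/* mapping stays valid while pooled */
};

static struct pooled_page *pool_get(struct device *dev,
				    struct list_head *pool)
{
	struct pooled_page *pp;

	/* Fast path: reuse a page that is still DMA mapped. */
	if (!list_empty(pool)) {
		pp = list_first_entry(pool, struct pooled_page, node);
		list_del(&pp->node);
		return pp;
	}

	/* Slow path: allocate and map a fresh page. */
	pp = kmalloc(sizeof(*pp), GFP_ATOMIC);
	if (!pp)
		return NULL;
	pp->page = alloc_page(GFP_ATOMIC);
	if (!pp->page)
		goto free_pp;
	pp->dma = dma_map_page(dev, pp->page, 0, PAGE_SIZE,
			       DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, pp->dma))
		goto free_page;
	return pp;

free_page:
	__free_page(pp->page);
free_pp:
	kfree(pp);
	return NULL;
}

/* Recycle instead of dma_unmap_page() + __free_page(). */
static void pool_put(struct list_head *pool, struct pooled_page *pp)
{
	list_add(&pp->node, pool);
}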

Maybe I am missing something here, but why do you need to limit things
to one packet per page for correct XDP operation?  Most of the drivers
out there today store at least 2 packets per page, and with the DMA API
fixes I am working on there should be no issue with changing the
contents of those pages, since we won't invalidate or overwrite the
data after the DMA buffer has been synchronized for use by the CPU.
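
To make that concrete, the half-page pattern looks roughly like the
sketch below (a simplified illustration with invented names, not code
taken from any in-tree driver): the page is mapped once, split into
two receive buffers, and only the region the CPU is about to touch
gets synced.

#include <linux/dma-mapping.h>
#include <linux/mm.h>

struct rx_half_page {
	struct page *page;
	unsigned int offset;	/* 0 or PAGE_SIZE / 2 */
	dma_addr_t dma;		/* DMA address of the whole page */
};

/* Map the page once; both half-page buffers share the mapping. */
static int rx_half_page_map(struct device *dev, struct rx_half_page *buf)
{
	buf->dma = dma_map_page(dev, buf->page, 0, PAGE_SIZE,
				DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, buf->dma))
		return -ENOMEM;
	buf->offset = 0;
	return 0;
}

/*
 * After the device writes a packet, sync only the half the CPU is
 * about to read; the other half may still be owned by the device.
 */
static void *rx_half_page_for_cpu(struct device *dev,
				  struct rx_half_page *buf,
				  unsigned int len)
{
	dma_sync_single_range_for_cpu(dev, buf->dma, buf->offset,
				      len, DMA_FROM_DEVICE);
	return page_address(buf->page) + buf->offset;
}

/* Hand the other half to the device for the next packet. */
static void rx_half_page_flip(struct rx_half_page *buf)
{
	buf->offset ^= PAGE_SIZE / 2;
}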
