Message-ID: <20170823102937.79a9c4ed@redhat.com>
Date:   Wed, 23 Aug 2017 10:29:37 +0200
From:   Jesper Dangaard Brouer <brouer@...hat.com>
To:     Michael Chan <michael.chan@...adcom.com>
Cc:     Alexander Duyck <alexander.duyck@...il.com>,
        "Duyck, Alexander H" <alexander.h.duyck@...el.com>,
        "john.fastabend@...il.com" <john.fastabend@...il.com>,
        "pstaszewski@...are.pl" <pstaszewski@...are.pl>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "xdp-newbies@...r.kernel.org" <xdp-newbies@...r.kernel.org>,
        "andy@...yhouse.net" <andy@...yhouse.net>,
        "borkmann@...earbox.net" <borkmann@...earbox.net>,
        brouer@...hat.com
Subject: Re: XDP redirect measurements, gotchas and tracepoints

On Tue, 22 Aug 2017 23:59:05 -0700
Michael Chan <michael.chan@...adcom.com> wrote:

> On Tue, Aug 22, 2017 at 6:06 PM, Alexander Duyck
> <alexander.duyck@...il.com> wrote:
> > On Tue, Aug 22, 2017 at 1:04 PM, Michael Chan <michael.chan@...adcom.com> wrote:  
> >>
> >> Right, but it's conceivable to add an API to "return" the buffer to
> >> the input device, right?

Yes, I would really like to see an API like this.
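
Something like this, perhaps (all names below are made up, just to
illustrate the shape of the API I have in mind, not a concrete
proposal):

  /* Hypothetical sketch -- none of these names exist today.  When
   * the TX side is done with a redirected frame, it hands the page
   * back to the RX driver that owns it, instead of freeing it. */
  struct xdp_return_ops {
          void (*xdp_return_buff)(struct net_device *rx_dev,
                                  struct page *page,
                                  dma_addr_t dma_addr);
  };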

> >
> > You could, it is just added complexity. "just free the buffer" in
> > ixgbe usually just amounts to one atomic operation to decrement the
> > total page count since page recycling is already implemented in the
> > driver. You still would have to unmap the buffer regardless of
> > whether you were recycling it or not, so all you would save is
> > 1.000015259 atomic
> > operations per packet. The fraction is because once every 64K uses we
> > have to bulk update the count on the page.
> >  
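
(For those not familiar with the ixgbe code: the recycle trick
Alexander describes above follows roughly the pattern below.  This is
a simplified sketch with made-up struct/field names, not the actual
driver code.  The fraction comes from the bulk refcount update:
1/65536 = 0.000015259.)

  /* Needs <linux/mm.h> for page_ref_count()/page_ref_add(). */
  struct rx_buffer {              /* stand-in for the driver's own
                                   * RX buffer bookkeeping struct */
          struct page *page;
          u16 pagecnt_bias;       /* references we still own */
  };

  /* The caller decrements buf->pagecnt_bias once per packet consumed. */
  static bool can_reuse_rx_page(struct rx_buffer *buf)
  {
          struct page *page = buf->page;

          /* Someone else still holds a reference?  Cannot recycle. */
          if (page_ref_count(page) != buf->pagecnt_bias)
                  return false;

          /* Only once every ~64K reuses do we pay for a bulk
           * refcount update; this is the 1/65536 extra atomic. */
          if (unlikely(buf->pagecnt_bias == 1)) {
                  page_ref_add(page, USHRT_MAX - 1);
                  buf->pagecnt_bias = USHRT_MAX;
          }
          return true;
  }
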
> 
> If the buffer is returned to the input device, the input device can
> keep the DMA mapping.  All it needs to do is to dma_sync it back to
> the input device when the buffer is returned.

Yes, exactly, return to the input device. I really think we should
work on a solution where we can keep the DMA mapping around.  We have
an opportunity here to make ndo_xdp_xmit TX queues use a specialized
page return call to achieve this.  (I imagine other archs have a
higher DMA overhead than Intel.)
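
On the recycle path the per-packet DMA cost would then shrink to a
sync, using the standard DMA API (rx_dev and buf are placeholders
here):

  /* Page stays mapped for the lifetime of the RX ring; returning it
   * only means handing ownership back to the device: */
  dma_sync_single_for_device(rx_dev, buf->dma_addr, buf->len,
                             DMA_FROM_DEVICE);

versus the full dma_unmap_page() + dma_map_page() cycle per page that
a plain "just free it" model has to pay on architectures with an
expensive IOMMU.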

I'm not sure how the API should look.  The ixgbe recycle mechanism and
splitting the page (into two packets) actually complicate things, and
tie us into a page-refcnt based model.  We could get around this by
having each driver implement a page-return-callback that allows us to
return the page to the input device.  Then, drivers implementing the
1-packet-per-page model can simply check/read the page-refcnt, and if
it is "1", DMA-sync the page and reuse it in the RX queue.

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
