Date:   Mon, 1 Oct 2018 12:56:58 +0300
From:   Ilias Apalodimas <ilias.apalodimas@...aro.org>
To:     Jesper Dangaard Brouer <brouer@...hat.com>
Cc:     netdev@...r.kernel.org, jaswinder.singh@...aro.org,
        ard.biesheuvel@...aro.org, masami.hiramatsu@...aro.org,
        arnd@...db.de, bjorn.topel@...el.com, magnus.karlsson@...el.com,
        daniel@...earbox.net, ast@...nel.org,
        jesus.sanchez-palencia@...el.com, vinicius.gomes@...el.com,
        makita.toshiaki@....ntt.co.jp, Tariq Toukan <tariqt@...lanox.com>,
        Tariq Toukan <ttoukan.linux@...il.com>
Subject: Re: [net-next, PATCH 1/2, v3] net: socionext: different approach on
 DMA

> > #2: You have allocations on the XDP fast-path.
> > 
> > The REAL secret behind the XDP performance is to avoid allocations on
> > the fast-path.  While I just told you to use the page-allocator and
> > order-0 pages, this will actually kill performance.  Thus, to make this
> > fast, you need a driver local recycle scheme that avoids going through
> > the page allocator, which makes XDP_DROP and XDP_TX extremely fast.
> > For the XDP_REDIRECT action (which you seem to be interested in, as
> > this is needed for AF_XDP), there is an xdp_return_frame() API that can
> > make this fast.
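
For reference, here is a rough sketch of the kind of driver-local recycle
scheme you describe; all names below are hypothetical, and locking/DMA
handling is omitted:

#include <linux/mm.h>
#include <linux/gfp.h>

#define RECYCLE_RING_SIZE 256

struct recycle_ring {                   /* hypothetical per-RX-ring cache */
        struct page *pages[RECYCLE_RING_SIZE];
        unsigned int head, tail;
};

/* XDP_DROP/XDP_TX completion: stash the page instead of freeing it. */
static bool recycle_put(struct recycle_ring *r, struct page *page)
{
        unsigned int next = (r->head + 1) % RECYCLE_RING_SIZE;

        if (next == r->tail)
                return false;   /* cache full; caller falls back to put_page() */
        r->pages[r->head] = page;
        r->head = next;
        return true;
}

/* RX refill: prefer a recycled page over the page allocator. */
static struct page *recycle_get(struct recycle_ring *r)
{
        struct page *page;

        if (r->tail == r->head)
                return alloc_page(GFP_ATOMIC);  /* cache empty; slow path */
        page = r->pages[r->tail];
        r->tail = (r->tail + 1) % RECYCLE_RING_SIZE;
        return page;
}

This keeps XDP_DROP and XDP_TX off the page allocator entirely as long as
the cache has room, which is where the speedup comes from.
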
> I had an initial implementation that did exactly that (that's why the
> dma_sync_single_for_cpu() -> dma_unmap_single_attrs() change is there). In
> the case of AF_XDP, isn't that introducing a 'bottleneck' though? I mean
> you'll feed fresh buffers back to the hardware only when your packets have
> been processed by your userspace application.
Just a clarification here. This is the case only if ZC (zero-copy) is
implemented. In my case the buffers will be 'ok' to be passed back to the
hardware once the payload has been copied to userspace by xdp_do_redirect().
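
Purely as an illustration of the xdp_return_frame() path mentioned above:
with a memory model registered for the RX queue, whoever consumes a
redirected frame can recycle it without going through the page allocator.
The struct and field names here are made up, and error unwinding is
trimmed:

#include <net/xdp.h>
#include <net/page_pool.h>

struct my_rx_ring {                     /* hypothetical driver ring */
        struct net_device *ndev;
        struct xdp_rxq_info xdp_rxq;
        struct page_pool *pool;         /* from page_pool_create() */
        u32 index;
};

static int ring_reg_xdp_mem(struct my_rx_ring *ring)
{
        int err;

        err = xdp_rxq_info_reg(&ring->xdp_rxq, ring->ndev, ring->index);
        if (err)
                return err;

        /* Tie the RX queue to the pool so frames redirected out of this
         * ring can be returned by whoever ends up consuming them. */
        return xdp_rxq_info_reg_mem_model(&ring->xdp_rxq,
                                          MEM_TYPE_PAGE_POOL, ring->pool);
}

/* Consumer side (e.g. TX completion of a redirected frame):
 * xdp_return_frame() routes the page back to the registered pool
 * instead of freeing it. */
static void consumer_tx_complete(struct xdp_frame *xdpf)
{
        xdp_return_frame(xdpf);
}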

/Ilias
