Message-ID: <20180814003253.fkgl6lyklc7fclvq@ast-mbp>
Date:   Mon, 13 Aug 2018 17:32:55 -0700
From:   Alexei Starovoitov <alexei.starovoitov@...il.com>
To:     Jason Wang <jasowang@...hat.com>
Cc:     netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
        ast@...nel.org, daniel@...earbox.net, jbrouer@...hat.com,
        mst@...hat.com
Subject: Re: [RFC PATCH net-next V2 0/6] XDP rx handler

On Mon, Aug 13, 2018 at 11:17:24AM +0800, Jason Wang wrote:
> Hi:
> 
> This series tries to implement XDP support for the rx handler. This would
> be useful for doing native XDP on stacked devices like macvlan, bridge,
> or even bond.
> 
> The idea is simple: let a stacked device register an XDP rx handler. When
> the driver returns XDP_PASS, it calls a new helper, xdp_do_pass(), which
> tries to pass the XDP buff to the XDP rx handler directly. The XDP rx
> handler may then decide how to proceed: it could consume the buff, ask
> the driver to drop the packet, or ask the driver to fall back to the
> normal skb path.
> 
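> A rough sketch of this flow (the enum values and the helper functions
> other than xdp_do_pass() are illustrative guesses from this cover
> letter, not the exact definitions in the patches):
> 
>     /* Possible outcomes an XDP rx handler could report back to the
>      * driver; the names here are hypothetical.
>      */
>     enum xdp_rx_result {
>             XDP_RX_CONSUMED,    /* handler took ownership of the buff */
>             XDP_RX_DROP,        /* driver should recycle the buffer */
>             XDP_RX_FALLBACK,    /* driver should build an skb as usual */
>     };
> 
>     /* Driver rx path, after the device's own XDP program returned
>      * XDP_PASS.
>      */
>     static void drv_handle_xdp_pass(struct net_device *dev,
>                                     struct xdp_buff *xdp)
>     {
>             switch (xdp_do_pass(dev, xdp)) {
>             case XDP_RX_CONSUMED:
>                     return;                /* buff belongs to the handler now */
>             case XDP_RX_DROP:
>                     drv_recycle_buff(xdp); /* hypothetical recycle helper */
>                     return;
>             case XDP_RX_FALLBACK:
>             default:
>                     drv_build_skb(dev, xdp); /* normal skb path (hypothetical) */
>                     return;
>             }
>     }
> 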
> A sample XDP rx handler was implemented for macvlan, and virtio-net (the
> mergeable buffer case) was converted to call xdp_do_pass() as an
> example. For ease of comparison, generic XDP support for the rx handler
> was also implemented.
> 
> Compared to skb-mode XDP on macvlan, native XDP on macvlan (XDP_DROP)
> shows about an 83% improvement.

I'm missing the motivation for this.
It seems the performance of such a solution is ~1M packets per second.
What would be a real-life use case for such a feature?

Another concern is that XDP users expect to get line-rate performance,
and native XDP delivers it. 'Generic XDP' is a fallback-only mechanism
to operate on NICs that don't have native XDP yet.
Toshiaki's veth XDP work fits the XDP philosophy and allows
high-speed networking to be done inside containers behind veth.
It's trying to get to line rate inside the container.
This XDP rx handler stuff is destined to stay at 1Mpps speeds forever,
and users will get confused by forever-slow modes of XDP.

Please explain the problem you're trying to solve.
"look, here I can to XDP on top of macvlan" is not an explanation of the problem.
