Message-ID: <20200820075403.GB2282@lore-desk>
Date:   Thu, 20 Aug 2020 09:54:03 +0200
From:   Lorenzo Bianconi <lorenzo@...nel.org>
To:     John Fastabend <john.fastabend@...il.com>
Cc:     Jakub Kicinski <kuba@...nel.org>,
        Lorenzo Bianconi <lorenzo.bianconi@...hat.com>,
        netdev@...r.kernel.org, bpf@...r.kernel.org, davem@...emloft.net,
        brouer@...hat.com, echaudro@...hat.com, sameehj@...zon.com
Subject: Re: [PATCH net-next 6/6] net: mvneta: enable jumbo frames for XDP

> Jakub Kicinski wrote:
> > On Wed, 19 Aug 2020 22:22:23 +0200 Lorenzo Bianconi wrote:
> > > > On Wed, 19 Aug 2020 15:13:51 +0200 Lorenzo Bianconi wrote:  
> > > > > Enable the capability to receive jumbo frames even if the interface is
> > > > > running in XDP mode
> > > > > 
> > > > > Signed-off-by: Lorenzo Bianconi <lorenzo@...nel.org>  
> > > > 
> > > > Hm, already? Is all the infra in place? Or does it not imply
> > > > multi-buffer?
> > > 
> > > With this series mvneta supports XDP multi-buff on both the rx and tx
> > > sides (XDP_TX and ndo_xdp_xmit()), so we can remove the MTU limitation.
> > 
> > Is there an API for programs to access the multi-buf frames?
> 
> Hi Lorenzo,

Hi Jakub and John,

> 
> This is not enough to support multi-buffer in my opinion. I have the
> same comment as Jakub. We need an API to pull in the multiple
> buffers, otherwise we break the ability to parse the packets, and that
> is a hard requirement for me. I don't want to lose visibility to get
> jumbo frames.

I was not clear enough in the commit message, sorry about that.
This series aims to finalize XDP multi-buff support for the mvneta driver only.
Our plan is to work on the helpers/metadata in subsequent series, since the
driver support is quite orthogonal. If you think we need the helpers in place
before removing the MTU constraint, we could just drop the last patch (6/6)
and apply patches 1/6 to 5/6, since they are the groundwork needed to remove
the MTU constraint. Do you agree?

> 
> At minimum we need a bpf_xdp_pull_data() to adjust pointer. In the
> skmsg case we use this,
> 
>   bpf_msg_pull_data(u32 start, u32 end, u64 flags)
> 
> Where start is the offset into the packet and end is the last byte we
> want the adjusted start/end pointers to cover. This way we can walk pages
> if we want and avoid having to linearize the data unless the user actually
> asks for a block that crosses a page range. Smart users then never do a
> start/end that crosses a page boundary if possible. I think the same
> would apply here.
> 
> XDP by default gives you the first page start/end to use freely. If
> you need to parse deeper into the payload then you call bpf_msg_pull_data
> with the byte offsets needed.

Our first proposal is described in [0][1]. In particular, we assume the
eBPF layer can access just the first fragment of a non-linear xdp_buff, and
we will provide some non-linear xdp metadata (e.g. the number of segments in
the xdp_buff, or the total buffer length) to the eBPF program attached to the
interface. Anyway, IMHO this mvneta series is not strictly related to that
approach.
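
To make the metadata idea above a bit more concrete, it could look roughly
like this. Purely illustrative: the struct and field names are assumptions on
my side, not a proposed kernel ABI:

```c
/* Illustrative sketch only: hypothetical metadata describing a
 * non-linear xdp_buff to the attached eBPF program.  Struct and
 * field names are assumptions, not a proposed kernel ABI. */
struct xdp_mb_meta {
	__u16 nr_frags;		/* number of fragments backing the frame */
	__u32 frame_len;	/* total length in bytes across fragments */
};
```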

Regards,
Lorenzo

[0] https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org
[1] http://people.redhat.com/lbiancon/conference/NetDevConf2020-0x14/add-xdp-on-driver.html (XDP multi-buffers section)

> 
> Also we would want performance numbers to see how good/bad this is
> compared to the base case.
> 
> Thanks,
> John

