Date:   Thu, 16 Apr 2020 14:04:30 +0200
From:   Jesper Dangaard Brouer <brouer@...hat.com>
To:     Tariq Toukan <ttoukan.linux@...il.com>
Cc:     sameehj@...zon.com, Saeed Mahameed <saeedm@...lanox.com>,
        netdev@...r.kernel.org, bpf@...r.kernel.org, zorik@...zon.com,
        akiyano@...zon.com, gtzalik@...zon.com,
        Toke Høiland-Jørgensen <toke@...hat.com>,
        Daniel Borkmann <borkmann@...earbox.net>,
        Alexei Starovoitov <alexei.starovoitov@...il.com>,
        John Fastabend <john.fastabend@...il.com>,
        Alexander Duyck <alexander.duyck@...il.com>,
        Jeff Kirsher <jeffrey.t.kirsher@...el.com>,
        David Ahern <dsahern@...il.com>,
        Willem de Bruijn <willemdebruijn.kernel@...il.com>,
        Ilias Apalodimas <ilias.apalodimas@...aro.org>,
        Lorenzo Bianconi <lorenzo@...nel.org>, brouer@...hat.com
Subject: Re: [PATCH RFC v2 17/33] mlx5: rx queue setup time determine
 frame_sz for XDP

On Wed, 8 Apr 2020 15:52:26 +0300
Tariq Toukan <ttoukan.linux@...il.com> wrote:

> Hi Jesper,
> 
> Thanks for your patch.
> Please see feedback below.
> 
> On 4/8/2020 2:52 PM, Jesper Dangaard Brouer wrote:
> > The mlx5 driver has multiple memory models, which also change
> > according to whether an XDP bpf_prog is attached.
> > 
> > The 'rx_striding_rq' setting is adjusted via ethtool priv-flags e.g.:
> >   # ethtool --set-priv-flags mlx5p2 rx_striding_rq off
> > 
> > In the general case with 4K page_size and a regular MTU packet, the
> > frame_sz is 2048, and 4096 when XDP is enabled, in both modes.
> > 
> > The info on the given frame size is stored differently depending on the
> > RQ mode, encoded in struct mlx5e_rq in the union of wqe/mpwqe.
> > In RX striding mode rq->mpwqe.log_stride_sz is either 11 or 12, which
> > corresponds to 2048 or 4096 (MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ).
> > In non-striding mode (MLX5_WQ_TYPE_CYCLIC) the frag_stride is stored
> > in rq->wqe.info.arr[0].frag_stride.
> 
> Just to clarify, the above description is true as long as we're in the
> Linear SKB memory scheme, which holds when:
> 1) MTU + headroom + tailroom < PAGE_SIZE, and
> 2) HW LRO is OFF.
> 
> Otherwise, mpwqe.log_stride_sz can be smaller, and frag_stride of 
> wqe_info can vary from one index to another.
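
To make the two cases, and the linear-SKB condition above, concrete: a tiny
standalone C sketch.  Names mirror the driver fields, but this is
illustrative code rather than driver code; the 11/12 strides are just the
2048/4096 example from the commit message, and the headroom/tailroom
numbers below are made up.

/*
 * Illustration only -- not mlx5 driver code.  Shows how frame_sz is
 * derived in the two RQ modes, plus the linear-SKB condition spelled
 * out above.
 */
#include <stdbool.h>
#include <stdio.h>

#define EXAMPLE_PAGE_SIZE 4096u

/* Striding RQ: stride size is stored as log2, so 11 -> 2048, 12 -> 4096 */
static unsigned int frame_sz_striding_rq(unsigned int log_stride_sz)
{
	return 1u << log_stride_sz;
}

/* Cyclic (non-striding) RQ: the first fragment's stride is used */
static unsigned int frame_sz_cyclic_rq(unsigned int frag_stride0)
{
	return frag_stride0;
}

/* Linear-SKB scheme: whole packet fits in one page and HW LRO is off */
static bool is_linear_skb(unsigned int mtu, unsigned int headroom,
			  unsigned int tailroom, bool hw_lro)
{
	return !hw_lro && (mtu + headroom + tailroom < EXAMPLE_PAGE_SIZE);
}

int main(void)
{
	printf("striding RQ, XDP on : %u\n", frame_sz_striding_rq(12));
	printf("striding RQ, XDP off: %u\n", frame_sz_striding_rq(11));
	printf("cyclic RQ           : %u\n", frame_sz_cyclic_rq(2048));
	/* example headroom/tailroom values, just for the condition check */
	printf("linear SKB for MTU 1500: %d\n",
	       is_linear_skb(1500, 256, 320, false));
	return 0;
}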
> 
> > 
> > To reduce the effect on the fast-path, this patch determines the
> > frame_sz at setup time, to avoid determining the memory model at runtime.
> > 
> > Cc: Tariq Toukan <tariqt@...lanox.com>
> > Cc: Saeed Mahameed <saeedm@...lanox.com>
> > Signed-off-by: Jesper Dangaard Brouer <brouer@...hat.com>
> > ---
> >   drivers/net/ethernet/mellanox/mlx5/core/en.h      |    1 +
> >   drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c  |    1 +
> >   drivers/net/ethernet/mellanox/mlx5/core/en_main.c |    4 ++++
> >   3 files changed, 6 insertions(+)
> > 
> > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
> > index 12a61bf82c14..1f280fc142ca 100644
> > --- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
> > +++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
> > @@ -651,6 +651,7 @@ struct mlx5e_rq {
> >   	struct {
> >   		u16            umem_headroom;
> >   		u16            headroom;
> > +		u32            frame_sz;
> >   		u8             map_dir;   /* dma map direction */
> >   	} buff;
> >   
> > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
> > index f049e0ac308a..de4ad2c9f49a 100644
> > --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
> > +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
> > @@ -137,6 +137,7 @@ bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct mlx5e_dma_info *di,
> >   	if (xsk)
> >   		xdp.handle = di->xsk.handle;
> >   	xdp.rxq = &rq->xdp_rxq;
> > +	xdp.frame_sz = rq->buff.frame_sz;
> >   
> >   	act = bpf_prog_run_xdp(prog, &xdp);
> >   	if (xsk) {
> > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> > index dd7f338425eb..b9595315c45b 100644
> > --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> > +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> > @@ -462,6 +462,8 @@ static int mlx5e_alloc_rq(struct mlx5e_channel *c,
> >   		rq->mpwqe.num_strides =
> >   			BIT(mlx5e_mpwqe_get_log_num_strides(mdev, params, xsk));
> >   
> > +		rq->buff.frame_sz = (1 << rq->mpwqe.log_stride_sz);
> > +  
> 
> This is always correct.
> 
> >   		err = mlx5e_create_rq_umr_mkey(mdev, rq);
> >   		if (err)
> >   			goto err_rq_wq_destroy;
> > @@ -485,6 +487,8 @@ static int mlx5e_alloc_rq(struct mlx5e_channel *c,
> >   			num_xsk_frames = wq_sz << rq->wqe.info.log_num_frags;
> >   
> >   		rq->wqe.info = rqp->frags_info;
> > +		rq->buff.frame_sz = rq->wqe.info.arr[0].frag_stride;
> > +  
> 
> This is not always correct.
> The size of the last frag for a large MTU might be a full page.
> See:
> https://elixir.bootlin.com/linux/latest/source/drivers/net/ethernet/mellanox/mlx5/core/en_main.c#L2097
> 
> However, you won't try to use this value at all in the non-linear SKB 
> flow, as it's not compatible with XDP.

Yes, exactly.

> Anyway, I prefer this value to always be correct, no matter whether it's
> really used or not.
> Maybe rename the field to indicate this?
> Something like: single_frame_sz / first_frame_sz ?

Okay, I've renamed the field to "first_frame_sz", as this field only
describes the size of the first fragment.  This fits with what we are
currently planning: to only give XDP/eBPF access to the first fragment in
the multi-buffer XDP case (and then use Daniel's idea of a BPF-helper to
pull in more data if explicitly requested).
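
As a reminder of why the frame size needs to travel with the xdp_buff at
all: it lets the core figure out where the buffer really ends, and thus how
much tailroom e.g. bpf_xdp_adjust_tail() could grow into.  A rough
standalone sketch below; this is not the series' actual xdp.h changes, and
the skb_shared_info reservation at the tail is only illustrative.

/* Illustration only -- not kernel code. */
#include <stdio.h>

#define SHARED_INFO_SZ 320u	/* stand-in for sizeof(struct skb_shared_info) */

struct fake_xdp_buff {
	unsigned char *data_hard_start;
	unsigned char *data;
	unsigned char *data_end;
	unsigned int   frame_sz;	/* the field this patch fills in */
};

/* True end of the buffer: hard start + frame size, minus the tail
 * reservation kept for skb_shared_info. */
static unsigned char *buff_hard_end(const struct fake_xdp_buff *xdp)
{
	return xdp->data_hard_start + xdp->frame_sz - SHARED_INFO_SZ;
}

int main(void)
{
	unsigned char frame[4096];
	struct fake_xdp_buff xdp = {
		.data_hard_start = frame,
		.data            = frame + 256,		/* headroom */
		.data_end        = frame + 256 + 1500,	/* 1500 byte packet */
		.frame_sz        = sizeof(frame),
	};

	printf("tailroom available: %ld bytes\n",
	       (long)(buff_hard_end(&xdp) - xdp.data_end));
	return 0;
}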

Still trying to figure out if this is correct for AF_XDP.

And I'm trying to see if I can get it more correct for the non-linear case,
even though it is not really used in that case.
-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
