Date:   Wed, 23 Nov 2022 11:14:31 -0800
From:   Jakub Kicinski <kuba@...nel.org>
To:     Stanislav Fomichev <sdf@...gle.com>
Cc:     Toke Høiland-Jørgensen <toke@...hat.com>,
        bpf@...r.kernel.org, ast@...nel.org, daniel@...earbox.net,
        andrii@...nel.org, martin.lau@...ux.dev, song@...nel.org,
        yhs@...com, john.fastabend@...il.com, kpsingh@...nel.org,
        haoluo@...gle.com, jolsa@...nel.org,
        Tariq Toukan <tariqt@...dia.com>,
        David Ahern <dsahern@...il.com>,
        Willem de Bruijn <willemb@...gle.com>,
        Jesper Dangaard Brouer <brouer@...hat.com>,
        Anatoly Burakov <anatoly.burakov@...el.com>,
        Alexander Lobakin <alexandr.lobakin@...el.com>,
        Magnus Karlsson <magnus.karlsson@...il.com>,
        Maryam Tahhan <mtahhan@...hat.com>, xdp-hints@...-project.net,
        netdev@...r.kernel.org
Subject: Re: [xdp-hints] [PATCH bpf-next v2 6/8] mlx4: Introduce
 mlx4_xdp_buff wrapper for xdp_buff

On Wed, 23 Nov 2022 10:26:41 -0800 Stanislav Fomichev wrote:
> > This embedding trick works for drivers that put the xdp_buff on the
> > stack, but mlx5 supports XSK zerocopy, which uses the xsk_buff_pool
> > for allocating them. This makes it a bit awkward to do the same thing
> > there; and since it's probably going to be fairly common to do
> > something like this, how about we just add a 'void *drv_priv' pointer
> > to struct xdp_buff that the drivers can use? The xdp_buff already
> > takes up a full cache line anyway, so any data stuffed after it will
> > spill over to a new one; I don't think there's much difference
> > performance-wise.
> 
> I guess the alternative is to extend xsk_buff_pool with some new
> argument for the xdp_buff tailroom (so it can kmalloc(sizeof(xdp_buff) +
> xdp_buff_tailroom))?
> But that seems messy because there is no way of knowing what the target
> device's tailroom is, so it would have to be a user setting :-/
> I started with a priv pointer in xdp_buff initially, so it seems fine
> to go back to that. I'll probably convert veth/mlx4 to the same mode
> as well, to avoid having different approaches in different places.
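[The embedding trick discussed above can be sketched as follows. This is a minimal, self-contained illustration: the struct definitions are simplified stand-ins for the real ones in <net/xdp.h>, the `cqe` field and the `to_mlx4_buff()` helper are hypothetical names, and in-kernel code would use the `container_of()` macro from <linux/container_of.h> rather than defining its own.]

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for the real struct xdp_buff in <net/xdp.h>. */
struct xdp_buff {
	void *data;
	void *data_end;
};

/*
 * Hypothetical driver wrapper placed on the stack: extra per-packet
 * driver state (here a completion-queue entry pointer for metadata
 * hints) travels alongside the xdp_buff without touching its layout.
 */
struct mlx4_xdp_buff {
	struct xdp_buff xdp;	/* the buff handed to the XDP program */
	void *cqe;		/* illustrative driver-private state */
};

/* Userspace stand-in for the kernel's container_of() macro. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Recover the driver wrapper from the xdp_buff the XDP program saw. */
static struct mlx4_xdp_buff *to_mlx4_buff(struct xdp_buff *xdp)
{
	return container_of(xdp, struct mlx4_xdp_buff, xdp);
}
```

Because the wrapper lives on the driver's stack, no allocation changes are needed; the limitation raised above is that XSK zerocopy buffs come from the xsk_buff_pool instead, where the driver does not control the allocation size.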

Can we not do this please? Add 16B of "private driver space" after
the xdp_buff in xdp_buff_xsk (we have 16B to spare before a full
cacheline); the drivers decide how they use it. Drivers can do a
BUILD_BUG_ON() for their expected size and cast that space to whatever
struct they want. This is how various offloads work; a variable-size
tailroom would be over-design IMO.

And this way the non-XSK paths can keep their normal typing.
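[A minimal sketch of the fixed-tailroom idea proposed here. Everything below is illustrative: the struct layouts are simplified stand-ins for the real xdp_buff/xdp_buff_xsk, and the `cb` field name, `XSK_PRIV_SIZE`, and `mlx5_xsk_priv` are invented for the example. In-kernel code would use BUILD_BUG_ON() where this sketch uses C11 `_Static_assert`.]

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for the real struct xdp_buff. */
struct xdp_buff {
	void *data;
	void *data_end;
};

/* Hypothetical fixed private area padding xdp_buff_xsk toward a
 * full cacheline, as suggested in the mail above. */
#define XSK_PRIV_SIZE 16

struct xdp_buff_xsk {
	struct xdp_buff xdp;
	char cb[XSK_PRIV_SIZE];	/* driver-private scratch space */
	/* ... pool bookkeeping fields follow in the real struct ... */
};

/* A driver's hypothetical view of its private area. */
struct mlx5_xsk_priv {
	void *cqe;
	unsigned int flags;
};

/* Compile-time guard the driver would add (BUILD_BUG_ON in-kernel):
 * the private struct must fit in the fixed tailroom. */
_Static_assert(sizeof(struct mlx5_xsk_priv) <= XSK_PRIV_SIZE,
	       "driver private data must fit in the xsk tailroom");

/* Cast the opaque scratch space to the driver's typed view. */
static struct mlx5_xsk_priv *mlx5_xsk_priv(struct xdp_buff_xsk *xskb)
{
	return (struct mlx5_xsk_priv *)xskb->cb;
}
```

The size is fixed at build time, so the pool's allocation path is untouched and no per-device tailroom setting is needed; a driver whose state outgrows the area fails to compile rather than corrupting neighboring fields.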

> > I'll send my patch to add support to mlx5 (using the drv_priv pointer
> > approach) separately.  
> 
> Saw them, thanks! Will include them in v3+.
