Date: Tue, 17 Oct 2023 18:37:07 +0200
From: Magnus Karlsson <magnus.karlsson@...il.com>
To: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
Cc: Larysa Zaremba <larysa.zaremba@...el.com>, bpf@...r.kernel.org, ast@...nel.org, 
	daniel@...earbox.net, andrii@...nel.org, martin.lau@...ux.dev, 
	song@...nel.org, yhs@...com, john.fastabend@...il.com, kpsingh@...nel.org, 
	sdf@...gle.com, haoluo@...gle.com, jolsa@...nel.org, 
	David Ahern <dsahern@...il.com>, Jakub Kicinski <kuba@...nel.org>, 
	Willem de Bruijn <willemb@...gle.com>, Jesper Dangaard Brouer <hawk@...nel.org>, 
	Anatoly Burakov <anatoly.burakov@...el.com>, Alexander Lobakin <alexandr.lobakin@...el.com>, 
	Maryam Tahhan <mtahhan@...hat.com>, xdp-hints@...-project.net, netdev@...r.kernel.org, 
	Willem de Bruijn <willemdebruijn.kernel@...il.com>, 
	Alexei Starovoitov <alexei.starovoitov@...il.com>, Simon Horman <simon.horman@...igine.com>, 
	Tariq Toukan <tariqt@...lanox.com>, Saeed Mahameed <saeedm@...lanox.com>, magnus.karlsson@...el.com
Subject: Re: [PATCH bpf-next v6 07/18] ice: Support XDP hints in AF_XDP ZC mode

On Tue, 17 Oct 2023 at 18:13, Maciej Fijalkowski
<maciej.fijalkowski@...el.com> wrote:
>
> On Thu, Oct 12, 2023 at 07:05:13PM +0200, Larysa Zaremba wrote:
> > In AF_XDP ZC, the xdp_buff is not stored on the ring;
> > instead, it is provided by xsk_buff_pool. Space for metadata
> > sources right after such buffers was already reserved in
> > commit 94ecc5ca4dbf ("xsk: Add cb area to struct xdp_buff_xsk").
> > This makes the implementation rather straightforward.
> >
> > Update AF_XDP ZC packet processing to support XDP hints.
> >
> > Signed-off-by: Larysa Zaremba <larysa.zaremba@...el.com>
> > ---
> >  drivers/net/ethernet/intel/ice/ice_xsk.c | 34 ++++++++++++++++++++++--
> >  1 file changed, 32 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
> > index ef778b8e6d1b..6ca620b2fbdd 100644
> > --- a/drivers/net/ethernet/intel/ice/ice_xsk.c
> > +++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
> > @@ -752,22 +752,51 @@ static int ice_xmit_xdp_tx_zc(struct xdp_buff *xdp,
> >       return ICE_XDP_CONSUMED;
> >  }
> >
> > +/**
> > + * ice_prepare_pkt_ctx_zc - Prepare packet context for XDP hints
> > + * @xdp: xdp_buff used as input to the XDP program
> > + * @eop_desc: End of packet descriptor
> > + * @rx_ring: Rx ring with packet context
> > + *
> > + * In regular XDP, xdp_buff is placed inside the ring structure,
> > + * just before the packet context, so the latter can always be
> > + * accessed from the xdp_buff address alone. In ZC mode, however,
> > + * xdp_buffs come from the pool, so the context has to be
> > + * reinitialized for every packet.
> > + *
> > + * We can safely convert xdp_buff_xsk to ice_xdp_buff,
> > + * because there are XSK_PRIV_MAX bytes reserved in xdp_buff_xsk
> > + * right after xdp_buff, for our private use.
> > + * XSK_CHECK_PRIV_TYPE() ensures we do not go above the limit.
> > + */
> > +static void ice_prepare_pkt_ctx_zc(struct xdp_buff *xdp,
> > +                                union ice_32b_rx_flex_desc *eop_desc,
> > +                                struct ice_rx_ring *rx_ring)
> > +{
> > +     XSK_CHECK_PRIV_TYPE(struct ice_xdp_buff);
> > +     ((struct ice_xdp_buff *)xdp)->pkt_ctx = rx_ring->pkt_ctx;
>
> I will be thinking out loud here, but this could be set in
> ice_fill_rx_descs(), while grabbing xdp_buffs from the xsk_pool; that
> should minimize the performance overhead.
>
> But then again, you address that with a static branch in a later patch.
>
> OTOH, I was thinking that we could come up with an xsk_buff_pool API
> that would let drivers assign this at setup time, similar to what is
> being done with DMA mappings.
>
> Magnus, do you think it is worth the hassle? Thoughts?

I would measure the overhead of the current assignment, and if it is
significant (if it incurs a cache miss, for example), then why not try
out your idea. It is usually good not to have to touch things when not
needed.
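
If the setup-time route is pursued, I could imagine it looking roughly
like the sketch below. To be clear, xsk_pool_set_priv_ctx() is invented
for illustration and is not existing xsk_buff_pool API, and the sketch
assumes pkt_ctx is a pointer:

/* Hypothetical sketch only: assign the driver's packet context once at
 * pool setup, instead of per packet in the hot path, similar in spirit
 * to how DMA mappings are prepared up front.
 */
static inline void xsk_pool_set_priv_ctx(struct xsk_buff_pool *pool,
					 const struct ice_pkt_ctx *ctx)
{
	u32 i;

	/* Walk every pre-allocated xdp_buff_xsk in the pool and stash
	 * the context in its private area up front.
	 */
	for (i = 0; i < pool->heads_cnt; i++) {
		struct ice_xdp_buff *ixb;

		ixb = (struct ice_xdp_buff *)&pool->heads[i].xdp;
		ixb->pkt_ctx = ctx;
	}
}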

> Or should we advise any other driver that supports hints to mimic the
> static branch solution?
>
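
For reference, the static branch pattern in question is the usual
jump-label one from <linux/jump_label.h>; a minimal sketch, with the
key and helper names invented for illustration:

/* Patched-out branch so the hot path pays nothing when no loaded XDP
 * program uses hints; flip the key on program (un)load.
 */
DEFINE_STATIC_KEY_FALSE(ice_xdp_meta_key);

void ice_xdp_meta_enable(void)
{
	static_branch_inc(&ice_xdp_meta_key);
}

void ice_xdp_meta_disable(void)
{
	static_branch_dec(&ice_xdp_meta_key);
}

/* Hot path: the context is only prepared when the key is enabled. */
static void ice_maybe_prepare_pkt_ctx_zc(struct xdp_buff *xdp,
					 union ice_32b_rx_flex_desc *eop_desc,
					 struct ice_rx_ring *rx_ring)
{
	if (static_branch_unlikely(&ice_xdp_meta_key))
		ice_prepare_pkt_ctx_zc(xdp, eop_desc, rx_ring);
}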
> > +     ice_xdp_meta_set_desc(xdp, eop_desc);
> > +}
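
As an aside, the cast above relies on the xdp_buff_xsk layout; the
sketch below paraphrases it (field details may not match the headers
exactly), just to show why the conversion is safe:

/* Paraphrased, not verbatim: xdp must stay the first member so the
 * cast from xdp_buff is valid, and the cb[] area added in commit
 * 94ecc5ca4dbf is where ice_xdp_buff's extra fields land.
 */
struct xdp_buff_xsk {
	struct xdp_buff xdp;		/* must be first */
	u8 cb[XSK_PRIV_MAX];		/* driver-private scratch area */
	/* ... pool bookkeeping fields ... */
};

/* XSK_CHECK_PRIV_TYPE(struct ice_xdp_buff) turns into a build-time
 * failure if the driver struct outgrows xdp plus cb[].
 */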
> > +
> >  /**
> >   * ice_run_xdp_zc - Executes an XDP program in zero-copy path
> >   * @rx_ring: Rx ring
> >   * @xdp: xdp_buff used as input to the XDP program
> >   * @xdp_prog: XDP program to run
> >   * @xdp_ring: ring to be used for XDP_TX action
> > + * @rx_desc: packet descriptor
> >   *
> >   * Returns any of ICE_XDP_{PASS, CONSUMED, TX, REDIR}
> >   */
> >  static int
> >  ice_run_xdp_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
> > -            struct bpf_prog *xdp_prog, struct ice_tx_ring *xdp_ring)
> > +            struct bpf_prog *xdp_prog, struct ice_tx_ring *xdp_ring,
> > +            union ice_32b_rx_flex_desc *rx_desc)
> >  {
> >       int err, result = ICE_XDP_PASS;
> >       u32 act;
> >
> > +     ice_prepare_pkt_ctx_zc(xdp, rx_desc, rx_ring);
> >       act = bpf_prog_run_xdp(xdp_prog, xdp);
> >
> >       if (likely(act == XDP_REDIRECT)) {
> > @@ -907,7 +936,8 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
> >               if (ice_is_non_eop(rx_ring, rx_desc))
> >                       continue;
> >
> > -             xdp_res = ice_run_xdp_zc(rx_ring, first, xdp_prog, xdp_ring);
> > +             xdp_res = ice_run_xdp_zc(rx_ring, first, xdp_prog, xdp_ring,
> > +                                      rx_desc);
> >               if (likely(xdp_res & (ICE_XDP_TX | ICE_XDP_REDIR))) {
> >                       xdp_xmit |= xdp_res;
> >               } else if (xdp_res == ICE_XDP_EXIT) {
> > --
> > 2.41.0
> >
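
For completeness, on the BPF side these hints would be consumed through
the XDP metadata kfuncs; a rough consumer sketch, assuming a vmlinux.h
build with libbpf:

/* Minimal XDP program reading the timestamp and hash hints. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

extern int bpf_xdp_metadata_rx_timestamp(const struct xdp_md *ctx,
					 __u64 *timestamp) __ksym;
extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx, __u32 *hash,
				    enum xdp_rss_hash_type *rss_type) __ksym;

SEC("xdp")
int read_hints(struct xdp_md *ctx)
{
	enum xdp_rss_hash_type rss_type;
	__u64 ts;
	__u32 hash;

	/* Both kfuncs return 0 on success. */
	if (!bpf_xdp_metadata_rx_timestamp(ctx, &ts))
		bpf_printk("rx timestamp: %llu", ts);
	if (!bpf_xdp_metadata_rx_hash(ctx, &hash, &rss_type))
		bpf_printk("rx hash: 0x%x type: %d", hash, rss_type);

	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";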
