Date:   Tue, 20 Sep 2022 15:34:19 +0200
From:   Magnus Karlsson <magnus.karlsson@...il.com>
To:     Jalal Mostafa <jalal.a.mostapha@...il.com>
Cc:     netdev@...r.kernel.org, bpf@...r.kernel.org, bjorn@...nel.org,
        magnus.karlsson@...el.com, maciej.fijalkowski@...el.com,
        jonathan.lemon@...il.com, davem@...emloft.net, edumazet@...gle.com,
        kuba@...nel.org, pabeni@...hat.com, daniel@...earbox.net,
        linux-kernel@...r.kernel.org, jalal.mostafa@....edu
Subject: Re: [PATCH bpf v2] xsk: inherit need_wakeup flag for shared sockets

On Tue, Sep 20, 2022 at 1:58 PM Jalal Mostafa
<jalal.a.mostapha@...il.com> wrote:
>
> The need_wakeup flag is not set for xsks bound with the
> `XDP_SHARED_UMEM` flag to a different queue id and/or device than
> the first socket. They should inherit the flag from the first
> socket's buffer pool, since no other flags can be specified once
> `XDP_SHARED_UMEM` is given. Fix this by passing the first socket to
> `xp_assign_dev_shared()` so the new buffer pool can inherit its
> `uses_need_wakeup` setting.

Thanks!

Acked-by: Magnus Karlsson <magnus.karlsson@...el.com>

> Fixes: b5aea28dca134 ("xsk: Add shared umem support between queue ids")
> Signed-off-by: Jalal Mostafa <jalal.a.mostapha@...il.com>
> ---
>  include/net/xsk_buff_pool.h | 2 +-
>  net/xdp/xsk.c               | 4 ++--
>  net/xdp/xsk_buff_pool.c     | 5 +++--
>  3 files changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
> index 647722e847b4..f787c3f524b0 100644
> --- a/include/net/xsk_buff_pool.h
> +++ b/include/net/xsk_buff_pool.h
> @@ -95,7 +95,7 @@ struct xsk_buff_pool *xp_create_and_assign_umem(struct xdp_sock *xs,
>                                                 struct xdp_umem *umem);
>  int xp_assign_dev(struct xsk_buff_pool *pool, struct net_device *dev,
>                   u16 queue_id, u16 flags);
> -int xp_assign_dev_shared(struct xsk_buff_pool *pool, struct xdp_umem *umem,
> +int xp_assign_dev_shared(struct xsk_buff_pool *pool, struct xdp_sock *umem_xs,
>                          struct net_device *dev, u16 queue_id);
>  int xp_alloc_tx_descs(struct xsk_buff_pool *pool, struct xdp_sock *xs);
>  void xp_destroy(struct xsk_buff_pool *pool);
> diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> index 5b4ce6ba1bc7..7bada4e8460b 100644
> --- a/net/xdp/xsk.c
> +++ b/net/xdp/xsk.c
> @@ -954,8 +954,8 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
>                                 goto out_unlock;
>                         }
>
> -                       err = xp_assign_dev_shared(xs->pool, umem_xs->umem,
> -                                                  dev, qid);
> +                       err = xp_assign_dev_shared(xs->pool, umem_xs, dev,
> +                                                  qid);
>                         if (err) {
>                                 xp_destroy(xs->pool);
>                                 xs->pool = NULL;
> diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
> index a71a8c6edf55..ed6c71826d31 100644
> --- a/net/xdp/xsk_buff_pool.c
> +++ b/net/xdp/xsk_buff_pool.c
> @@ -212,17 +212,18 @@ int xp_assign_dev(struct xsk_buff_pool *pool,
>         return err;
>  }
>
> -int xp_assign_dev_shared(struct xsk_buff_pool *pool, struct xdp_umem *umem,
> +int xp_assign_dev_shared(struct xsk_buff_pool *pool, struct xdp_sock *umem_xs,
>                          struct net_device *dev, u16 queue_id)
>  {
>         u16 flags;
> +       struct xdp_umem *umem = umem_xs->umem;
>
>         /* One fill and completion ring required for each queue id. */
>         if (!pool->fq || !pool->cq)
>                 return -EINVAL;
>
>         flags = umem->zc ? XDP_ZEROCOPY : XDP_COPY;
> -       if (pool->uses_need_wakeup)
> +       if (umem_xs->pool->uses_need_wakeup)
>                 flags |= XDP_USE_NEED_WAKEUP;
>
>         return xp_assign_dev(pool, dev, queue_id, flags);
> --
> 2.34.1
>
