Date:   Mon, 17 Apr 2023 02:43:32 -0400
From:   "Michael S. Tsirkin" <mst@...hat.com>
To:     Xuan Zhuo <xuanzhuo@...ux.alibaba.com>
Cc:     netdev@...r.kernel.org,
        Björn Töpel <bjorn@...nel.org>,
        Magnus Karlsson <magnus.karlsson@...el.com>,
        Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
        Jonathan Lemon <jonathan.lemon@...il.com>,
        "David S. Miller" <davem@...emloft.net>,
        Eric Dumazet <edumazet@...gle.com>,
        Jakub Kicinski <kuba@...nel.org>,
        Paolo Abeni <pabeni@...hat.com>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Jesper Dangaard Brouer <hawk@...nel.org>,
        John Fastabend <john.fastabend@...il.com>, bpf@...r.kernel.org,
        virtualization@...ts.linux-foundation.org,
        Jason Wang <jasowang@...hat.com>,
        Guenter Roeck <linux@...ck-us.net>,
        Gerd Hoffmann <kraxel@...hat.com>,
        Christoph Hellwig <hch@...radead.org>
Subject: Re: [PATCH net-next] xsk: introduce xsk_dma_ops

On Mon, Apr 17, 2023 at 11:27:50AM +0800, Xuan Zhuo wrote:
> @@ -532,9 +545,9 @@ struct xdp_buff *xp_alloc(struct xsk_buff_pool *pool)
>  	xskb->xdp.data_meta = xskb->xdp.data;
>  
>  	if (pool->dma_need_sync) {
> -		dma_sync_single_range_for_device(pool->dev, xskb->dma, 0,
> -						 pool->frame_len,
> -						 DMA_BIDIRECTIONAL);
> +		pool->dma_ops.sync_single_range_for_device(pool->dev, xskb->dma, 0,
> +							   pool->frame_len,
> +							   DMA_BIDIRECTIONAL);
>  	}
>  	return &xskb->xdp;
>  }
> @@ -670,15 +683,15 @@ EXPORT_SYMBOL(xp_raw_get_dma);
>  
>  void xp_dma_sync_for_cpu_slow(struct xdp_buff_xsk *xskb)
>  {
> -	dma_sync_single_range_for_cpu(xskb->pool->dev, xskb->dma, 0,
> -				      xskb->pool->frame_len, DMA_BIDIRECTIONAL);
> +	xskb->pool->dma_ops.sync_single_range_for_cpu(xskb->pool->dev, xskb->dma, 0,
> +						      xskb->pool->frame_len, DMA_BIDIRECTIONAL);
>  }
>  EXPORT_SYMBOL(xp_dma_sync_for_cpu_slow);
>  
>  void xp_dma_sync_for_device_slow(struct xsk_buff_pool *pool, dma_addr_t dma,
>  				 size_t size)
>  {
> -	dma_sync_single_range_for_device(pool->dev, dma, 0,
> -					 size, DMA_BIDIRECTIONAL);
> +	pool->dma_ops.sync_single_range_for_device(pool->dev, dma, 0,
> +						   size, DMA_BIDIRECTIONAL);
>  }
>  EXPORT_SYMBOL(xp_dma_sync_for_device_slow);

So you add an indirect function call on the data path? Won't this be costly?
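
To make the concern concrete, here is a rough sketch of the indirection involved; only the call sites are visible in the quoted hunks, so the struct layout and field names below are assumptions, not taken from the full patch:

	/* Sketch only: layout and names inferred from the quoted call sites. */
	struct xsk_dma_ops {
		void (*sync_single_range_for_cpu)(struct device *dev,
						  dma_addr_t addr,
						  unsigned long offset,
						  size_t size,
						  enum dma_data_direction dir);
		void (*sync_single_range_for_device)(struct device *dev,
						     dma_addr_t addr,
						     unsigned long offset,
						     size_t size,
						     enum dma_data_direction dir);
	};

	/*
	 * Before: dma_sync_single_range_for_device() is a direct call the
	 * compiler can see. After: every sync on the hot path loads a
	 * function pointer and makes an indirect call (a retpoline thunk
	 * on mitigated kernels), e.g.:
	 */
	if (pool->dma_need_sync)
		pool->dma_ops.sync_single_range_for_device(pool->dev,
							   xskb->dma, 0,
							   pool->frame_len,
							   DMA_BIDIRECTIONAL);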

> -- 
> 2.32.0.3.g01195cf9f
