 
Message-ID: <85aaefdf-d454-1823-5840-d9e2f71ffb19@oracle.com>
Date:   Wed, 7 Aug 2019 15:56:07 +0800
From:   Jacob Wen <jian.w.wen@...cle.com>
To:     Firo Yang <firo.yang@...e.com>,
        "davem@...emloft.net" <davem@...emloft.net>
Cc:     "alexander.h.duyck@...ux.intel.com" 
        <alexander.h.duyck@...ux.intel.com>,
        "jeffrey.t.kirsher@...el.com" <jeffrey.t.kirsher@...el.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "intel-wired-lan@...ts.osuosl.org" <intel-wired-lan@...ts.osuosl.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 1/1] ixgbe: sync the first fragment unconditionally

I think the description is not correct. Consider using something like the following.


In a Xen environment, due to memory fragmentation, ixgbe may allocate a 
'DMA' buffer backed by pages that are not physically contiguous.

A NIC cannot DMA directly into such a buffer, so xen-swiotlb substitutes 
physically contiguous pages from the swiotlb bounce buffer for the NIC 
to write into.

The unmap operation then copies the swiotlb bounce buffer back into the 
pages allocated by ixgbe.

On 8/7/19 10:49 AM, Firo Yang wrote:
> In Xen environment, if Xen-swiotlb is enabled, ixgbe driver
> could possibly allocate a page, DMA memory buffer, for the first
> fragment which is not suitable for Xen-swiotlb to do DMA operations.
> Xen-swiotlb have to internally allocate another page for doing DMA
> operations. It requires syncing between those two pages. However,
> since commit f3213d932173 ("ixgbe: Update driver to make use of DMA
> attributes in Rx path"), the unmap operation is performed with
> DMA_ATTR_SKIP_CPU_SYNC. As a result, the sync is not performed.
>
> To fix this problem, always sync before possibly performing a page
> unmap operation.
>
> Fixes: f3213d932173 ("ixgbe: Update driver to make use of DMA
> attributes in Rx path")
> Reviewed-by: Alexander Duyck <alexander.h.duyck@...ux.intel.com>
> Signed-off-by: Firo Yang <firo.yang@...e.com>
> ---
>
> Changes from v1:
>   * Improved the patch description.
>   * Added Reviewed-by: and Fixes: as suggested by Alexander Duyck
>
>   drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 16 +++++++++-------
>   1 file changed, 9 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> index cbaf712d6529..200de9838096 100644
> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> @@ -1825,13 +1825,7 @@ static void ixgbe_pull_tail(struct ixgbe_ring *rx_ring,
>   static void ixgbe_dma_sync_frag(struct ixgbe_ring *rx_ring,
>   				struct sk_buff *skb)
>   {
> -	/* if the page was released unmap it, else just sync our portion */
> -	if (unlikely(IXGBE_CB(skb)->page_released)) {
> -		dma_unmap_page_attrs(rx_ring->dev, IXGBE_CB(skb)->dma,
> -				     ixgbe_rx_pg_size(rx_ring),
> -				     DMA_FROM_DEVICE,
> -				     IXGBE_RX_DMA_ATTR);
> -	} else if (ring_uses_build_skb(rx_ring)) {
> +	if (ring_uses_build_skb(rx_ring)) {
>   		unsigned long offset = (unsigned long)(skb->data) & ~PAGE_MASK;
>   
>   		dma_sync_single_range_for_cpu(rx_ring->dev,
> @@ -1848,6 +1842,14 @@ static void ixgbe_dma_sync_frag(struct ixgbe_ring *rx_ring,
>   					      skb_frag_size(frag),
>   					      DMA_FROM_DEVICE);
>   	}
> +
> +	/* If the page was released, just unmap it. */
> +	if (unlikely(IXGBE_CB(skb)->page_released)) {
> +		dma_unmap_page_attrs(rx_ring->dev, IXGBE_CB(skb)->dma,
> +				     ixgbe_rx_pg_size(rx_ring),
> +				     DMA_FROM_DEVICE,
> +				     IXGBE_RX_DMA_ATTR);
> +	}
>   }
>   
>   /**
