Message-Id: <20191220.213532.2095474595045639925.davem@davemloft.net>
Date: Fri, 20 Dec 2019 21:35:32 -0800 (PST)
From: David Miller <davem@...emloft.net>
To: madalin.bucur@....com, madalin.bucur@....nxp.com
Cc: netdev@...r.kernel.org
Subject: Re: [PATCH] dpaa_eth: fix DMA mapping leak
From: Madalin Bucur <madalin.bucur@....nxp.com>
Date: Thu, 19 Dec 2019 16:08:48 +0200
> @@ -1744,6 +1744,9 @@ static struct sk_buff *sg_fd_to_skb(const struct dpaa_priv *priv,
>  		count_ptr = this_cpu_ptr(dpaa_bp->percpu_count);
>  		dma_unmap_page(priv->rx_dma_dev, sg_addr,
>  			       DPAA_BP_RAW_SIZE, DMA_FROM_DEVICE);
> +
> +		j++; /* fragments up to j were DMA unmapped */
> +
You can move this code:
	/* We may use multiple Rx pools */
	dpaa_bp = dpaa_bpid2pool(sgt[i].bpid);
	if (!dpaa_bp)
		goto free_buffers;
	count_ptr = this_cpu_ptr(dpaa_bp->percpu_count);
after the dma_unmap_page() call, which is a much simpler way to fix
this bug.
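
For reference, a minimal sketch of what the loop body might look like
with that move applied, based only on the lines quoted in this thread
(the enclosing loop over sgt[] in sg_fd_to_skb(), the sg_addr
computation and the free_buffers error path are assumed from the
surrounding driver code and not shown):

	/* Unmap first, so the DMA mapping is released even if the
	 * buffer pool lookup below fails and we bail out early.
	 */
	dma_unmap_page(priv->rx_dma_dev, sg_addr,
		       DPAA_BP_RAW_SIZE, DMA_FROM_DEVICE);

	/* We may use multiple Rx pools */
	dpaa_bp = dpaa_bpid2pool(sgt[i].bpid);
	if (!dpaa_bp)
		goto free_buffers;

	count_ptr = this_cpu_ptr(dpaa_bp->percpu_count);

With the unmap done before any early exit, the separate j counter
added in the patch above would presumably no longer be needed.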