Message-ID: <VI1PR04MB5567A2BC5EDF345F6A8AB59CEC2E0@VI1PR04MB5567.eurprd04.prod.outlook.com>
Date: Mon, 23 Dec 2019 06:50:03 +0000
From: "Madalin Bucur (OSS)" <madalin.bucur@....nxp.com>
To: David Miller <davem@...emloft.net>,
"Madalin Bucur (OSS)" <madalin.bucur@....nxp.com>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: RE: [PATCH] dpaa_eth: fix DMA mapping leak
> -----Original Message-----
> From: David Miller <davem@...emloft.net>
> Sent: Saturday, December 21, 2019 7:36 AM
> To: Madalin Bucur <madalin.bucur@....com>; Madalin Bucur (OSS)
> <madalin.bucur@....nxp.com>
> Cc: netdev@...r.kernel.org
> Subject: Re: [PATCH] dpaa_eth: fix DMA mapping leak
>
> From: Madalin Bucur <madalin.bucur@....nxp.com>
> Date: Thu, 19 Dec 2019 16:08:48 +0200
>
> > @@ -1744,6 +1744,9 @@ static struct sk_buff *sg_fd_to_skb(const struct dpaa_priv *priv,
> >  		count_ptr = this_cpu_ptr(dpaa_bp->percpu_count);
> >  		dma_unmap_page(priv->rx_dma_dev, sg_addr,
> >  			       DPAA_BP_RAW_SIZE, DMA_FROM_DEVICE);
> > +
> > +		j++; /* fragments up to j were DMA unmapped */
> > +
>
> You can move this code:
>
> 	/* We may use multiple Rx pools */
> 	dpaa_bp = dpaa_bpid2pool(sgt[i].bpid);
> 	if (!dpaa_bp)
> 		goto free_buffers;
>
> 	count_ptr = this_cpu_ptr(dpaa_bp->percpu_count);
>
> after the dma_unmap_page() call; that is a much simpler way to
> fix this bug.
Thank you, that will yield a simpler cleanup path: with the unmap done
before the pool lookup, a failed dpaa_bpid2pool() no longer leaves the
current fragment's DMA mapping in place, so the error path does not
need to track how many fragments were already unmapped.
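
For reference, the per-fragment loop in sg_fd_to_skb() would then look
roughly like this (a sketch only; the skb-building steps elided in the
comment stay as they are in the current driver):

	for (i = 0; i < DPAA_SGT_MAX_ENTRIES; i++) {
		/* Extension bit is not supported */
		WARN_ON(qm_sg_entry_is_ext(&sgt[i]));

		sg_addr = qm_sg_addr(&sgt[i]);
		sg_vaddr = phys_to_virt(sg_addr);

		/* unmap first, so bailing out on a failed pool lookup
		 * below can no longer leak this fragment's mapping
		 */
		dma_unmap_page(priv->rx_dma_dev, sg_addr,
			       DPAA_BP_RAW_SIZE, DMA_FROM_DEVICE);

		/* We may use multiple Rx pools */
		dpaa_bp = dpaa_bpid2pool(sgt[i].bpid);
		if (!dpaa_bp)
			goto free_buffers;

		count_ptr = this_cpu_ptr(dpaa_bp->percpu_count);

		/* ... build the skb from sg_vaddr as before ... */

		if (qm_sg_entry_is_final(&sgt[i]))
			break;
	}

That way every fragment is unmapped before any bail-out, and the
free_buffers path only has to return the buffers themselves, making
the j counter from the original patch unnecessary.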