Message-ID: <92138f5d3c8548f4ad9637cd0736139d@IL-EXCH01.marvell.com>
Date:   Sat, 22 Sep 2018 07:07:57 +0000
From:   Yelena Krivosheev <yelena@...vell.com>
To:     Gregory CLEMENT <gregory.clement@...tlin.com>,
        Antoine Tenart <antoine.tenart@...tlin.com>
CC:     "davem@...emloft.net" <davem@...emloft.net>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "thomas.petazzoni@...tlin.com" <thomas.petazzoni@...tlin.com>,
        "maxime.chevallier@...tlin.com" <maxime.chevallier@...tlin.com>,
        "miquel.raynal@...tlin.com" <miquel.raynal@...tlin.com>,
        Nadav Haklai <nadavh@...vell.com>,
        "Stefan Chulski" <stefanc@...vell.com>,
        Yan Markman <ymarkman@...vell.com>,
        "mw@...ihalf.com" <mw@...ihalf.com>
Subject: RE: [EXT] [PATCH net] net: mvneta: fix the Rx desc buffer DMA
 unmapping

Hi Gregory.

I want to clarify what the static mvneta_rxq_drop_pkts() does:

static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
				 struct mvneta_rx_queue *rxq)
{
	int rx_done, i;

	rx_done = mvneta_rxq_busy_desc_num_get(pp, rxq);
	if (rx_done)
		mvneta_rxq_desc_num_update(pp, rxq, rx_done, rx_done);

	if (pp->bm_priv) { /* <-- this is the HWBM case */
		for (i = 0; i < rx_done; i++) {
			struct mvneta_rx_desc *rx_desc =
						  mvneta_rxq_next_desc_get(rxq);
			u8 pool_id = MVNETA_RX_GET_BM_POOL_ID(rx_desc);
			struct mvneta_bm_pool *bm_pool;

			bm_pool = &pp->bm_priv->bm_pools[pool_id];
			/* Return dropped buffer to the pool */
			mvneta_bm_pool_put_bp(pp->bm_priv, bm_pool,
					      rx_desc->buf_phys_addr);
		}
		return;
	}

	/* <-- this is the SWBM-only case */
	for (i = 0; i < rxq->size; i++) {
		struct mvneta_rx_desc *rx_desc = rxq->descs + i;
		void *data = rxq->buf_virt_addr[i];

		if (!data || !(rx_desc->buf_phys_addr))
			continue;
		dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr,
				 MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE);
		__free_page(data);
	}
}

So I suggest fixing the dma_unmap_single() call here too.
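
Something along these lines (untested sketch only, assuming the SWBM refill path maps whole pages with dma_map_page() and PAGE_SIZE):

	for (i = 0; i < rxq->size; i++) {
		struct mvneta_rx_desc *rx_desc = rxq->descs + i;
		void *data = rxq->buf_virt_addr[i];

		if (!data || !(rx_desc->buf_phys_addr))
			continue;
		/* Unmap with the page variant and PAGE_SIZE so it matches the
		 * dma_map_page() done at refill time (assumption, not tested).
		 */
		dma_unmap_page(pp->dev->dev.parent, rx_desc->buf_phys_addr,
			       PAGE_SIZE, DMA_FROM_DEVICE);
		__free_page(data);
	}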

Thanks.
Yelena


-----Original Message-----
From: Gregory CLEMENT [mailto:gregory.clement@...tlin.com] 
Sent: Thursday, September 20, 2018 6:00 PM
To: Antoine Tenart <antoine.tenart@...tlin.com>
Cc: Yelena Krivosheev <yelena@...vell.com>; davem@...emloft.net; netdev@...r.kernel.org; linux-kernel@...r.kernel.org; thomas.petazzoni@...tlin.com; maxime.chevallier@...tlin.com; miquel.raynal@...tlin.com; Nadav Haklai <nadavh@...vell.com>; Stefan Chulski <stefanc@...vell.com>; Yan Markman <ymarkman@...vell.com>; mw@...ihalf.com
Subject: Re: [EXT] [PATCH net] net: mvneta: fix the Rx desc buffer DMA unmapping

Hi Antoine,
 
 On Thu., Sept. 20 2018, Antoine Tenart <antoine.tenart@...tlin.com> wrote:

> Hi Yelena,
>
> On Thu, Sep 20, 2018 at 10:14:56AM +0000, Yelena Krivosheev wrote:
>> 
>> Please, check and fix all cases of dma_unmap_single() usage.
>> See mvneta_rxq_drop_pkts()
>> ...
>> 		if (!data || !(rx_desc->buf_phys_addr))
>> 			continue;
>> 		dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr,
>> 				 MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE);
>> 		__free_page(data);
>> ...
>
> I had a look at the one reported by CONFIG_DMA_API_DEBUG, and at DMA 
> unmapping calls using PAGE_SIZE. As you pointed out there might be 
> other parts, thanks!

Actually, Jisheng submitted a similar patch a few weeks ago and, as I pointed out at the time, the dma_unmap in mvneta_rxq_drop_pkts() can also be called when the allocation was done with HWBM, which uses dma_map_single().

I thought that in this case using dma_unmap_single() is the thing to do, even if in the SWBM case it is less optimal.
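
(Purely as an illustrative sketch of the pairing rule being discussed here, not the mvneta code itself: CONFIG_DMA_API_DEBUG complains whenever a mapping is not released with its matching unmap variant.)

	#include <linux/dma-mapping.h>

	/* Hypothetical helper, only to illustrate the map/unmap pairing that
	 * CONFIG_DMA_API_DEBUG checks; dev, buf, len and page are placeholders,
	 * not driver state.
	 */
	static void dma_pairing_demo(struct device *dev, void *buf, size_t len,
				     struct page *page)
	{
		dma_addr_t addr;

		/* dma_map_single() pairs with dma_unmap_single() */
		addr = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
		if (!dma_mapping_error(dev, addr))
			dma_unmap_single(dev, addr, len, DMA_FROM_DEVICE);

		/* dma_map_page() pairs with dma_unmap_page() */
		addr = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
		if (!dma_mapping_error(dev, addr))
			dma_unmap_page(dev, addr, PAGE_SIZE, DMA_FROM_DEVICE);
	}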

Gregory

>
> Antoine
>
> --
> Antoine Ténart, Bootlin
> Embedded Linux and Kernel engineering
> https://bootlin.com

--
Gregory Clement, Bootlin
Embedded Linux and Kernel engineering
http://bootlin.com
