Message-ID: <20100507092357.GA19936@n2100.arm.linux.org.uk>
Date:	Fri, 7 May 2010 10:23:57 +0100
From:	Russell King - ARM Linux <linux@....linux.org.uk>
To:	Linus Walleij <linus.walleij@...ricsson.com>
Cc:	Dan Williams <dan.j.williams@...el.com>, linux-mmc@...r.kernel.org,
	linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH 5/7] ARM: add PrimeCell generic DMA to PL011 v6

On Mon, May 03, 2010 at 02:55:13AM +0200, Linus Walleij wrote:
> +	/* Map DMA buffers */
> +	sglen = dma_map_sg(uap->port.dev, &dmarx->scatter_a,
> +			   1, DMA_FROM_DEVICE);
> +	if (sglen != 1)
> +		goto err_rx_sgmap_a;
> +
> +	sglen = dma_map_sg(uap->port.dev, &dmarx->scatter_b,
> +			   1, DMA_FROM_DEVICE);
> +	if (sglen != 1)
> +		goto err_rx_sgmap_b;
> +
> +	sglen = dma_map_sg(uap->port.dev, &dmatx->scatter,
> +			   1, DMA_TO_DEVICE);
> +	if (sglen != 1)
> +		goto err_tx_sgmap;


So as soon as we allocate these, we hand them over to DMA device ownership...
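
For reference, a minimal sketch of that ownership hand-over, assuming a
single-entry scatterlist as in the patch (function name is illustrative
only, not from the driver):

#include <linux/errno.h>
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/*
 * Once dma_map_sg() succeeds, the memory behind 'sg' belongs to the DMA
 * device; the CPU must not touch it again until ownership is handed back
 * via dma_sync_sg_for_cpu() or dma_unmap_sg().
 */
static int pl011_sketch_map_rx(struct device *dev, struct scatterlist *sg)
{
	int nents = dma_map_sg(dev, sg, 1, DMA_FROM_DEVICE);

	if (nents != 1)
		return -EBUSY;	/* mapping failed */

	/* From here on, only the DMA device may access the buffer. */
	return 0;
}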

> +	/* Else proceed to copy the TX chars to the DMA buffer and fire DMA */
> +	count = uart_circ_chars_pending(xmit);
> +	if (count > PL011_DMA_BUFFER_SIZE)
> +		count = PL011_DMA_BUFFER_SIZE;
> +
> +	if (xmit->tail < xmit->head)
> +		memcpy(&dmatx->tx_dma_buf[0], &xmit->buf[xmit->tail], count);
> +	else {
> +		size_t first = UART_XMIT_SIZE - xmit->tail;
> +		size_t second = xmit->head;
> +
> +		memcpy(&dmatx->tx_dma_buf[0], &xmit->buf[xmit->tail], first);
> +		memcpy(&dmatx->tx_dma_buf[first], &xmit->buf[0], second);
> +	}

But here we write to the buffers without first switching them back to CPU
ownership. Only one of {CPU, DMA} owns the DMA buffer at any one time, and
only the current owner is permitted under the DMA API rules to access that
buffer.
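
One way to satisfy that rule, if the sg entries are to stay mapped for the
lifetime of the port, would be to hand the buffer back to the CPU around the
copy and return it to the device before the descriptor is submitted - a
sketch only, not a statement of how the respin should look:

	/* Reclaim the TX buffer for the CPU before filling it. */
	dma_sync_sg_for_cpu(uap->port.dev, &dmatx->scatter, 1, DMA_TO_DEVICE);

	if (xmit->tail < xmit->head)
		memcpy(&dmatx->tx_dma_buf[0], &xmit->buf[xmit->tail], count);
	else {
		size_t first = UART_XMIT_SIZE - xmit->tail;
		size_t second = xmit->head;

		memcpy(&dmatx->tx_dma_buf[0], &xmit->buf[xmit->tail], first);
		memcpy(&dmatx->tx_dma_buf[first], &xmit->buf[0], second);
	}

	/* Hand it back to the device before starting the transfer. */
	dma_sync_sg_for_device(uap->port.dev, &dmatx->scatter, 1, DMA_TO_DEVICE);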

Consider the situation where you've written to the first half of a cache
line, but the DMA device has yet to read from the second half of that
cache line - the result is a corrupted transfer.
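
The alternative is to map the TX buffer only once the CPU has finished
filling it, e.g. with dma_map_single() per transfer, and unmap it again on
completion. Again only a sketch; 'dmatx->dma_addr' is an illustrative field,
not something taken from the patch:

	/* CPU owns the buffer: fill it first... */
	/* (circular-buffer wrap handling omitted for brevity) */
	memcpy(dmatx->tx_dma_buf, &xmit->buf[xmit->tail], count);

	/* ...then hand it to the device for this transfer only. */
	dmatx->dma_addr = dma_map_single(uap->port.dev, dmatx->tx_dma_buf,
					 count, DMA_TO_DEVICE);
	if (dma_mapping_error(uap->port.dev, dmatx->dma_addr))
		return -EBUSY;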
