Date:	Mon, 3 Feb 2014 21:14:37 +0900
From:	Akinobu Mita <akinobu.mita@...il.com>
To:	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
Cc:	Marek Szyprowski <m.szyprowski@...sung.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	David Woodhouse <dwmw2@...radead.org>,
	Don Dutile <ddutile@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>,
	"H. Peter Anvin" <hpa@...or.com>, Andi Kleen <andi@...stfloor.org>,
	x86@...nel.org, iommu@...ts.linux-foundation.org
Subject: Re: [PATCH v2 1/5] x86: make dma_alloc_coherent() return zeroed
 memory if CMA is enabled

2014-01-29 Akinobu Mita <akinobu.mita@...il.com>:
> 2014-01-28 Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>:
>> On Mon, Jan 27, 2014 at 02:54:47PM +0100, Marek Szyprowski wrote:
>>> Hello,
>>>
>>> On 2014-01-14 15:13, Akinobu Mita wrote:
>>> >Calling dma_alloc_coherent() with __GFP_ZERO must return zeroed memory.
>>> >
>>> >But when the contiguous memory allocator (CMA) is enabled on x86 and
>>> >the memory region is allocated by dma_alloc_from_contiguous(), it
>>> >doesn't return zeroed memory.  Because dma_generic_alloc_coherent()
>>> >forgot to fill the memory region with zero if it was allocated by
>>> >dma_alloc_from_contiguous()
>>>
>>> I just wonder how it will work with high mem? I've didn't check the x86
>>> dma mapping code yet, but page_address() works only for pages, which comes
>>> from low memory. In other patches you have added an option to place CMA
>>> area anywhere in the memory. Is the x86 pci dma code ready for the case
>>> when cma area is put into high mem and direct mappings are not available?
>>
>> Yes and no. swiotlb_bounce does have code to take that into account.
>> But that is it - nothing else does - so I think you would run into the
>> possibility of 'page_address' not providing a correct virtual address.
>
> Thanks for spotting the issue.  I haven't tested much on x86_32.
> I'll go through it and try to find a solution.

I have confirmed that locating the CMA area in the highmem range with
the 'cma=size@...rt-end' kernel parameter introduced by this patch set
causes the issue on x86_32.

This can be fixed by limiting the CMA area to below max_low_pfn, so it
is never placed in highmem, in arch/x86/kernel/setup.c:setup_arch():

-       dma_contiguous_reserve(0);
+       dma_contiguous_reserve(max_low_pfn << PAGE_SHIFT);

I'm going to include this change in this patch set.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
