Date:	Mon, 27 Oct 2014 13:38:19 -0700
From:	Laura Abbott <lauraa@...eaurora.org>
To:	Laurent Pinchart <laurent.pinchart@...asonboard.com>,
	linux-mm@...ck.org
CC:	linux-kernel@...r.kernel.org, linux-sh@...r.kernel.org,
	Michal Nazarewicz <mina86@...a86.com>,
	Bartlomiej Zolnierkiewicz <b.zolnierkie@...sung.com>,
	Minchan Kim <minchan@...nel.org>
Subject: Re: CMA: test_pages_isolated failures in alloc_contig_range

On 10/26/2014 2:09 PM, Laurent Pinchart wrote:
> Hello,
>
> I've run into a CMA-related issue while testing a DMA engine driver with
> dmatest on a Renesas R-Car ARM platform.
>
> When allocating contiguous memory through CMA the kernel prints the following
> messages to the kernel log.
>
> [   99.770000] alloc_contig_range test_pages_isolated(6b843, 6b844) failed
> [  124.220000] alloc_contig_range test_pages_isolated(6b843, 6b844) failed
> [  127.550000] alloc_contig_range test_pages_isolated(6b845, 6b846) failed
> [  132.850000] alloc_contig_range test_pages_isolated(6b845, 6b846) failed
> [  151.390000] alloc_contig_range test_pages_isolated(6b843, 6b844) failed
> [  166.490000] alloc_contig_range test_pages_isolated(6b843, 6b844) failed
> [  181.450000] alloc_contig_range test_pages_isolated(6b845, 6b846) failed
>
> I've stripped the dmatest module down as much as possible to remove any
> hardware dependencies and came up with the following implementation.
>
...
>
> Loading the module will start 4 threads that will allocate and free DMA
> coherent memory in a tight loop and eventually produce the error. It seems
> like the probability of occurrence grows with the number of threads, which
> could indicate a race condition.
>
> The tests were run on 3.18-rc1, but previous tests on 3.16 exhibited the
> same behaviour.
>
> I'm not that familiar with the CMA internals; help debugging the problem
> would be appreciated.
>
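
The elided module body boils down to a pattern like the following. This is a
hypothetical sketch rather than Laurent's actual stripped-down dmatest (which
was trimmed from the quote above); the names stress_thread, NTHREADS and
BUF_SIZE are illustrative, and a real module would also need init/exit hooks
and a device obtained at probe time to allocate against:

```c
/*
 * Hypothetical skeleton of a dmatest-style stress module: N kernel
 * threads allocating and freeing DMA coherent memory in a tight loop.
 */
#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/dma-mapping.h>

#define NTHREADS 4
#define BUF_SIZE (64 * 1024)   /* illustrative size, not from the report */

static struct task_struct *threads[NTHREADS];

static int stress_thread(void *data)
{
	struct device *dev = data;   /* device claimed at probe time */

	while (!kthread_should_stop()) {
		dma_addr_t dma;
		void *virt;

		/* With CONFIG_DMA_CMA this is typically backed by
		 * dma_alloc_from_contiguous() -> alloc_contig_range() */
		virt = dma_alloc_coherent(dev, BUF_SIZE, &dma, GFP_KERNEL);
		if (virt)
			dma_free_coherent(dev, BUF_SIZE, virt, dma);
		cond_resched();
	}
	return 0;
}
```

Running several such threads concurrently hammers the same CMA region from
multiple contexts, which matches the observation that the failure probability
grows with the thread count.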

Are you actually seeing allocation failures, or just the messages?
The messages themselves may be harmless if the allocation is succeeding:
they indicate that the particular range could not be isolated and that
another range should therefore be tried for the CMA allocation. Joonsoo
Kim had a patch series[1] designed to correct some problems with
isolation, and in my testing it fixes some CMA-related errors. You
might try picking it up to see if it helps.
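
The reason the message alone can be harmless is visible in the allocator's
retry behaviour. Roughly (a paraphrase of the v3.18-era cma_alloc() in
mm/cma.c from memory, not a verbatim quote; locking and bitmap bookkeeping
omitted):

```c
/* Sketch of the cma_alloc() retry loop: a failed isolation makes
 * alloc_contig_range() return -EBUSY, after which the search simply
 * resumes past the failed range. The printed test_pages_isolated
 * message therefore does not by itself mean the allocation failed. */
for (;;) {
	pageno = bitmap_find_next_zero_area(cma->bitmap, cma->count,
					    start, count, mask);
	if (pageno >= cma->count)
		break;				/* genuinely out of space */

	pfn = cma->base_pfn + pageno;
	ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA);
	if (ret == 0)
		break;				/* success */
	if (ret != -EBUSY)
		break;				/* hard failure */

	/* isolation failed: retry starting just past this range */
	start = pageno + mask + 1;
}
```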

Thanks,
Laura

[1] https://lkml.org/lkml/2014/10/23/90

-- 
Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project
