Message-ID: <8295446.YZpkE7ns4p@avalon>
Date: Tue, 28 Oct 2014 17:12:14 +0200
From: Laurent Pinchart <laurent.pinchart@...asonboard.com>
To: Laura Abbott <lauraa@...eaurora.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
linux-sh@...r.kernel.org, Michal Nazarewicz <mina86@...a86.com>,
Bartlomiej Zolnierkiewicz <b.zolnierkie@...sung.com>,
Minchan Kim <minchan@...nel.org>,
Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: Re: CMA: test_pages_isolated failures in alloc_contig_range
Hi Laura,
On Monday 27 October 2014 13:38:19 Laura Abbott wrote:
> On 10/26/2014 2:09 PM, Laurent Pinchart wrote:
> > Hello,
> >
> > I've run into a CMA-related issue while testing a DMA engine driver with
> > dmatest on a Renesas R-Car ARM platform.
> >
> > When allocating contiguous memory through CMA the kernel prints the
> > following messages to the kernel log.
> >
> > [ 99.770000] alloc_contig_range test_pages_isolated(6b843, 6b844) failed
> > [ 124.220000] alloc_contig_range test_pages_isolated(6b843, 6b844) failed
> > [ 127.550000] alloc_contig_range test_pages_isolated(6b845, 6b846) failed
> > [ 132.850000] alloc_contig_range test_pages_isolated(6b845, 6b846) failed
> > [ 151.390000] alloc_contig_range test_pages_isolated(6b843, 6b844) failed
> > [ 166.490000] alloc_contig_range test_pages_isolated(6b843, 6b844) failed
> > [ 181.450000] alloc_contig_range test_pages_isolated(6b845, 6b846) failed
> >
> > I've stripped the dmatest module down as much as possible to remove any
> > hardware dependencies and came up with the following implementation.
>
> ...
>
> > Loading the module will start 4 threads that will allocate and free DMA
> > coherent memory in a tight loop and eventually produce the error. It seems
> > like the probability of occurrence grows with the number of threads, which
> > could indicate a race condition.
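
[The core of such a stripped-down test might look roughly like the following sketch. This is a hypothetical reconstruction, not the posted module; it assumes `dev` points at a platform device set up elsewhere at probe time, with its DMA mask configured and its coherent allocations backed by the CMA region:]

```c
/* Hypothetical sketch: hammer dma_alloc_coherent()/dma_free_coherent()
 * from several kernel threads in a tight loop. */
#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/dma-mapping.h>
#include <linux/sched.h>
#include <linux/sizes.h>
#include <linux/err.h>

#define NUM_THREADS 4

static struct task_struct *threads[NUM_THREADS];
static struct device *dev;	/* assumption: initialized at probe time */

static int alloc_free_loop(void *data)
{
	dma_addr_t dma;
	void *cpu;

	/* Allocate and immediately free coherent memory; with several
	 * threads running this races against CMA page isolation. */
	while (!kthread_should_stop()) {
		cpu = dma_alloc_coherent(dev, SZ_64K, &dma, GFP_KERNEL);
		if (cpu)
			dma_free_coherent(dev, SZ_64K, cpu, dma);
		cond_resched();
	}
	return 0;
}

static int __init cmatest_init(void)
{
	int i;

	for (i = 0; i < NUM_THREADS; i++)
		threads[i] = kthread_run(alloc_free_loop, NULL,
					 "cmatest/%d", i);
	return 0;
}

static void __exit cmatest_exit(void)
{
	int i;

	for (i = 0; i < NUM_THREADS; i++)
		if (!IS_ERR_OR_NULL(threads[i]))
			kthread_stop(threads[i]);
}

module_init(cmatest_init);
module_exit(cmatest_exit);
MODULE_LICENSE("GPL");
```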
> >
> > The tests were run on 3.18-rc1, but earlier tests on 3.16 exhibited the
> > same behaviour.
> >
> > I'm not that familiar with the CMA internals, so any help debugging the
> > problem would be appreciated.
>
> Are you actually seeing allocation failures or is it just the messages?
It's just the messages; I haven't noticed any allocation failures.
> The messages themselves may be harmless if the allocation is succeeding.
> It's an indication that the particular range could not be isolated and
> therefore another range should be used for the CMA allocation. Joonsoo
> Kim had a patch series[1] that was designed to correct some problems with
> isolation and from my testing it helps fix some CMA related errors. You
> might try picking that up to see if it helps.
>
> Thanks,
> Laura
>
> [1] https://lkml.org/lkml/2014/10/23/90
I've tested the patches but they don't seem to have any influence on the
isolation test failures.
--
Regards,
Laurent Pinchart