Message-ID: <20160323111155.GB2057@leverpostej>
Date: Wed, 23 Mar 2016 11:11:55 +0000
From: Mark Rutland <mark.rutland@....com>
To: Laura Abbott <labbott@...hat.com>
Cc: Chen Feng <puck.chen@...ilicon.com>, catalin.marinas@....com,
akpm@...ux-foundation.org, mhocko@...e.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, xuyiping@...ilicon.com,
suzhuangluan@...ilicon.com, saberlily.xia@...ilicon.com,
dan.zhao@...ilicon.com, linux-arm-kernel@...ts.infradead.org
Subject: Re: Delete flush cache all in arm64 platform.
On Mon, Mar 21, 2016 at 08:58:02AM -0700, Laura Abbott wrote:
> On 03/21/2016 03:08 AM, Mark Rutland wrote:
> >On Mon, Mar 21, 2016 at 04:07:47PM +0800, Chen Feng wrote:
> >>But if we use VAs to flush the cache for coherency with another
> >>master (e.g. a GPU), we must iterate over the sg-list and flush
> >>each entry by VA.
> >>
> >>Iterating over the sg-list (sg-table to sg-list) may cost too much
> >>time if the list is long. Take a look at ion_pages_sync_for_device
> >>in ION.
> >>
> >>A driver (e.g. ION) needs this interface (flush cache all) to
> >>*improve efficiency*.
> >I'm not sure what to suggest regarding improving efficiency.
> >
> >Is walking the sglist the expensive portion, or is the problem the cost
> >of multiple page-size operations (each with their own barriers)?
>
> Last time I looked at this, it was mostly the multiple page-size operations.
We may be able to amortize some of that cost if we performed
non-synchronised cache maintenance operations on each page and then
followed up with a single, final DSB SY.
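As a rough sketch of that idea (these helpers do not exist in the
kernel; dcache_clean_inval_nosync(), dcache_sync_all_queued() and the
fixed 64-byte line size are all assumptions for illustration):

#include <linux/kernel.h>

#define L1_CACHE_LINE	64	/* assumption: 64-byte lines; real code would read CTR_EL0 */

/*
 * Clean+invalidate a VA range by cache line *without* a trailing
 * barrier, so a caller can batch many ranges and sync once.
 */
static void dcache_clean_inval_nosync(unsigned long start, unsigned long end)
{
	unsigned long addr;

	for (addr = round_down(start, L1_CACHE_LINE); addr < end;
	     addr += L1_CACHE_LINE)
		asm volatile("dc civac, %0" : : "r" (addr) : "memory");
	/* deliberately no DSB here; the caller syncs once when done */
}

/* Complete all previously issued maintenance with a single barrier. */
static void dcache_sync_all_queued(void)
{
	asm volatile("dsb sy" : : : "memory");
}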
There are several places in arch/arm64/mm/dma-mapping.c (practically
every use of for_each_sg) that could potentially benefit. I'm not sure
how much that's likely to gain, though, as it will depend heavily on
the microarchitecture.
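For example (a sketch only, using the hypothetical helpers above;
for_each_sg() and sg_virt() are the usual scatterlist helpers), one of
those walks could batch the per-entry maintenance and pay for a single
barrier:

#include <linux/scatterlist.h>

static void __sync_sg_batched(struct scatterlist *sgl, int nents)
{
	struct scatterlist *sg;
	int i;

	for_each_sg(sgl, sg, nents, i) {
		unsigned long start = (unsigned long)sg_virt(sg);

		/* per-entry maintenance, no barrier yet */
		dcache_clean_inval_nosync(start, start + sg->length);
	}
	dcache_sync_all_queued();	/* one DSB SY for the whole list */
}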
Regardless, it looks like ion_pages_sync_for_device and friends would
need to be reworked, as they seem to hand only single pages down to
the architecture backend rather than a more complete sglist.
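To illustrate that rework (a sketch, not a tested patch; the
ion_buffer layout and this dedicated sync helper are assumed here),
the caller would hand the buffer's complete table to the DMA API in
one call rather than one page at a time:

#include <linux/dma-mapping.h>

static void ion_buffer_sync_for_device(struct device *dev,
				       struct ion_buffer *buffer,
				       enum dma_data_direction dir)
{
	struct sg_table *table = buffer->sg_table;

	/* one call covering the whole sglist, not one per page */
	dma_sync_sg_for_device(dev, table->sgl, table->nents, dir);
}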
Thanks,
Mark.