Message-ID: <3de2ca29-120b-b7a6-da55-47fe5e056f73@nod.at>
Date: Mon, 24 Oct 2016 09:08:13 +0200
From: Richard Weinberger <richard@....at>
To: Christoph Hellwig <hch@...radead.org>
Cc: Naga Sureshkumar Relli <naga.sureshkumar.relli@...inx.com>,
"dwmw2@...radead.org" <dwmw2@...radead.org>,
"computersforpeace@...il.com" <computersforpeace@...il.com>,
"dedekind1@...il.com" <dedekind1@...il.com>,
"adrian.hunter@...el.com" <adrian.hunter@...el.com>,
"michal.simek@...inx.com" <michal.simek@...inx.com>,
Punnaiah Choudary Kalluri <punnaia@...inx.com>,
"linux-mtd@...ts.infradead.org" <linux-mtd@...ts.infradead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Boris Brezillon <boris.brezillon@...e-electrons.com>
Subject: Re: UBIFS with dma on 4.6 kernel is not working

Christoph,
On 21.10.2016 15:15, Christoph Hellwig wrote:
> On Fri, Oct 21, 2016 at 03:07:57PM +0200, Richard Weinberger wrote:
>> Hmm, thought this is still problematic on VIVT architectures.
>> Boris tried to provide a solution for that some time ago:
>> http://www.spinics.net/lists/arm-kernel/msg494025.html
>
> Things have been working fine for approximately 10 years in XFS when using
> flush_kernel_vmap_range before doing I/O using the physical addresses and
> then invalidate_kernel_vmap_range when completing the I/O and going back
> to using the virtual mapping.
>
> Of course all this assumes that at least the higher level that did the
> vm_map_ram operation knows about this dance between virtually mapped and
> physical addresses.
Good to know, I was clearly wrong.
Let's see whether the cost of flush_kernel_vmap_range and invalidate_kernel_vmap_range
is smaller than the speedup gained by DMA on embedded platforms.
We'll have to test it.
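
For reference, the dance would roughly look like this (a minimal sketch, not
the actual XFS or UBIFS code; do_dma_using_phys() is a hypothetical placeholder
for the driver-side transfer done against the pages' physical addresses):

#include <linux/highmem.h>

/* Hypothetical: driver performs the DMA via the pages' physical addresses. */
extern void do_dma_using_phys(void);

/* buf was set up earlier with vm_map_ram() over the buffer's pages. */
static void dma_roundtrip(void *buf, int len)
{
	/*
	 * Write back any dirty cache lines of the vmap alias so the
	 * device sees up-to-date data when it accesses the buffer
	 * through the physical addresses.
	 */
	flush_kernel_vmap_range(buf, len);

	do_dma_using_phys();

	/*
	 * Throw away the now-stale cache lines of the vmap alias
	 * before the CPU reads the DMA'd data through the virtual
	 * mapping again.
	 */
	invalidate_kernel_vmap_range(buf, len);
}

On architectures without the aliasing problem both calls are no-op stubs, so
the overhead should only show up where it is actually needed.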
Thanks,
//richard