Message-ID: <DA6901612C71B84D91459DE817C418AE26E0015A@XAP-PVEXMBX01.xlnx.xilinx.com>
Date: Tue, 25 Oct 2016 05:52:49 +0000
From: Naga Sureshkumar Relli <naga.sureshkumar.relli@...inx.com>
To: Christoph Hellwig <hch@...radead.org>,
Richard Weinberger <richard@....at>
CC: "dwmw2@...radead.org" <dwmw2@...radead.org>,
"computersforpeace@...il.com" <computersforpeace@...il.com>,
"dedekind1@...il.com" <dedekind1@...il.com>,
"adrian.hunter@...el.com" <adrian.hunter@...el.com>,
"michal.simek@...inx.com" <michal.simek@...inx.com>,
"Punnaiah Choudary Kalluri" <punnaia@...inx.com>,
"linux-mtd@...ts.infradead.org" <linux-mtd@...ts.infradead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Boris Brezillon <boris.brezillon@...e-electrons.com>
Subject: RE: UBIFS with dma on 4.6 kernel is not working
Hi,
Thanks everybody for your valuable information.
I am not familiar with all of these DMA-related APIs; where should this DMA handling live?
Is it in UBI/UBIFS (at the time of the vmalloc allocations), or in the controller driver?
Also, is there a way to know whether memory allocated with vmalloc is physically contiguous or not?
Based on that, I could switch my driver between DMA and non-DMA mode for UBIFS use.
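[Editor's note: a minimal sketch of the second option (handling it in the controller driver), assuming the driver only wants to DMA to buffers that live in the kernel linear map. is_vmalloc_addr() and virt_addr_valid() are real helpers from <linux/mm.h>; my_nand_buf_dma_safe() and the surrounding call site are hypothetical names for illustration only.]

    #include <linux/mm.h>

    /*
     * Hypothetical helper in the NAND controller driver: decide per
     * transfer whether the buffer can be handed to dma_map_single().
     * UBIFS LEB buffers come from vmalloc, so they are only virtually
     * contiguous and fail these checks; fall back to PIO for them.
     */
    static bool my_nand_buf_dma_safe(const void *buf)
    {
            if (is_vmalloc_addr(buf))       /* vmalloc/vmap memory */
                    return false;
            if (!virt_addr_valid(buf))      /* e.g. highmem or stack */
                    return false;
            return true;
    }

    /* at the transfer call site (sketch): */
    /*
     * if (my_nand_buf_dma_safe(buf))
     *         ... dma_map_single() and start the DMA ...
     * else
     *         ... plain PIO read/write loop ...
     */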
Thanks,
Naga Sureshkumar Relli
-----Original Message-----
From: Christoph Hellwig [mailto:hch@...radead.org]
Sent: Friday, October 21, 2016 6:45 PM
To: Richard Weinberger <richard@....at>
Cc: Christoph Hellwig <hch@...radead.org>; Naga Sureshkumar Relli <nagasure@...inx.com>; dwmw2@...radead.org; computersforpeace@...il.com; dedekind1@...il.com; adrian.hunter@...el.com; michal.simek@...inx.com; Punnaiah Choudary Kalluri <punnaia@...inx.com>; linux-mtd@...ts.infradead.org; linux-kernel@...r.kernel.org; Boris Brezillon <boris.brezillon@...e-electrons.com>
Subject: Re: UBIFS with dma on 4.6 kernel is not working
On Fri, Oct 21, 2016 at 03:07:57PM +0200, Richard Weinberger wrote:
> Hmm, thought this is still problematic on VIVT architectures.
> Boris tried to provide a solution for that some time ago:
> http://www.spinics.net/lists/arm-kernel/msg494025.html
Things have been working fine for roughly 10 years in XFS: call flush_kernel_vmap_range before doing I/O using the physical addresses, then invalidate_kernel_vmap_range when completing the I/O and going back to using the virtual mapping.
Of course, all this assumes that at least the higher level that did the vm_map_ram operation knows about this dance between virtually mapped and physical addresses.
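[Editor's note: a rough sketch of the dance described above, for a buffer that was set up with vm_map_ram() but is handed to the device by its physical pages. flush_kernel_vmap_range() and invalidate_kernel_vmap_range() are the real helpers from <linux/highmem.h>; the I/O submission and completion calls are placeholders, not any particular driver's API.]

    #include <linux/highmem.h>

    /* before the device touches the pages: write back any dirty
     * cache lines that exist under the vmap alias */
    flush_kernel_vmap_range(vaddr, len);

    /* hand the physical pages to the device (hypothetical calls) */
    /* submit_io_on_physical_pages(pages, nr_pages); */
    /* wait_for_io_completion(); */

    /* before the CPU reads the data through the vmap alias again:
     * throw away any stale lines cached under that alias */
    invalidate_kernel_vmap_range(vaddr, len);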