Message-ID: <519a83bf-1244-5151-b873-bb8e1f2db3c6@nod.at>
Date: Fri, 21 Oct 2016 11:29:16 +0200
From: Richard Weinberger <richard@....at>
To: Naga Sureshkumar Relli <naga.sureshkumar.relli@...inx.com>,
"dwmw2@...radead.org" <dwmw2@...radead.org>,
"computersforpeace@...il.com" <computersforpeace@...il.com>,
"dedekind1@...il.com" <dedekind1@...il.com>,
"adrian.hunter@...el.com" <adrian.hunter@...el.com>,
"michal.simek@...inx.com" <michal.simek@...inx.com>,
Punnaiah Choudary Kalluri <punnaia@...inx.com>
Cc: "linux-mtd@...ts.infradead.org" <linux-mtd@...ts.infradead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: UBIFS with dma on 4.6 kernel is not working
Hi!
On 21.10.2016 11:21, Naga Sureshkumar Relli wrote:
> Hi,
>
> This is regarding UBIFS on 4.6 kernel.
> We have tested UBIFS on our ZynqMP SoC QSPI controller; UBIFS is not working with DMA on this kernel.
> Controller driver: https://github.com/torvalds/linux/commits/master/drivers/spi/spi-zynqmp-gqspi.c
> If I replace all vmalloc allocations in fs/ubifs/ with kmalloc, then UBIFS works fine with DMA.
No, it will sooner or later OOM. Both UBI and UBIFS need rather large buffers; that's why we have to use
vmalloc().
> But on kernels before 4.6, without changing vmalloc to kmalloc, UBIFS works fine with DMA.
> So is there any DMA-related change in UBIFS in the 4.6 kernel?
I'm not aware of such a change.
Do you see this with vanilla kernels? Maybe some other internal stuff has changed.
git bisect can help.
DMA to vmalloc'ed memory is not safe: vmalloc buffers are only virtually contiguous, not physically
contiguous, so a transfer may work by chance if it stays under PAGE_SIZE. On ARM, cache aliasing
makes it even more fragile.
> May I know some info regarding this?
> Why is UBIFS working with DMA on kernels before 4.6 but not on 4.6?
> Nowadays, most QSPI controllers have internal DMAs.
>
> Could you please provide some info regarding this DMA issue?
> We can change our controller driver to operate in I/O mode (which doesn't use DMA), but performance-wise that's not preferred.
Most MTD drivers use a bounce buffer.
How much does your performance degrade?
Thanks,
//richard