Message-ID: <5e83b0c0-f1c3-399f-f4f5-afb92af7d7ae@atmel.com>
Date: Thu, 2 Mar 2017 17:45:41 +0100
From: Cyrille Pitchen <cyrille.pitchen@...el.com>
To: Boris Brezillon <boris.brezillon@...e-electrons.com>,
Vignesh R <vigneshr@...com>
CC: Frode Isaksen <fisaksen@...libre.com>,
Mark Brown <broonie@...nel.org>,
Richard Weinberger <richard@....at>,
David Woodhouse <dwmw2@...radead.org>,
Brian Norris <computersforpeace@...il.com>,
Marek Vasut <marek.vasut@...il.com>,
<linux-mtd@...ts.infradead.org>, <linux-kernel@...r.kernel.org>,
<linux-omap@...r.kernel.org>, <linux-spi@...r.kernel.org>
Subject: Re: [RFC PATCH 2/2] mtd: devices: m25p80: Enable spi-nor bounce
buffer support
On 02/03/2017 at 15:29, Boris Brezillon wrote:
> On Thu, 2 Mar 2017 19:24:43 +0530
> Vignesh R <vigneshr@...com> wrote:
>
>>>>>>
>>>>> Not really, I am debugging another issue with UBIFS on a DRA74 EVM (ARM
>>>>> Cortex-A15) wherein pages allocated by vmalloc are in the highmem region,
>>>>> are not addressable using 32-bit addresses, and are backed by LPAE.
>>>>> So, a 32-bit DMA cannot access these buffers at all.
>>>>> When dma_map_sg() is called to map these pages by spi_map_buf(), the
>>>>> physical address is simply truncated to 32 bits in pfn_to_dma() (as part
>>>>> of the dma_map_sg() call), dropping the upper LPAE address bits. This
>>>>> results in random crashes as the DMA starts accessing random memory
>>>>> during SPI reads.
>>>>>
>>>>> IMO, there may be more undiscovered caveats with using dma_map_sg() for
>>>>> non-kmalloc'd buffers, and it's better that spi-nor starts handling these
>>>>> buffers itself instead of relying on spi_map_msg() and working around
>>>>> things every time something new pops up.
>>>>>
>>>> Ok, I had a closer look at the SPI framework, and it seems there's a
>>>> way to tell the core that a specific transfer cannot use DMA
>>>> (->can_dma()). The first thing you should do is fix the spi-davinci
>>>> driver:
>>>>
>>>> 1/ implement ->can_dma()
>>>> 2/ patch davinci_spi_bufs() to decide whether to use DMA on a
>>>> per-xfer basis rather than a per-device basis
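>>>>
>>>> For 1/, a rough sketch of what I have in mind (untested, and the
>>>> function name is only illustrative):
>>>>
>>>> static bool davinci_spi_can_dma(struct spi_master *master,
>>>>                                 struct spi_device *spi,
>>>>                                 struct spi_transfer *xfer)
>>>> {
>>>>         /*
>>>>          * Refuse DMA for buffers that dma_map_sg() cannot handle
>>>>          * safely: vmalloc'ed buffers may live in highmem/LPAE space.
>>>>          */
>>>>         if ((xfer->tx_buf && is_vmalloc_addr(xfer->tx_buf)) ||
>>>>             (xfer->rx_buf && is_vmalloc_addr(xfer->rx_buf)))
>>>>                 return false;
>>>>
>>>>         return true;
>>>> }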
>>>>
>>
>> This would lead to poor performance, defeating the entire purpose of using DMA.
>
> Hm, that's not really true. For all cases where you have a DMA-able
> buffer it would still use DMA. For other cases (like the UBI+SPI-NOR
> case we're talking about here), yes, it will be slower, but slower is
> still better than buggy.
> So, in any case, I think the fixes pointed out by Frode are needed.
>
>>
>>>> Then we can start thinking about how to improve performance by using a bounce
>>>> buffer for large transfers, but I'm still not sure this should be done
>>>> at the MTD level...
>>
>> If it's at the SPI level, then I guess each individual driver that cannot
>> handle vmalloc'd buffers will have to implement its own bounce buffer logic.
>
> Well, that's my opinion. The only component that can decide when to do
> PIO, when to use DMA, or when to use a bounce buffer plus DMA is the SPI
> controller driver.
> If you move this logic to the SPI NOR layer, you'll have to guess what
> the best approach is, and I fear the decision will be wrong on some
> platforms (leading to performance degradation).
>
True. For instance, Atmel SAMA5* SoCs don't need this bounce buffer
since their L1 data cache uses a PIPT scheme.
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0433c/CHDFAHBD.html
"""
2.1.4. Data side memory system
Data Cache Unit
The Data Cache Unit (DCU) consists of the following sub-blocks:
The Level 1 (L1) data cache controller, which generates the control
signals for the associated embedded tag, data, and dirty memory (RAMs)
and arbitrates between the different sources requesting access to the
memory resources. The data cache is 4-way set associative and uses a
Physically Indexed Physically Tagged (PIPT) scheme for lookup which
enables unambiguous address management in the system.
"""
So for those SoCs, spi_map_msg() should be safe to handle vmalloc'ed
buffers since they don't have to worry about cache alias issues or
address truncation.
That's why I don't think setting SNOR_F_USE_BOUNCE_BUFFER in *all*
cases in m25p80 is the right solution: it would not be fair to degrade
the performance of some devices when it's not needed, hence not justified.
I still agree with the idea of patch 1, but regarding patch 2: if m25p80
users want to take advantage of this new spi-nor bounce buffer, we have
to agree on a reliable mechanism that clearly tells whether or not
SNOR_F_USE_BOUNCE_BUFFER is to be set from m25p80.
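For instance, something along these lines (purely hypothetical: the
SPI_MASTER_VMALLOC_UNSAFE flag below doesn't exist, it's only meant to
illustrate the kind of mechanism I have in mind):

        /* in m25p_probe(), with a hypothetical master flag */
        if (spi->master->flags & SPI_MASTER_VMALLOC_UNSAFE)
                nor->flags |= SNOR_F_USE_BOUNCE_BUFFER;

That way, only the devices driven by an SPI controller that actually
needs the bounce buffer would pay the performance cost.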
> You're mentioning code duplication in each SPI controller; I agree,
> this is far from ideal, but what you're suggesting is not necessarily
> better. What if another SPI user starts passing vmalloc-ed buffers to
> the SPI controller? You'll have to duplicate the bounce-buffer logic in
> this user as well.
>
>>
>> Or the SPI core can be extended in a way similar to this RFC. That is,
>> the SPI master driver will set a flag to request that the SPI core use a
>> bounce buffer for vmalloc'd buffers. And spi_map_buf() would just use the
>> bounce buffer, based on that flag, whenever the buffer does not belong to
>> the kmalloc region.
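>>
>> Roughly (a hand-wavy sketch; the flag and bounce_buf field are made up):
>>
>>         /* in spi_map_buf(), before building the sg table */
>>         if ((master->flags & SPI_MASTER_BOUNCE_VMALLOC) &&
>>             is_vmalloc_addr(buf)) {
>>                 if (dir == DMA_TO_DEVICE)
>>                         memcpy(master->bounce_buf, buf, len);
>>                 /* for DMA_FROM_DEVICE, copy back after the transfer */
>>                 buf = master->bounce_buf;
>>         }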
>
> That's a better approach IMHO. Note that the decision should not only
> be based on the buffer type, but also on the transfer length and/or
> whether the controller supports transferring non-physically-contiguous
> buffers.
>
> Maybe we should just extend ->can_dma() to let the core know if it
> should use a bounce buffer.
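>
> Just thinking out loud, something like this (none of it exists today):
>
>         enum spi_xfer_mode {
>                 SPI_XFER_PIO,
>                 SPI_XFER_DMA,
>                 SPI_XFER_DMA_BOUNCE,
>         };
>
>         enum spi_xfer_mode (*can_dma)(struct spi_master *master,
>                                       struct spi_device *spi,
>                                       struct spi_transfer *xfer);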
>
> Regarding the bounce buffer allocation logic, I'm not sure how it
> should be done. The SPI user should be able to determine a maximum
> transfer length (at least this is the case for SPI NORs) and inform the
> SPI layer
> about this boundary so that the SPI core can allocate a bounce buffer
> of this size. But we also have limitations at the SPI master level
> (->max_transfer_size(), ->max_message_size()).
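>
> Maybe an invented helper along these lines:
>
>         /* called once by the SPI user (e.g. spi-nor) at setup time */
>         int spi_set_bounce_buf_len(struct spi_device *spi, size_t max_len);
>
> where the core would cap max_len with the master's own limits
> (->max_transfer_size() and friends) before allocating the buffer.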