Message-ID: <341ef45d-bad5-fd7c-aa05-807041c35f42@baylibre.com>
Date:   Thu, 2 Mar 2017 16:03:17 +0100
From:   Frode Isaksen <fisaksen@...libre.com>
To:     Boris Brezillon <boris.brezillon@...e-electrons.com>,
        Vignesh R <vigneshr@...com>
Cc:     Mark Brown <broonie@...nel.org>,
        Cyrille Pitchen <cyrille.pitchen@...el.com>,
        Richard Weinberger <richard@....at>,
        David Woodhouse <dwmw2@...radead.org>,
        Brian Norris <computersforpeace@...il.com>,
        Marek Vasut <marek.vasut@...il.com>,
        linux-mtd@...ts.infradead.org, linux-kernel@...r.kernel.org,
        linux-omap@...r.kernel.org, linux-spi@...r.kernel.org
Subject: Re: [RFC PATCH 2/2] mtd: devices: m25p80: Enable spi-nor bounce
 buffer support



On 02/03/2017 15:29, Boris Brezillon wrote:
> On Thu, 2 Mar 2017 19:24:43 +0530
> Vignesh R <vigneshr@...com> wrote:
>
>>>>>>     
>>>>> Not really, I am debugging another issue with UBIFS on a DRA74 EVM
>>>>> (ARM Cortex-A15) wherein pages allocated by vmalloc are in the
>>>>> highmem region, which is backed by LPAE and not addressable using
>>>>> 32-bit addresses. So, a 32-bit DMA cannot access these buffers at
>>>>> all.
>>>>> When dma_map_sg() is called by spi_map_buf() to map these pages,
>>>>> the physical address is simply truncated to 32 bits in pfn_to_dma()
>>>>> (as part of the dma_map_sg() call). This results in random crashes
>>>>> as the DMA starts accessing random memory during SPI reads.
>>>>>
>>>>> IMO, there may be more undiscovered caveats with using dma_map_sg()
>>>>> for non-kmalloc'd buffers, and it's better that spi-nor starts
>>>>> handling these buffers itself instead of relying on spi_map_msg()
>>>>> and working around it every time something pops up.
>>>>>  
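For reference, the failure mode described above boils down to handing
dma_map_sg() a buffer it cannot map safely. A minimal sketch of the kind
of check involved (the helper name is made up here; is_vmalloc_addr() is
an existing kernel helper):

#include <linux/mm.h>	/* is_vmalloc_addr() */

static bool buf_is_dma_safe(const void *buf)
{
	/* vmalloc'ed buffers may live in highmem and are not guaranteed
	 * to be physically contiguous, so they must go through PIO or a
	 * bounce buffer instead of being passed to dma_map_sg(). */
	return buf && !is_vmalloc_addr(buf);
}
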
>>>> Ok, I had a closer look at the SPI framework, and it seems there's a
>>>> way to tell the core that a specific transfer cannot use DMA
>>>> (->can_dma()). The first thing you should do is fix the spi-davinci
>>>> driver:
>>>>
>>>> 1/ implement ->can_dma()
>>>> 2/ patch davinci_spi_bufs() to decide whether to use DMA on a
>>>>    per-xfer basis rather than a per-device basis
>>>>  
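As an illustration of 1/ above, a per-transfer ->can_dma() for spi-davinci
could look roughly like this (the function name and length threshold are
made up; the callback signature is the one the SPI core already defines):

#include <linux/mm.h>
#include <linux/spi/spi.h>

static bool davinci_spi_can_dma(struct spi_master *master,
				struct spi_device *spi,
				struct spi_transfer *xfer)
{
	/* Decide per transfer: never DMA from/to vmalloc'ed buffers,
	 * and skip DMA for tiny transfers where PIO is cheaper anyway. */
	if ((xfer->tx_buf && is_vmalloc_addr(xfer->tx_buf)) ||
	    (xfer->rx_buf && is_vmalloc_addr(xfer->rx_buf)))
		return false;

	return xfer->len >= 16;	/* arbitrary threshold for illustration */
}
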
>> This would lead to poor performance, defeating the entire purpose of
>> using DMA.
> Hm, that's not really true. For all cases where you have a DMA-able
> buffer, it would still use DMA. For other cases (like the UBI+SPI-NOR
> case we're talking about here), yes, it will be slower, but slower is
> still better than buggy.
> So, in any case, I think the fixes pointed out by Frode are needed.
Also, I think the UBIFS layer only uses vmalloc'ed buffers during
mount/unmount and not for read/write, so the performance hit is not that
big. In most cases the buffer is the size of the erase block, but I've
seen vmalloc'ed buffers of only 11 bytes! So, to optimize this, the best
solution is probably to change how the UBIFS layer uses vmalloc'ed vs
kmalloc'ed buffers, since vmalloc should only be used for large (> 128K)
buffers.
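A minimal illustration of that size-based allocation policy (the helper
is hypothetical, and the 128K cut-off is just the one mentioned above):

#include <linux/slab.h>
#include <linux/vmalloc.h>

static void *ubifs_alloc_io_buf(size_t len)
{
	/* Small buffers: kmalloc gives physically contiguous, DMA-able
	 * memory. Only fall back to vmalloc for buffers too large to
	 * kmalloc reliably. */
	if (len <= 128 * 1024)
		return kmalloc(len, GFP_KERNEL);
	return vmalloc(len);
}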

Frode
>
>>>> Then we can start thinking about how to improve perfs by using a bounce
>>>> buffer for large transfers, but I'm still not sure this should be done
>>>> at the MTD level...  
>> If it's at the SPI level, then I guess each individual driver that
>> cannot handle vmalloc'd buffers will have to implement the bounce
>> buffer logic.
> Well, that's my opinion. The only one that can decide when to do PIO,
> when to use DMA, or when to use a bounce buffer plus DMA is the SPI
> controller.
> If you move this logic to the SPI NOR layer, you'll have to guess what
> the best approach is, and I fear the decision will be wrong on some
> platforms (leading to performance degradation).
>
> You're mentioning code duplication in each SPI controller; I agree
> this is far from ideal, but what you're suggesting is not necessarily
> better. What if another SPI user starts passing vmalloc'ed buffers to
> the SPI controller? You'll have to duplicate the bounce-buffer logic in
> this user as well.
>
>> Or the SPI core can be extended in a way similar to this RFC. That is,
>> the SPI master driver sets a flag requesting the SPI core to use a
>> bounce buffer for vmalloc'd buffers, and spi_map_buf() just uses the
>> bounce buffer, based on that flag, whenever the buf does not belong to
>> the kmalloc region.
> That's a better approach IMHO. Note that the decision should not only
> be based on the buffer type, but also on the transfer length and/or
> whether the controller supports transferring non physically contiguous
> buffers.
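To make that concrete, the core-side decision could look roughly like
this (the flag name and the length threshold are invented for
illustration; is_vmalloc_addr() and master->flags exist today):

#include <linux/mm.h>
#include <linux/spi/spi.h>

static bool spi_xfer_needs_bounce(struct spi_master *master,
				  struct spi_transfer *xfer)
{
	/* Only bounce if the controller asked for it... */
	if (!(master->flags & SPI_MASTER_BOUNCE_VMALLOC)) /* hypothetical */
		return false;

	/* ...and the transfer is large enough for DMA to pay off... */
	if (xfer->len < PAGE_SIZE)
		return false;

	/* ...and the buffer cannot be DMA-mapped directly. */
	return (xfer->tx_buf && is_vmalloc_addr(xfer->tx_buf)) ||
	       (xfer->rx_buf && is_vmalloc_addr(xfer->rx_buf));
}
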
>
> Maybe we should just extend ->can_dma() to let the core know if it
> should use a bounce buffer.
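One possible shape for such an extension (purely illustrative, not an
existing kernel API) would be a three-way return value instead of a bool:

enum spi_xfer_mode {
	SPI_XFER_PIO,		/* core should not map the buffer */
	SPI_XFER_DMA,		/* buffer can be DMA-mapped as-is */
	SPI_XFER_DMA_BOUNCE,	/* core should copy through a bounce buffer */
};
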
>
> Regarding the bounce buffer allocation logic, I'm not sure how it
> should be done. The SPI user should be able to determine a max transfer
> len (at least this is the case for SPI NORs) and inform the SPI layer
> about this boundary so that the SPI core can allocate a bounce buffer
> of this size. But we also have limitations at the SPI master level
> (->max_transfer_size(), ->max_message_size()).
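As a rough sketch of that sizing rule (the max-length hint is a made-up
parameter; spi_max_transfer_size() and devm_kmalloc() are existing
helpers), the bounce buffer would be capped at both the user's hint and
the controller's limit:

#include <linux/device.h>
#include <linux/spi/spi.h>

static void *alloc_bounce_buf(struct spi_device *spi,
			      size_t user_max_len, size_t *bounce_len)
{
	/* Respect both the SPI user's hint (e.g. from the SPI NOR layer)
	 * and the master-level transfer size limit. */
	*bounce_len = min_t(size_t, user_max_len,
			    spi_max_transfer_size(spi));
	return devm_kmalloc(&spi->dev, *bounce_len, GFP_KERNEL);
}
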
>
>
