Date:   Mon, 28 May 2018 18:09:15 +0200
From:   Nicolas Ferre <nicolas.ferre@...rochip.com>
To:     Peter Rosin <peda@...ntia.se>,
        Boris Brezillon <boris.brezillon@...tlin.com>
CC:     Tudor Ambarus <tudor.ambarus@...rochip.com>,
        Ludovic Desroches <ludovic.desroches@...rochip.com>,
        Alexandre Belloni <alexandre.belloni@...tlin.com>,
        Marek Vasut <marek.vasut@...il.com>,
        Josh Wu <rainyfeeling@...look.com>,
        Cyrille Pitchen <cyrille.pitchen@...ev4u.fr>,
        <linux-kernel@...r.kernel.org>, <linux-mtd@...ts.infradead.org>,
        Richard Weinberger <richard@....at>,
        Brian Norris <computersforpeace@...il.com>,
        David Woodhouse <dwmw2@...radead.org>,
        <linux-arm-kernel@...ts.infradead.org>,
        Eugen Hristev <eugen.hristev@...rochip.com>
Subject: Re: [PATCH] mtd: nand: raw: atmel: add module param to avoid using dma

On 28/05/2018 at 17:52, Peter Rosin wrote:
> On 2018-05-28 16:27, Boris Brezillon wrote:

[..]

>> Could it just be that you're reaching the DDR bus limit? As I said
>> previously, when you go through the CPU, and assuming you're consuming
>> the data directly, you have:
>>
>> 1/ NFC SRAM -> CPU
>> 2/ CPU -> L1 data cache --write-back--> DRAM
>> 3/ L1-cache -> CPU
>>
>> While, if you use DMA you get:
>>
>> 1/ NFC SRAM -> DRAM
>> 2/ DRAM -> L1 data cache -> CPU
>>
>> So, if you're approaching the limit of (LP)DDR bandwidth, using the CPU
>> might make things a bit better. Still, if the limitation really comes
>> from the DDR bus, my opinion is that you should maybe use a smaller
>> resolution or use a more compact pixel format (RGB565?).
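
Quantifying the above with a rough sketch (the 4 KiB page size is just an
assumed example; the point is that, in this model, the DMA path crosses
the DDR bus twice per byte while the CPU path crosses it once at most):

#include <stdio.h>

int main(void)
{
	const unsigned int page = 4096;	/* bytes per NAND page (assumed) */

	/* CPU path: data is consumed from L1 and written back to DRAM once. */
	unsigned int cpu_ddr_bytes = page;

	/* DMA path: one DMA write to DRAM plus one CPU read back through L1. */
	unsigned int dma_ddr_bytes = 2 * page;

	printf("DDR traffic per page: CPU %u B, DMA %u B\n",
	       cpu_ddr_bytes, dma_ddr_bytes);
	return 0;
}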
> 
> The issue is still there if I use a CLUT mode instead of rgb565. CLUT is
> what I normally use (and what I would like to keep using, even though
> CLUT is alien and a pain these days).
> 
> The panels we are using each support only one resolution, but the issue
> is there with both 1920x1080@...pp and 1024x768@...p (~60Hz).
> 
>> Did you calculate how much of the bandwidth is taken by the HLCDC
>> block and compared it to the max (LP)DDR bandwidth?
> 
> I did, but don't remember the exact details. There is some room even for
> 1920x1080@...pp, but not oceans of it. We were a bit uncertain if 16bpp
> would be possible, and in fact that was the reason I worked on CLUT
> support for atmel-hlcdc last year. But since the problem persists with
> much less memory pressure as well, I don't think that's it either.

Just jumping into the conversation with another perspective (maybe 
already tried, actually).

Can you try to do everything you can to maximize the blanking period of 
your screen? (Some panels are more tolerant than others in that respect.) 
By doing so, you would allow the LCD FIFO to recover better after each 
line. You might lose some columns on the side of your display, but it 
would give us a good idea of how far we are from getting rid of those 
annoying LCD reset glitches (which are due to underruns on the LCD FIFO).
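
To make this concrete, here is a purely illustrative timing tweak for a
1024x768 panel. The porch and clock numbers below are made up; a real
panel's datasheet dictates how far the blanking can actually stretch
before it drops sync:

#include <drm/drm_modes.h>

/* Hypothetical mode with enlarged horizontal blanking so the LCD FIFO
 * gets more time to refill between lines. The pixel clock is raised to
 * keep the refresh rate near 60 Hz: 1540 * 806 * 60 ~= 74.5 MHz. */
static const struct drm_display_mode mode_wide_hblank = {
	.clock = 74500,				/* kHz */
	.hdisplay = 1024,
	.hsync_start = 1024 + 120,		/* grown front porch */
	.hsync_end = 1024 + 120 + 136,
	.htotal = 1024 + 120 + 136 + 260,	/* grown back porch */
	.vdisplay = 768,
	.vsync_start = 768 + 3,
	.vsync_end = 768 + 3 + 6,
	.vtotal = 768 + 3 + 6 + 29,
};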

> Admittedly I have not tested these AHB matrix tricks with a smaller
> panel (it would take a bit of work to arrange for those tests), but the
> issue was there when I last tried (using defaults).

If what I said earlier has an impact, reducing the panel size will also 
make a difference. But the question is how small is "smaller" ;-)
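
Coming back to the bandwidth question above, a back-of-envelope check
(the refresh rate and pixel depth here are assumptions, since the exact
figures in this thread are obfuscated in the archive):

#include <stdio.h>

int main(void)
{
	/* Assumed figures: 1920x1080, 16 bpp (2 bytes/pixel), 60 Hz. */
	const unsigned long long w = 1920, h = 1080;
	const unsigned long long bytes_pp = 2;
	const unsigned long long hz = 60;

	/* Sustained scanout reads, ignoring blanking overhead. */
	unsigned long long bps = w * h * bytes_pp * hz;

	printf("scanout: ~%llu MB/s of DDR reads\n", bps / 1000000);
	return 0;
}

For those assumed figures that is roughly 250 MB/s of sustained reads
the HLCDC needs on top of CPU and NAND traffic, so not much DDR headroom
has to disappear before the FIFO underruns.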

Best regards,
-- 
Nicolas Ferre
