Message-ID: <Zu2O5wWGyhRFkBnO@smile.fi.intel.com>
Date: Fri, 20 Sep 2024 18:04:07 +0300
From: Andy Shevchenko <andriy.shevchenko@...ux.intel.com>
To: Serge Semin <fancer.lancer@...il.com>
Cc: Hans de Goede <hdegoede@...hat.com>, Viresh Kumar <vireshk@...nel.org>,
	Vinod Koul <vkoul@...nel.org>,
	Ilpo Järvinen <ilpo.jarvinen@...ux.intel.com>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	Jiri Slaby <jirislaby@...nel.org>, dmaengine@...r.kernel.org,
	linux-serial@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/2] dmaengine: dw: Fix sys freeze and XFER-bit set error
 for UARTs

On Fri, Sep 20, 2024 at 05:56:23PM +0300, Serge Semin wrote:
> On Fri, Sep 20, 2024 at 05:24:37PM +0300, Andy Shevchenko wrote:
> > On Fri, Sep 20, 2024 at 12:33:51PM +0300, Serge Semin wrote:
> > > On Mon, Sep 16, 2024 at 04:01:08PM +0300, Andy Shevchenko wrote:

...

> > > There is another problem, caused by the too-slow coherent memory
> > > IO on my device. Because of that, the data gets copied too slowly
> > > in the __dma_rx_complete()->tty_insert_flip_string() call. As a
> > > result, fast incoming traffic overflows the DW UART inbound FIFO.
> > > But that can be worked around by decreasing the Rx DMA-buffer
> > > size. (There are some more generic fixes possible, but they
> > > haven't proven to be as effective as the buffer-size reduction.)
> 
> > This sounds like a specific quirk for a specific platform. If you
> > are going to address it, make sure it does not become generic.
> 
> Of course, reducing the buffer size is a platform-specific quirk.
> 
> A more generic fix could be to convert the DMA buffer to be
> allocated from DMA-noncoherent memory _if_ the DMA performed by the
> DW DMA device is non-coherent anyway. In that case the DMA-coherent
> memory buffer is normally allocated from the non-cacheable memory
> pool, access to which is very slow even on Intel/AMD devices. So
> using a cacheable buffer for DMA, manually invalidating the cache
> for it before DMA IOs and prefetching the data afterwards, seemed
> like a more universal solution. But my tests showed that such an
> approach doesn't fully solve the problem on my device. That said,
> that approach allowed data-safe UART transfers at up to 460 kbit/s,
> while simply reducing the buffer from 16K to 512 bytes allowed up
> to 2.0 Mbaud. It's still not enough, since the device is capable of
> working at 3 Mbit/s, but it's better than 460 kbaud.

Ah, interesting issue.  Good luck with solving it the best way you can.
And yes, you're right that 2M support is better than 0.5M.
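For reference, a rough sketch of the non-coherent-buffer approach
described above, using the generic DMA API. This is an illustration
only, not the actual patch: the buffer size, device pointer, and error
handling are placeholders.

```c
/* Sketch, assuming the DW DMA engine is non-coherent on this platform:
 * allocate the Rx buffer from cacheable memory and do explicit cache
 * maintenance around each transfer, instead of dma_alloc_coherent(). */
#include <linux/dma-mapping.h>

#define RX_BUF_SIZE	512	/* reduced from 16K, per the discussion */

void *buf;
dma_addr_t dma_addr;

buf = dma_alloc_noncoherent(dev, RX_BUF_SIZE, &dma_addr,
			    DMA_FROM_DEVICE, GFP_KERNEL);

/* Before starting the Rx DMA transfer: hand the buffer to the device
 * (invalidates/cleans the CPU cache lines covering it). */
dma_sync_single_for_device(dev, dma_addr, RX_BUF_SIZE, DMA_FROM_DEVICE);

/* ... DMA transfer completes ... */

/* Before the CPU reads the data (the tty_insert_flip_string() copy):
 * hand the buffer back to the CPU. */
dma_sync_single_for_cpu(dev, dma_addr, RX_BUF_SIZE, DMA_FROM_DEVICE);
```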

-- 
With Best Regards,
Andy Shevchenko


