Message-ID: <1551479201.6059.23.camel@impinj.com>
Date:   Fri, 1 Mar 2019 22:26:41 +0000
From:   Trent Piepho <tpiepho@...inj.com>
To:     "s.hauer@...gutronix.de" <s.hauer@...gutronix.de>,
        "jiada_wang@...tor.com" <jiada_wang@...tor.com>,
        "fabio.estevam@....com" <fabio.estevam@....com>,
        "broonie@...nel.org" <broonie@...nel.org>
CC:     "linux-spi@...r.kernel.org" <linux-spi@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "stefan@...er.ch" <stefan@...er.ch>
Subject: Re: Bug in spi: imx: Add support for SPI Slave mode

On Wed, 2019-02-27 at 10:55 +0900, Jiada Wang wrote:
> Hi Trent
> 
> Thanks for reporting

Thanks for the information, it was very helpful!

> 
> in the commit message of commit ("spi: imx: Add support for SPI Slave 
> mode"), it mentions
> 
> "The stale data in RXFIFO will be dropped when the Slave does any new 
> transfer"

Sorry, somehow I totally missed that.

> 
> > If it's necessary to empty the fifo before an xfer, then how did this
> > driver ever work before this change?  I see no change anywhere else
> > that would make this a new requirement.
> > 
> 
> only slave mode needs to flush stale data in RXFIFO,

Ok, so maybe I can move this into a slave-mode-only path.  That could
avoid the check in master mode, if it's not necessary there.

Of course, that doesn't really fix my underlying "dma then pio"
problem; even if it avoids corrupting memory, there is still data
in the RX fifo where there should not be.
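
Something like this is what I had in mind, i.e. keep the existing flush
loop but only run it on the slave path.  Just a sketch from my reading
of spi-imx.c, not tested; the slave_mode flag and the rx_available()
devtype callback names are as I remember them and may be off:

	/* Flush stale data out of RXFIFO, but only when running as a slave. */
	if (spi_imx->slave_mode) {
		while (spi_imx->devtype_data->rx_available(spi_imx))
			spi_imx->rx(spi_imx);
	}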

> Currently I am not able to test, and this patch was created several
> years ago, but as I recall, the purpose is to flush stale data in RXFIFO
> before spi->rx_buf is set.  But it seems there is a bug: after the first
> xfer, rx_buf will always point somewhere, which can lead to memory corruption.
> 
> > In my test, switching from DMA to PIO (which happens if the 1st xfer is
> > large enough to pass a >fifo/2 size test and uses DMA, and the 2nd
> > xfer is smaller and will use PIO) results in the PIO xfer trying to
> > empty the rx buffer in this code above.  If there is not enough space
> > in the xfer's rx buffer, and there is no reason for there to be any
> > space at all, it clobbers memory.
> > 
> > I suspect the author of this wasn't aware that spi_imx->rx() will write
> > the data received into the current xfer's rx buffer rather than throw
> > it away, and never tested this actually getting used for a transfer
> > where the rx data mattered.
> > 
> 
> Yes, you are right.
> As I don't have access to HW, can you please submit a fix
> (for example, reset rx_buf after each xfer)?
> > Still, I'd like to know why the flush rx thing is even here.  Nothing
> > in the commit message or any discussion I could find addressed this.
> > 
> 
> the master side may xfer more data than the slave side is expecting;
> the extra data received in RXFIFO needs to be dropped before a new xfer

Yes, that makes a lot of sense and explains why this would be needed.
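
If the flush really is needed, though, it probably shouldn't go through
spi_imx->rx() at all, since that writes into whatever rx_buf currently
points at.  Reading the data register directly and discarding the value
would drop the stale words without ever touching rx_buf.  Untested
sketch; I'm going off my reading of spi-imx.c for the rx_available()
callback and the MXC_CSPIRXDATA offset:

	/*
	 * Drop stale words left in RXFIFO by the master: read the data
	 * register directly and discard the value, so rx_buf is never
	 * written and can't be overrun.
	 */
	while (spi_imx->devtype_data->rx_available(spi_imx))
		readl(spi_imx->base + MXC_CSPIRXDATA);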

I wonder if putting the flush right after the xfer might be too early.
If the master is sending more data than expected, how do we know it has
stopped by the time we do the post-xfer flush?  Perhaps the master sends
more after the flush is finished.  Is there some point at which further
data into the RX fifo is disabled?

Unfortunately, I don't have hardware to test slave mode.  Or even
master mode RX, as my hardware is TX only.  The imx spi is just
receiving zeros to put into a dummy buffer provided by the spi core.  I
can stop it from overflowing the buffer, but I wouldn't know if the data
that ends up in it is correct.
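
As for resetting rx_buf after each xfer, as you suggest, I'd try
something along these lines (again only a sketch from my reading of
spi_imx_data, not tested):

	/*
	 * After a completed transfer, forget the caller's rx buffer so a
	 * later RXFIFO flush has nowhere to scribble.  This assumes the
	 * rx helpers check for a NULL rx_buf before writing through it.
	 */
	spi_imx->rx_buf = NULL;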
