Message-ID: <20200529081204.e2j5unvvfikr2y7v@mobilestation>
Date:   Fri, 29 May 2020 11:12:04 +0300
From:   Serge Semin <Sergey.Semin@...kalelectronics.ru>
To:     Andy Shevchenko <andy.shevchenko@...il.com>
CC:     Serge Semin <fancer.lancer@...il.com>,
        Mark Brown <broonie@...nel.org>,
        Georgy Vlasov <Georgy.Vlasov@...kalelectronics.ru>,
        Ramil Zaripov <Ramil.Zaripov@...kalelectronics.ru>,
        Alexey Malahov <Alexey.Malahov@...kalelectronics.ru>,
        Thomas Bogendoerfer <tsbogend@...ha.franken.de>,
        Arnd Bergmann <arnd@...db.de>, Feng Tang <feng.tang@...el.com>,
        Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
        Rob Herring <robh+dt@...nel.org>, <linux-mips@...r.kernel.org>,
        devicetree <devicetree@...r.kernel.org>,
        linux-spi <linux-spi@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v5 03/16] spi: dw: Locally wait for the DMA transactions
 completion

On Fri, May 29, 2020 at 10:55:32AM +0300, Andy Shevchenko wrote:
> On Fri, May 29, 2020 at 7:02 AM Serge Semin
> <Sergey.Semin@...kalelectronics.ru> wrote:
> >
> > Even if the DMA transactions are finished, it doesn't mean that the SPI
> > transfers are also completed. That specifically concerns Tx-only SPI
> > transfers, since there might still be data left in the SPI Tx FIFO after
> > the DMA engine notifies that the Tx DMA procedure is done. In order to
> > completely fix the problem, the driver first has to wait for the DMA
> > transaction to complete, and then for the corresponding SPI operations
> > to be finished. In this commit we implement the former part of the
> > solution.
> >
> > Note we can't just move the SPI-operations wait procedure into the DMA
> > completion callbacks, since those callbacks might be executed in the
> > tasklet context (and they will be in the case of the DW DMA). With a
> > slow SPI bus that could cause a significant system performance drop.
> 
> I've read the commit message and I've read the code. What's going on here?
> You've repeated xfer_completion (and its wait routine) from the SPI core,
> and I'm wondering what happened to it. Why are we not calling
> spi_finalize_current_transfer()?

We discussed that in v4. You complained about using ndelay() for a slow SPI
bus, since it may keep the atomic context busy for too long. We agreed. Since
we can't wait in the tasklet context, and a dedicated kernel thread just for
the waiting would be too much, Mark and I agreed that, even though it costs us
a local re-implementation of the wait function, the best approach is not to
use the generic spi_transfer_wait() method but to wait for the DMA
transactions locally in the DMA driver and simply return 0 from the
transfer_one callback, indicating that the SPI transfer is finished and there
is no need for the SPI core to wait. That's what a lot of DMA-based SPI
drivers do.
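
Roughly, the pattern looks like this (just a sketch of the idea for clarity,
not the actual driver code; the body and the comments are mine):

  /* Called from the transfer_one() path when DMA is used for the transfer. */
  static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
  {
          int ret;

          /*
           * The Tx/Rx DMA descriptors are prepared, submitted and issued
           * before this point.
           */

          /* Wait for the DMA transactions locally, not in the SPI core. */
          ret = dw_spi_dma_wait(dws, xfer);
          if (ret)
                  return ret;

          /*
           * Returning 0 tells the SPI core the transfer is already finished,
           * so spi_transfer_wait() is never entered and
           * spi_finalize_current_transfer() isn't needed. Returning a
           * positive value would make the core wait for the finalization
           * instead.
           */
          return 0;
  }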

If you don't understand what the commit message says, just say so. I'll
reformulate it.

-Sergey

> 
> ...
> 
> >         dws->master->cur_msg->status = -EIO;
> > -       spi_finalize_current_transfer(dws->master);
> > +       complete(&dws->dma_completion);
> >         return IRQ_HANDLED;
> 
> ...
> 
> > +static int dw_spi_dma_wait(struct dw_spi *dws, struct spi_transfer *xfer)
> > +{
> > +       unsigned long long ms;
> > +
> > +       ms = xfer->len * MSEC_PER_SEC * BITS_PER_BYTE;
> > +       do_div(ms, xfer->effective_speed_hz);
> > +       ms += ms + 200;
> > +
> > +       if (ms > UINT_MAX)
> > +               ms = UINT_MAX;
> > +
> > +       ms = wait_for_completion_timeout(&dws->dma_completion,
> > +                                        msecs_to_jiffies(ms));
> > +
> > +       if (ms == 0) {
> > +               dev_err(&dws->master->cur_msg->spi->dev,
> > +                       "DMA transaction timed out\n");
> > +               return -ETIMEDOUT;
> > +       }
> > +
> > +       return 0;
> > +}
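
As a quick sanity check of the timeout estimate above (my numbers, just an
example): for a 4096-byte transfer at an effective 1 MHz clock

  ms  = 4096 * MSEC_PER_SEC * BITS_PER_BYTE / 1000000 = 32    (integer division)
  ms += ms + 200                                       = 264   ms

i.e. twice the expected transfer time plus a 200 ms margin before the wait
times out.
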
> > +
> >  /*
> >   * dws->dma_chan_busy is set before the dma transfer starts, callback for tx
> >   * channel will clear a corresponding bit.
> > @@ -155,7 +184,7 @@ static void dw_spi_dma_tx_done(void *arg)
> >                 return;
> >
> >         dw_writel(dws, DW_SPI_DMACR, 0);
> > -       spi_finalize_current_transfer(dws->master);
> > +       complete(&dws->dma_completion);
> >  }
> >
> >  static struct dma_async_tx_descriptor *dw_spi_dma_prepare_tx(struct dw_spi *dws,
> > @@ -204,7 +233,7 @@ static void dw_spi_dma_rx_done(void *arg)
> >                 return;
> >
> >         dw_writel(dws, DW_SPI_DMACR, 0);
> > -       spi_finalize_current_transfer(dws->master);
> > +       complete(&dws->dma_completion);
> >  }
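
One more note on the completion usage: both the Tx and the Rx done callbacks
signal the same dws->dma_completion, so the completion presumably gets
initialized once when the DMA channels are brought up and re-armed before
each DMA-mapped transfer, roughly like this (a sketch; the exact spots are
not in the quoted hunks):

  /* Once, when the DMA channels are initialized. */
  init_completion(&dws->dma_completion);

  /* Before every DMA-mapped SPI transfer is started. */
  reinit_completion(&dws->dma_completion);
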
> 
> 
> -- 
> With Best Regards,
> Andy Shevchenko
