Message-ID: <20240213173110.00007855@Huawei.com>
Date: Tue, 13 Feb 2024 17:31:10 +0000
From: Jonathan Cameron <Jonathan.Cameron@...wei.com>
To: Nuno Sá <noname.nuno@...il.com>
CC: David Lechner <dlechner@...libre.com>, Mark Brown <broonie@...nel.org>,
Martin Sperl <kernel@...tin.sperl.org>, David Jander <david@...tonic.nl>,
Jonathan Cameron <jic23@...nel.org>, Michael Hennerich
<michael.hennerich@...log.com>, Nuno Sá
<nuno.sa@...log.com>, Alain Volmat <alain.volmat@...s.st.com>, "Maxime
Coquelin" <mcoquelin.stm32@...il.com>, Alexandre Torgue
<alexandre.torgue@...s.st.com>, <linux-spi@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <linux-stm32@...md-mailman.stormreply.com>,
<linux-arm-kernel@...ts.infradead.org>, <linux-iio@...r.kernel.org>
Subject: Re: [PATCH 5/5] iio: adc: ad7380: use spi_optimize_message()
On Tue, 13 Feb 2024 17:08:19 +0100
Nuno Sá <noname.nuno@...il.com> wrote:
> On Tue, 2024-02-13 at 09:27 -0600, David Lechner wrote:
> > > On Tue, Feb 13, 2024 at 3:47 AM Nuno Sá <noname.nuno@...il.com> wrote:
> > >
> > > On Mon, 2024-02-12 at 17:26 -0600, David Lechner wrote:
> > > > This modifies the ad7380 ADC driver to use spi_optimize_message() to
> > > > optimize the SPI message for the buffered read operation. Since buffered
> > > > reads reuse the same SPI message for each read, this can improve
> > > > performance by reducing the overhead of setting up some parts of the SPI
> > > > message in each spi_sync() call.
> > > >
> > > > Signed-off-by: David Lechner <dlechner@...libre.com>
> > > > ---
> > > > drivers/iio/adc/ad7380.c | 52 +++++++++++++++++++++++++++++++++++++++++++++++-------
> > > > 1 file changed, 45 insertions(+), 7 deletions(-)
> > > >
> > > > diff --git a/drivers/iio/adc/ad7380.c b/drivers/iio/adc/ad7380.c
> > > > index abd746aef868..5c5d2642a474 100644
> > > > --- a/drivers/iio/adc/ad7380.c
> > > > +++ b/drivers/iio/adc/ad7380.c
> > > > @@ -133,6 +133,7 @@ struct ad7380_state {
> > > >  	struct spi_device *spi;
> > > >  	struct regulator *vref;
> > > >  	struct regmap *regmap;
> > > > +	struct spi_message *msg;
> > > >  	/*
> > > >  	 * DMA (thus cache coherency maintenance) requires the
> > > >  	 * transfer buffers to live in their own cache lines.
> > > > @@ -231,19 +232,55 @@ static int ad7380_debugfs_reg_access(struct iio_dev *indio_dev, u32 reg,
> > > >  	return ret;
> > > >  }
> > > > 
> > > > +static int ad7380_buffer_preenable(struct iio_dev *indio_dev)
> > > > +{
> > > > +	struct ad7380_state *st = iio_priv(indio_dev);
> > > > +	struct spi_transfer *xfer;
> > > > +	int ret;
> > > > +
> > > > +	st->msg = spi_message_alloc(1, GFP_KERNEL);
> > > > +	if (!st->msg)
> > > > +		return -ENOMEM;
> > > > +
> > > > +	xfer = list_first_entry(&st->msg->transfers, struct spi_transfer,
> > > > +				transfer_list);
> > > > +
> > > > +	xfer->bits_per_word = st->chip_info->channels[0].scan_type.realbits;
> > > > +	xfer->len = 4;
> > > > +	xfer->rx_buf = st->scan_data.raw;
> > > > +
> > > > +	ret = spi_optimize_message(st->spi, st->msg);
> > > > +	if (ret) {
> > > > +		spi_message_free(st->msg);
> > > > +		return ret;
> > > > +	}
> > > > +
> > > > +	return 0;
> > > > +}
> > > > +
> > > > +static int ad7380_buffer_postdisable(struct iio_dev *indio_dev)
> > > > +{
> > > > +	struct ad7380_state *st = iio_priv(indio_dev);
> > > > +
> > > > +	spi_unoptimize_message(st->msg);
> > > > +	spi_message_free(st->msg);
> > > > +
> > > > +	return 0;
> > > > +}
> > > > +
> > >
> > > Not such a big deal, but unless I'm missing something we could have the
> > > spi_message (+ the transfer) statically allocated in struct ad7380_state
> > > and do the optimize only once at probe (naturally with a proper devm
> > > action for unoptimize). Then we would not need to do this for every
> > > buffer enable + disable. I know in terms of performance it won't matter,
> > > but it would be less code I guess.
> > >
> > > Am I missing something?
> >
> > No, your understanding is correct for the current state of everything
> > in this series. So, we could do as you suggest, but I have a feeling
> > that future additions to this driver might require that it gets
> > changed back to this approach eventually.
>
> Hmm, not really sure about that, as the chip_info stuff is always our friend :).
> And anyway, I'm of the opinion that we should keep things simple and only start
> to evolve when really needed (because often we never really need to evolve).
> But bah, as I said... this is really not a big deal.
>
Oops, I should have read Nuno's review before replying!
I'd rather we embedded it for now and did the optimization at probe.
Whilst it's a lot of work per transfer, it's not enough to worry about delaying
it until preenable(). It's easy to make that move and make it dynamic when
driver changes need it. In the meantime, I don't want lots of other drivers
picking up this pattern when they may never need the complexity of making
things more dynamic.
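
For concreteness, something along these lines - only a rough, untested sketch:
the helper names ad7380_init_msg()/ad7380_unoptimize_msg() are made up here,
and devm_add_action_or_reset() (linux/device.h) is just one way to handle the
unoptimize on remove:

struct ad7380_state {
	struct spi_device *spi;
	/* ... existing fields ... */
	struct spi_transfer xfer;
	struct spi_message msg;
	/* ... DMA-safe buffers as before ... */
};

static void ad7380_unoptimize_msg(void *msg)
{
	spi_unoptimize_message(msg);
}

/* Called once from probe instead of from preenable() */
static int ad7380_init_msg(struct device *dev, struct ad7380_state *st)
{
	int ret;

	st->xfer.bits_per_word = st->chip_info->channels[0].scan_type.realbits;
	st->xfer.len = 4;
	st->xfer.rx_buf = st->scan_data.raw;

	spi_message_init_with_transfers(&st->msg, &st->xfer, 1);

	ret = spi_optimize_message(st->spi, &st->msg);
	if (ret)
		return ret;

	/* Unoptimize automatically on driver removal */
	return devm_add_action_or_reset(dev, ad7380_unoptimize_msg, &st->msg);
}

The buffered read path would then just spi_sync() against the embedded
&st->msg, with no per-enable allocation at all.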
Jonathan
> - Nuno Sá
>