Message-ID: <20241124170212.21442c3e@jic23-huawei>
Date: Sun, 24 Nov 2024 17:02:12 +0000
From: Jonathan Cameron <jic23@...nel.org>
To: David Lechner <dlechner@...libre.com>
Cc: Mark Brown <broonie@...nel.org>, Rob Herring <robh@...nel.org>,
Krzysztof Kozlowski <krzk+dt@...nel.org>, Conor Dooley
<conor+dt@...nel.org>, Nuno Sá <nuno.sa@...log.com>, Uwe
Kleine-König <ukleinek@...nel.org>, Michael Hennerich
<Michael.Hennerich@...log.com>, Lars-Peter Clausen <lars@...afoo.de>, David
Jander <david@...tonic.nl>, Martin Sperl <kernel@...tin.sperl.org>,
linux-spi@...r.kernel.org, devicetree@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-iio@...r.kernel.org,
linux-pwm@...r.kernel.org
Subject: Re: [PATCH v5 08/16] spi: axi-spi-engine: implement offload support
On Fri, 15 Nov 2024 14:18:47 -0600
David Lechner <dlechner@...libre.com> wrote:
> Implement SPI offload support for the AXI SPI Engine. Currently, the
> hardware only supports triggering offload transfers with a hardware
> trigger, so attempting to use an offload message in the regular SPI
> message queue will fail. Also, the hardware only allows streaming RX
> data to an external sink, so attempting to use an rx_buf in the
> offload message will fail.
>
> Signed-off-by: David Lechner <dlechner@...libre.com>
Locally this patch is fine, but it has made me wonder if the allocation
strategy for priv makes sense. It's not wrong, but perhaps more complex
than it needs to be.

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@...wei.com>
> diff --git a/drivers/spi/spi-axi-spi-engine.c b/drivers/spi/spi-axi-spi-engine.c
> index 9386ddc4714e..172a9f1f1ead 100644
> --- a/drivers/spi/spi-axi-spi-engine.c
> +++ b/drivers/spi/spi-axi-spi-engine.c
> static void spi_engine_release_hw(void *p)
> {
> struct spi_engine *spi_engine = p;
> @@ -675,8 +921,7 @@ static int spi_engine_probe(struct platform_device *pdev)
> struct spi_engine *spi_engine;
> struct spi_controller *host;
> unsigned int version;
> - int irq;
> - int ret;
> + int irq, ret;
>
> irq = platform_get_irq(pdev, 0);
> if (irq < 0)
> @@ -691,6 +936,46 @@ static int spi_engine_probe(struct platform_device *pdev)
> spin_lock_init(&spi_engine->lock);
> init_completion(&spi_engine->msg_complete);
>
> + /*
> + * REVISIT: for now, all SPI Engines only have one offload. In the
> + * future, this should be read from a memory mapped register to
> + * determine the number of offloads enabled at HDL compile time. For
> + * now, we can tell if an offload is present if there is a trigger
> + * source wired up to it.
> + */
> + if (device_property_present(&pdev->dev, "trigger-sources")) {
> + struct spi_engine_offload *priv;
> +
> + spi_engine->offload =
> + devm_spi_offload_alloc(&pdev->dev,
> + sizeof(struct spi_engine_offload));
> + if (IS_ERR(spi_engine->offload))
> + return PTR_ERR(spi_engine->offload);
> +
> + priv = spi_engine->offload->priv;
With the separate allocations of offload and priv back in patch 1 this
all feels more complex than it should be. Maybe we should just allocate
priv here and set it up via spi_engine->offload->priv = ...
Other elements of spi_engine->offload have to be set directly here
for it to work, so setting priv as well seems sensible.
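Roughly along these lines (untested sketch; assumes priv is allocated
with a plain devm_kzalloc() here, with devm_spi_offload_alloc() then
presumably not needing a priv size at all):

	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;

	/* engine-specific offload state, hooked up alongside ops etc. */
	priv->spi_engine = spi_engine;
	priv->offload_num = 0;

	spi_engine->offload->priv = priv;
	spi_engine->offload->ops = &spi_engine_offload_ops;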
> + priv->spi_engine = spi_engine;
> + priv->offload_num = 0;
> +
> + spi_engine->offload->ops = &spi_engine_offload_ops;
> + spi_engine->offload_caps = SPI_OFFLOAD_CAP_TRIGGER;
> +
> + if (device_property_match_string(&pdev->dev, "dma-names", "offload0-rx") >= 0) {
> + spi_engine->offload_caps |= SPI_OFFLOAD_CAP_RX_STREAM_DMA;
> + spi_engine->offload->xfer_flags |= SPI_OFFLOAD_XFER_RX_STREAM;
> + }
> +
> + if (device_property_match_string(&pdev->dev, "dma-names", "offload0-tx") >= 0) {
> + spi_engine->offload_caps |= SPI_OFFLOAD_CAP_TX_STREAM_DMA;
> + spi_engine->offload->xfer_flags |= SPI_OFFLOAD_XFER_TX_STREAM;
> + } else {
> + /*
> + * HDL compile option to enable TX DMA stream also disables
> + * the SDO memory, so can't do both at the same time.
> + */
> + spi_engine->offload_caps |= SPI_OFFLOAD_CAP_TX_STATIC_DATA;
> + }
> + }
> +