Message-ID: <20240510-dlech-mainline-spi-engine-offload-2-v2-2-8707a870c435@baylibre.com>
Date: Fri, 10 May 2024 19:44:25 -0500
From: David Lechner <dlechner@...libre.com>
To: Mark Brown <broonie@...nel.org>,
Jonathan Cameron <jic23@...nel.org>,
Rob Herring <robh@...nel.org>,
Krzysztof Kozlowski <krzk+dt@...nel.org>,
Conor Dooley <conor+dt@...nel.org>,
Nuno Sá <nuno.sa@...log.com>
Cc: David Lechner <dlechner@...libre.com>,
Michael Hennerich <Michael.Hennerich@...log.com>,
Lars-Peter Clausen <lars@...afoo.de>,
David Jander <david@...tonic.nl>,
Martin Sperl <kernel@...tin.sperl.org>,
linux-spi@...r.kernel.org,
devicetree@...r.kernel.org,
linux-kernel@...r.kernel.org,
linux-iio@...r.kernel.org
Subject: [PATCH RFC v2 2/8] spi: add basic support for SPI offloading
SPI offloading is a feature that allows the SPI controller to perform
complex transfers without CPU intervention. This is useful, e.g. for
high-speed data acquisition.
This patch adds the basic infrastructure to support SPI offloading. It
introduces new callbacks that are to be implemented by controllers with
offload capabilities.
On SPI device probe, the standard spi-offloads devicetree property is
parsed and passed to the controller driver to reserve the resources
requested by the peripheral via the map_channel() callback.
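To make the controller side more concrete, the mapping might look roughly
like the sketch below. All my_* names and MY_NUM_OFFLOAD_CHANNELS are made
up for illustration; only the spi_controller_offload_ops callbacks and
their signatures come from this patch.

#define MY_NUM_OFFLOAD_CHANNELS	4

struct my_ctlr_state {
	struct spi_device *offload_owner[MY_NUM_OFFLOAD_CHANNELS];
};

static int my_map_channel(struct spi_device *spi, unsigned int id,
			  unsigned int channel)
{
	struct my_ctlr_state *st = spi_controller_get_devdata(spi->controller);

	/* @channel is the value read from the spi-offloads property. */
	if (channel >= MY_NUM_OFFLOAD_CHANNELS)
		return -EINVAL;

	/* Reserve the channel; it is released again in cleanup(). */
	if (st->offload_owner[channel])
		return -EBUSY;

	st->offload_owner[channel] = spi;
	return 0;
}

static int my_prepare(struct spi_device *spi, unsigned int id,
		      struct spi_message *msg)
{
	/* Record msg's transfers in the offload hardware here. */
	return 0;
}

static void my_unprepare(struct spi_device *spi, unsigned int id)
{
	/* Undo whatever my_prepare() programmed. */
}

static const struct spi_controller_offload_ops my_offload_ops = {
	.map_channel	= my_map_channel,
	.prepare	= my_prepare,
	.unprepare	= my_unprepare,
};

/* ...and in the controller's probe(): ctlr->offload_ops = &my_offload_ops; */
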
The peripheral driver can then use spi_offload_prepare() to load a SPI
message into the offload hardware.
If the controller supports it, this message can then be passed to the
SPI message queue as if it was a normal message. Future patches will also
implement a way to use a hardware trigger to start the message transfers
rather than going through the message queue.
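From the peripheral driver's point of view, usage then looks roughly like
the sketch below (the rx_buf and offload ID 0 are made up for illustration,
assuming the device node has e.g. spi-offloads = <0>):

	struct spi_transfer xfer = {
		.rx_buf = st->rx_buf,	/* DMA-safe buffer owned by the driver */
		.len = 4,
	};
	struct spi_message msg;
	int ret;

	spi_message_init_with_transfers(&msg, &xfer, 1);

	/* Do not call spi_optimize_message() first; prepare() does that. */
	ret = spi_offload_prepare(spi, 0, &msg);
	if (ret)
		return ret;

	/* ... the offload hardware can now replay the message ... */

	spi_offload_unprepare(spi, 0, &msg);
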
Signed-off-by: David Lechner <dlechner@...libre.com>
---
v2 changes:
This is a rework of "spi: add core support for controllers with offload
capabilities" from v1.
The spi_offload_get() function that Nuno didn't like is gone. Instead,
there is now a mapping callback that uses the new generic devicetree
binding to request resources automatically when a SPI device is probed.
The spi_offload_enable/disable() functions for dealing with hardware
triggers are deferred to a separate patch.
That leaves spi_offload_prepare/unprepare(), which have been reworked to
be a bit more robust.
In the previous review, Mark suggested that these functions should not
be separate from the spi_[un]optimize() functions. I understand the
reasoning behind that. However, it seems like there are two different
kinds of things going on here. Currently, spi_optimize() only performs
operations on the message data structures and doesn't poke any hardware.
This makes it free to be used by any peripheral without worrying about
tying up any hardware resources while the message is "optimized". On the
other hand, spi_offload_prepare() does poke hardware, so we need to be
more careful about how it is used. In that case, we also need a way to
specify exactly which hardware resources it should use, which is
currently done with the extra ID parameter.
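For reference, the two entry points being compared are:

/* Pure data structure work; any peripheral can hold a message "optimized": */
int spi_optimize_message(struct spi_device *spi, struct spi_message *msg);

/* Claims and programs offload hardware; @id selects which mapped resource: */
int spi_offload_prepare(struct spi_device *spi, unsigned int id,
			struct spi_message *msg);
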
---
drivers/spi/spi.c | 100 ++++++++++++++++++++++++++++++++++++++++++++++++
include/linux/spi/spi.h | 57 +++++++++++++++++++++++++++
2 files changed, 157 insertions(+)
diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
index 289feccca376..54b814cea54c 100644
--- a/drivers/spi/spi.c
+++ b/drivers/spi/spi.c
@@ -2477,6 +2477,28 @@ static int of_spi_parse_dt(struct spi_controller *ctlr, struct spi_device *spi,
of_spi_parse_dt_cs_delay(nc, &spi->cs_hold, "spi-cs-hold-delay-ns");
of_spi_parse_dt_cs_delay(nc, &spi->cs_inactive, "spi-cs-inactive-delay-ns");
+ /* Offloads */
+ rc = of_property_count_u32_elems(nc, "spi-offloads");
+ if (rc > 0) {
+ int num_ch = rc;
+
+ if (!ctlr->offload_ops) {
+ dev_err(&ctlr->dev, "SPI controller doesn't support offloading\n");
+ return -EINVAL;
+ }
+
+ for (idx = 0; idx < num_ch; idx++) {
+ of_property_read_u32_index(nc, "spi-offloads", idx, &value);
+
+ rc = ctlr->offload_ops->map_channel(spi, idx, value);
+ if (rc) {
+ dev_err(&ctlr->dev, "Failed to map offload channel %d: %d\n",
+ value, rc);
+ return rc;
+ }
+ }
+ }
+
return 0;
}
@@ -3231,6 +3253,11 @@ static int spi_controller_check_ops(struct spi_controller *ctlr)
}
}
+ if (ctlr->offload_ops && !(ctlr->offload_ops->map_channel &&
+ ctlr->offload_ops->prepare &&
+ ctlr->offload_ops->unprepare))
+ return -EINVAL;
+
return 0;
}
@@ -4708,6 +4735,79 @@ int spi_write_then_read(struct spi_device *spi,
}
EXPORT_SYMBOL_GPL(spi_write_then_read);
+/**
+ * spi_offload_prepare - prepare offload hardware for a transfer
+ * @spi: The spi device to use for the transfers.
+ * @id: Unique identifier for SPI device with more than one offload.
+ * @msg: The SPI message to use for the offload operation.
+ *
+ * Requests an offload instance with the specified ID and programs it with the
+ * provided message.
+ *
+ * The message must not be pre-optimized (do not call spi_optimize_message() on
+ * the message).
+ *
+ * Calls must be balanced with spi_offload_unprepare().
+ *
+ * Return: 0 on success, else a negative error code.
+ */
+int spi_offload_prepare(struct spi_device *spi, unsigned int id,
+ struct spi_message *msg)
+{
+ struct spi_controller *ctlr = spi->controller;
+ int ret;
+
+ if (!ctlr->offload_ops)
+ return -EOPNOTSUPP;
+
+ msg->offload = true;
+
+ ret = spi_optimize_message(spi, msg);
+ if (ret)
+ return ret;
+
+ mutex_lock(&ctlr->io_mutex);
+ ret = ctlr->offload_ops->prepare(spi, id, msg);
+ mutex_unlock(&ctlr->io_mutex);
+
+ if (ret) {
+ spi_unoptimize_message(msg);
+ msg->offload = false;
+ return ret;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(spi_offload_prepare);
+
+/**
+ * spi_offload_unprepare - releases any resources used by spi_offload_prepare()
+ * @spi: The same SPI device passed to spi_offload_prepare()
+ * @id: The same ID passed to spi_offload_prepare()
+ * @msg: The same SPI message passed to spi_offload_prepare()
+ *
+ * Callers must ensure that the offload is no longer in use before calling this
+ * function, e.g. no in-progress transfers.
+ */
+void spi_offload_unprepare(struct spi_device *spi, unsigned int id,
+ struct spi_message *msg)
+{
+ struct spi_controller *ctlr = spi->controller;
+
+ if (!ctlr->offload_ops)
+ return;
+
+ mutex_lock(&ctlr->io_mutex);
+ ctlr->offload_ops->unprepare(spi, id);
+ mutex_unlock(&ctlr->io_mutex);
+
+ msg->offload = false;
+ msg->offload_state = NULL;
+
+ spi_unoptimize_message(msg);
+}
+EXPORT_SYMBOL_GPL(spi_offload_unprepare);
+
/*-------------------------------------------------------------------------*/
#if IS_ENABLED(CONFIG_OF_DYNAMIC)
diff --git a/include/linux/spi/spi.h b/include/linux/spi/spi.h
index e8e1e798924f..a8fc16c6bf37 100644
--- a/include/linux/spi/spi.h
+++ b/include/linux/spi/spi.h
@@ -31,6 +31,7 @@ struct spi_transfer;
struct spi_controller_mem_ops;
struct spi_controller_mem_caps;
struct spi_message;
+struct spi_controller_offload_ops;
/*
* INTERFACES between SPI master-side drivers and SPI slave protocol handlers,
@@ -500,6 +501,7 @@ extern struct spi_device *spi_new_ancillary_device(struct spi_device *spi, u8 ch
* This field is optional and should only be implemented if the
* controller has native support for memory like operations.
* @mem_caps: controller capabilities for the handling of memory operations.
+ * @offload_ops: operations for controllers with offload support.
* @unprepare_message: undo any work done by prepare_message().
* @slave_abort: abort the ongoing transfer request on an SPI slave controller
* @target_abort: abort the ongoing transfer request on an SPI target controller
@@ -745,6 +747,9 @@ struct spi_controller {
const struct spi_controller_mem_ops *mem_ops;
const struct spi_controller_mem_caps *mem_caps;
+ /* Operations for controllers with offload support. */
+ const struct spi_controller_offload_ops *offload_ops;
+
/* GPIO chip select */
struct gpio_desc **cs_gpiods;
bool use_gpio_descriptors;
@@ -1114,6 +1119,7 @@ struct spi_transfer {
* @pre_optimized: peripheral driver pre-optimized the message
* @optimized: the message is in the optimized state
* @prepared: spi_prepare_message was called for the this message
+ * @offload: message is to be used with offload hardware
* @status: zero for success, else negative errno
* @complete: called to report transaction completions
* @context: the argument to complete() when it's called
@@ -1123,6 +1129,7 @@ struct spi_transfer {
* @queue: for use by whichever driver currently owns the message
* @state: for use by whichever driver currently owns the message
* @opt_state: for use by whichever driver currently owns the message
+ * @offload_state: for use by whichever driver currently owns the message
* @resources: for resource management when the SPI message is processed
*
* A @spi_message is used to execute an atomic sequence of data transfers,
@@ -1151,6 +1158,8 @@ struct spi_message {
/* spi_prepare_message() was called for this message */
bool prepared;
+ /* spi_offload_prepare() was called on this message */
+ bool offload;
/*
* REVISIT: we might want a flag affecting the behavior of the
@@ -1183,6 +1192,11 @@ struct spi_message {
* __spi_optimize_message() and __spi_unoptimize_message().
*/
void *opt_state;
+ /*
+ * Optional state for use by controller driver between calls to
+ * offload_ops->prepare() and offload_ops->unprepare().
+ */
+ void *offload_state;
/* List of spi_res resources when the SPI message is processed */
struct list_head resources;
@@ -1546,6 +1560,49 @@ static inline ssize_t spi_w8r16be(struct spi_device *spi, u8 cmd)
/*---------------------------------------------------------------------------*/
+/*
+ * Offloading support.
+ *
+ * Some SPI controllers support offloading of SPI transfers. Essentially,
+ * this allows the SPI controller to record SPI transfers and then play them
+ * back later in one go via a single trigger.
+ */
+
+/**
+ * struct spi_controller_offload_ops - callbacks for offload support
+ *
+ * Drivers for hardware with offload support need to implement all of these
+ * callbacks.
+ */
+struct spi_controller_offload_ops {
+ /**
+ * @map_channel: Callback to reserve an offload instance for the given
+ * SPI device. If a SPI device requires more than one instance, then
+ * @id is used to differentiate between them. Channels must be unmapped
+ * in the struct spi_controller::cleanup() callback.
+ */
+ int (*map_channel)(struct spi_device *spi, unsigned int id,
+ unsigned int channel);
+ /**
+ * @prepare: Callback to prepare the offload for the given SPI message.
+ * @msg and any of its members (including any xfer->tx_buf) are not
+ * guaranteed to be valid beyond the lifetime of this call.
+ */
+ int (*prepare)(struct spi_device *spi, unsigned int id,
+ struct spi_message *msg);
+ /**
+ * @unprepare: Callback to release any resources used by prepare().
+ */
+ void (*unprepare)(struct spi_device *spi, unsigned int id);
+};
+
+extern int spi_offload_prepare(struct spi_device *spi, unsigned int id,
+ struct spi_message *msg);
+extern void spi_offload_unprepare(struct spi_device *spi, unsigned int id,
+ struct spi_message *msg);
+
+/*---------------------------------------------------------------------------*/
+
/*
* INTERFACE between board init code and SPI infrastructure.
*
--
2.43.2