Message-Id: <175129558329.1416517.3510577981745100807.b4-ty@kernel.org>
Date: Mon, 30 Jun 2025 15:59:43 +0100
From: Mark Brown <broonie@...nel.org>
To: linux-spi@...r.kernel.org, linux-kernel@...r.kernel.org,
Thangaraj Samynathan <thangaraj.s@...rochip.com>
Cc: unglinuxdriver@...rochip.com
Subject: Re: [PATCH v1 for-next] spi: spi-pci1xxxx: enable concurrent DMA
read/write across SPI transfers
On Mon, 30 Jun 2025 13:02:33 +0530, Thangaraj Samynathan wrote:
> Refactor the pci1xxxx SPI driver to allow overlapping DMA read and
> write operations across SPI transfers. This improves throughput and
> reduces idle time between SPI transactions.
>
> Transfer sequence:
> - Start with a DMA read to load TX data from host to device buffer.
> - After DMA read completes, trigger the SPI transfer.
> - On SPI completion:
>   - Start DMA write to copy received data from RX buffer to host.
>   - Start the next DMA read to prepare TX data for the following transfer.
> - Begin the next SPI transfer after both DMA write and read complete.
>
> [...]
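
To make the ordering concrete, here is a minimal, self-contained C sketch
of the pipeline described in the quoted summary. The function names and the
printf-based simulation are illustrative placeholders only, not the actual
pci1xxxx driver callbacks; in the driver the steps run from DMA and SPI
completion interrupts rather than a sequential loop.

/*
 * Illustrative sketch (hypothetical names, not driver code) of the
 * pipelined sequence: DMA-read TX data, run the SPI transfer, then
 * overlap the RX write-back with the TX prefetch for the next transfer.
 */
#include <stdio.h>

enum { NUM_XFERS = 3 };

static void dma_read_tx(int i)  { printf("xfer %d: DMA read  (host -> device TX buffer)\n", i); }
static void spi_transfer(int i) { printf("xfer %d: SPI transfer on the wire\n", i); }
static void dma_write_rx(int i) { printf("xfer %d: DMA write (device RX buffer -> host)\n", i); }

int main(void)
{
	/* Prime the pipeline: load TX data for the first transfer. */
	dma_read_tx(0);

	for (int i = 0; i < NUM_XFERS; i++) {
		/* DMA read for transfer i has completed; run the SPI transfer. */
		spi_transfer(i);

		/*
		 * On SPI completion, the RX write-back for transfer i and
		 * the TX prefetch for transfer i + 1 proceed concurrently
		 * in the driver; the next SPI transfer starts only after
		 * both have completed.
		 */
		dma_write_rx(i);
		if (i + 1 < NUM_XFERS)
			dma_read_tx(i + 1);
	}
	return 0;
}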
Applied to
https://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi.git for-next
Thanks!
[1/1] spi: spi-pci1xxxx: enable concurrent DMA read/write across SPI transfers
commit: 7e1c28fbf235791cb5046fafdac5bc16fe8e788d
All being well this means that it will be integrated into the linux-next
tree (usually sometime in the next 24 hours) and sent to Linus during
the next merge window (or sooner if it is a bug fix). However, if
problems are discovered, the patch may be dropped or reverted.
You may get further e-mails resulting from automated or manual testing
and review of the tree; please engage with people reporting problems
and, if needed, send follow-up patches addressing any issues that are
reported.
If any updates are required or you are submitting further changes, they
should be sent as incremental updates against current git; existing
patches will not be replaced.
Please add any relevant lists and maintainers to the CCs when replying
to this mail.
Thanks,
Mark