Message-Id: <159249892720.8894.5843182459934461610.b4-ty@kernel.org>
Date: Thu, 18 Jun 2020 17:48:52 +0100
From: Mark Brown <broonie@...nel.org>
To: Douglas Anderson <dianders@...omium.org>
Cc: Bjorn Andersson <bjorn.andersson@...aro.org>,
Alok Chauhan <alokc@...eaurora.org>, skakit@...eaurora.org,
linux-kernel@...r.kernel.org, linux-spi@...r.kernel.org,
Dilip Kota <dkota@...eaurora.org>,
Andy Gross <agross@...nel.org>, linux-arm-msm@...r.kernel.org,
swboyd@...omium.org
Subject: Re: [PATCH v3 0/5] spi: spi-geni-qcom: Fixes / perf improvements

On Tue, 16 Jun 2020 03:40:45 -0700, Douglas Anderson wrote:
> This patch series is a new version of the previous patch posted:
> [PATCH v2] spi: spi-geni-qcom: Speculative fix of "nobody cared" about interrupt
> https://lore.kernel.org/r/20200317133653.v2.1.I752ebdcfd5e8bf0de06d66e767b8974932b3620e@changeid
>
> At this point I've done enough tracing to know that there was a real
> race in the old code (not just weakly ordered memory problems) and
> that should be fixed with the locking patches.
>
> [...]
Applied to
https://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi.git for-next
Thanks!

[1/1] spi: spi-geni-qcom: No need for irqsave variant of spinlock calls
commit: 539afdf969d6ad7ded543d9abf14596aec411fe9

All being well, this means that it will be integrated into the linux-next
tree (usually sometime in the next 24 hours) and sent to Linus during
the next merge window (or sooner if it is a bug fix). However, if
problems are discovered then the patch may be dropped or reverted.
You may get further e-mails resulting from automated or manual testing
and review of the tree; please engage with people reporting problems and,
if needed, send follow-up patches addressing any issues that are reported.
If any updates are required or you are submitting further changes, they
should be sent as incremental updates against current git; existing
patches will not be replaced.
Please add any relevant lists and maintainers to the CCs when replying
to this mail.
Thanks,
Mark