Message-ID: <CAK8P3a3inEwKtfNhD4eT7m2t5eVLMcR5HJWR6-kzO__6F=U_tw@mail.gmail.com>
Date: Thu, 15 Nov 2018 06:48:36 -0800
From: Arnd Bergmann <arnd@...db.de>
To: Alex Elder <elder@...aro.org>
Cc: David Miller <davem@...emloft.net>,
Bjorn Andersson <bjorn.andersson@...aro.org>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>,
Networking <netdev@...r.kernel.org>,
DTML <devicetree@...r.kernel.org>, linux-arm-msm@...r.kernel.org,
linux-soc@...r.kernel.org,
Linux ARM <linux-arm-kernel@...ts.infradead.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
syadagir@...eaurora.org, mjavid@...eaurora.org,
Rob Herring <robh+dt@...nel.org>,
Mark Rutland <mark.rutland@....com>
Subject: Re: [RFC PATCH 10/12] soc: qcom: ipa: data path
On Wed, Nov 14, 2018 at 7:31 PM Alex Elder <elder@...aro.org> wrote:
>
> On 11/7/18 8:55 AM, Arnd Bergmann wrote:
> > On Wed, Nov 7, 2018 at 1:33 AM Alex Elder <elder@...aro.org> wrote:
> >>
> >> This patch contains "ipa_dp.c", which includes the bulk of the data
> >> path code. There is an overview in the code of how things operate,
> >> but there are already plans to rework this portion of the driver.
> >>
> >> In particular:
> >> - Interrupt handling will be replaced with a threaded interrupt
> >> handler. Currently handling occurs in a combination of
> >> interrupt and workqueue context, and this requires locking
> >> and atomic operations for proper synchronization.
> >
> > You probably don't want to use just a threaded IRQ handler to
> > start the poll function; that would still require an extra indirection.
>
> That's a really good point. However, I think that the path I'll
> take to *getting* to scheduling the poll in interrupt context
> will use a threaded interrupt handler. I'm hoping that will
> allow me to simplify the code in steps.
>
> The main reason for this split between working in interrupt
> context when possible, but pushing to a workqueue when not, is
> to allow IPA clock(s) to be turned off. Enabling the clocks
> is a blocking operation, so it can't be done in the top half
> interrupt handler. The thought was it would be best to work
> in interrupt context--if the clock was already active--but
> to defer to a workqueue to turn the clock on if necessary.
>
> The result requires locking and duplication of code that I
> find to be pretty confusing--and hard to reason about. I
> have been planning to re-do things to be better suited to
> NAPI, and knowing that, I haven't given the data path as
> much attention as some of the rest.
Right, that sounds like a good plan: start making it use a
threaded IRQ handler first to clean up the code, and then
think about optimizing the NAPI wakeup once that works
reliably.
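
As a first step, the threaded handler can simply do everything in
process context, since the blocking clock enable is then allowed
right in the handler. Something like this completely untested
sketch, where all of the ipa_* names are made up:

        static irqreturn_t ipa_isr_thread(int irq, void *dev_id)
        {
                struct ipa_ctx *ipa = dev_id;

                /* process context, so a blocking clock enable is fine */
                ipa_clock_get(ipa);

                /* do here what the workqueue used to do */
                ipa_process_completions(ipa);

                ipa_clock_put(ipa);

                return IRQ_HANDLED;
        }

        ...
        ret = request_threaded_irq(irq, NULL, ipa_isr_thread,
                                   IRQF_ONESHOT, "ipa", ipa);

With a NULL hard handler, IRQF_ONESHOT keeps the interrupt masked
until the thread function returns, so the interrupt/workqueue split
(and the locking that comes with it) goes away.
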
I think what you can do here eventually is to have
a combined threaded/non-threaded IRQ handler, where
the threaded handler can do everything it needs to do,
and the non-threaded handler does only one thing:
if all conditions are met for entering the NAPI handler
(an rx/tx IRQ is expected, clocks are enabled, ...), it calls
napi_schedule(); otherwise it defers to the threaded handler.
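
Something like this (again untested, with invented names):

        static irqreturn_t ipa_isr(int irq, void *dev_id)
        {
                struct ipa_ctx *ipa = dev_id;

                /* fast path: rx/tx IRQ expected and clocks already on */
                if (ipa_napi_ready(ipa)) {
                        napi_schedule(&ipa->napi);
                        return IRQ_HANDLED;
                }

                /* slow path: let the thread turn the clocks on first */
                return IRQ_WAKE_THREAD;
        }

        static irqreturn_t ipa_isr_thread(int irq, void *dev_id)
        {
                struct ipa_ctx *ipa = dev_id;

                ipa_clock_get(ipa);             /* may sleep */
                napi_schedule(&ipa->napi);
                /* clock reference is dropped once polling is done */

                return IRQ_HANDLED;
        }

        ...
        ret = request_threaded_irq(irq, ipa_isr, ipa_isr_thread, 0,
                                   "ipa", ipa);

That way the common case is a single napi_schedule() call straight
from hardirq context, and the thread only runs when something
actually needs to block first.
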
Arnd