Message-ID: <an4cvztdkqmrt7w2iaziihlxf4tbox65ze362v2lmycjnqg26y@jizjmh2ki34z>
Date: Wed, 29 Jan 2025 12:40:01 +0200
From: Dmitry Baryshkov <dmitry.baryshkov@...aro.org>
To: Ekansh Gupta <quic_ekangupt@...cinc.com>
Cc: srinivas.kandagatla@...aro.org, linux-arm-msm@...r.kernel.org,
gregkh@...uxfoundation.org, quic_bkumar@...cinc.com, linux-kernel@...r.kernel.org,
quic_chennak@...cinc.com, dri-devel@...ts.freedesktop.org, arnd@...db.de
Subject: Re: [PATCH v2 4/5] misc: fastrpc: Add polling mode support for
fastRPC driver
On Wed, Jan 29, 2025 at 11:12:16AM +0530, Ekansh Gupta wrote:
>
>
>
> On 1/29/2025 4:59 AM, Dmitry Baryshkov wrote:
> > On Mon, Jan 27, 2025 at 10:12:38AM +0530, Ekansh Gupta wrote:
> >> For any remote call to the DSP, after sending an invocation
> >> message, the fastRPC driver waits for a glink response, and during
> >> this time the CPU can go into low-power modes. Add polling mode
> >> support, with which the fastRPC driver polls continuously on a
> >> memory location after sending a message to the remote subsystem.
> >> This eliminates CPU wakeup and scheduling latencies and reduces
> >> fastRPC overhead. With this change, the DSP always sends a glink
> >> response, which is ignored if polling mode did not time out.
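
If I read this correctly, the driver-side loop would be doing something
roughly like the sketch below (the timeout constant and the ctx->poll
field are made-up names for illustration, not taken from this patch):

/*
 * Rough sketch only, in the context of the fastrpc driver: busy-poll a
 * completion word that the DSP is assumed to update in shared memory,
 * and fall back to the normal glink wait on timeout.
 */
static int fastrpc_poll_for_response(struct fastrpc_invoke_ctx *ctx)
{
	u32 elapsed;

	for (elapsed = 0; elapsed < FASTRPC_POLL_TIMEOUT_US; elapsed++) {
		if (READ_ONCE(*ctx->poll) == FASTRPC_POLL_RESPONSE)
			return 0;	/* done; later glink response is ignored */
		udelay(1);
	}

	return -ETIMEDOUT;	/* caller falls back to wait_for_completion() */
}
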
> > Is there a chance to implement actual async I/O protocol with the help
> > of the poll() call instead of hiding the polling / wait inside the
> > invoke2?
>
> This design is based on the current DSP firmware implementation:
> Call flow: https://github.com/quic-ekangupt/fastrpc/blob/invokev2/Docs/invoke_v2.md#5-polling-mode
>
> Can you please give some reference to the async I/O protocol that you've
> suggested? I can check if it can be implemented here.
As with the typical poll() call implementation:
- write some data using ioctl
- call poll() / select() to wait for the data to be processed
- read data using another ioctl
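
Something along these lines from user space; the two ioctl names and
numbers below are hypothetical, purely to illustrate the flow:

#include <poll.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

/* hypothetical ioctls, not part of the current fastrpc uapi */
#define FASTRPC_IOCTL_INVOKE_ASYNC	_IO('R', 16)
#define FASTRPC_IOCTL_INVOKE_ASYNC_RSP	_IO('R', 17)

/* 'args' points at whatever argument struct the new uapi would define */
int fastrpc_invoke_async(int fd, void *args)
{
	struct pollfd pfd = { .fd = fd, .events = POLLIN };
	int ret;

	/* 1. queue the remote call without blocking */
	ret = ioctl(fd, FASTRPC_IOCTL_INVOKE_ASYNC, args);
	if (ret)
		return ret;

	/* 2. sleep until the DSP has produced a response */
	ret = poll(&pfd, 1, -1);
	if (ret < 0)
		return ret;

	/* 3. collect the result with a second ioctl */
	return ioctl(fd, FASTRPC_IOCTL_INVOKE_ASYNC_RSP, args);
}
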
Getting back to your patch: from your commit message it is not clear
which SoCs support this feature. Remember that we support all kinds of
platforms, including ones that have been EoLed by Qualcomm.

Next, you wrote that in-driver polling eliminates CPU wakeup and
scheduling latencies. However, it should also increase power
consumption. Is there any measurable difference in latency, given that
you already go through the ioctl() syscall and thus incur two context
switches anyway? What is the actual impact?
--
With best wishes
Dmitry