Message-ID: <fb50ee6c-b8f9-6685-c4bd-43bcca5a1553@codeaurora.org>
Date: Tue, 10 Mar 2020 14:41:49 +0530
From: Maulik Shah <mkshah@...eaurora.org>
To: Doug Anderson <dianders@...omium.org>
Cc: Stephen Boyd <swboyd@...omium.org>,
Matthias Kaehlcke <mka@...omium.org>,
Evan Green <evgreen@...omium.org>,
Bjorn Andersson <bjorn.andersson@...aro.org>,
LKML <linux-kernel@...r.kernel.org>,
linux-arm-msm <linux-arm-msm@...r.kernel.org>,
Andy Gross <agross@...nel.org>,
Rajendra Nayak <rnayak@...eaurora.org>,
Lina Iyer <ilina@...eaurora.org>, lsrao@...eaurora.org
Subject: Re: [PATCH v9 3/3] soc: qcom: rpmh: Invoke rpmh_flush() for dirty
caches
On 3/6/2020 3:48 AM, Doug Anderson wrote:
> Hi,
>
> On Thu, Mar 5, 2020 at 1:41 AM Maulik Shah <mkshah@...eaurora.org> wrote:
>>>> There are other cases like below which also get impacted if the driver
>>>> doesn't cache anything...
>>>>
>>>> for example, when we don't have a dedicated ACTIVE TCS (i.e. the below
>>>> config with an ACTIVE TCS count of 0):
>>>> qcom,tcs-config = <ACTIVE_TCS 0>,
>>>>                   <SLEEP_TCS 3>,
>>>>                   <WAKE_TCS 3>;
>>>>
>>>> Now to send active data, the driver may re-use/re-purpose one of the
>>>> SLEEP or WAKE TCSes as an ACTIVE TCS, and once the work is done it is
>>>> returned to the SLEEP/WAKE TCS pool accordingly (a sketch of this
>>>> fallback follows the quoted message below). If the driver doesn't
>>>> cache, all the SLEEP and WAKE data is lost when one of these TCSes is
>>>> re-purposed as an ACTIVE TCS.
>>> Ah, interesting. I'll read the code more, but are you expecting this
>>> type of situation to work today, or is it theoretical for the future?
>> Yes, we have targets which need to work with this type of situation.
> My brain is still slowly absorbing all the code, but something tells
> me that targets with no ACTIVE TCS will not work properly with non-OSI
> mode unless you change your patches more. Specifically to make the
> zero ACTIVE TCS case work I think you need a rpmh_flush() call after
> _ALL_ calls to rpmh_write() and rpmh_write_batch() (even those
> modifying ACTIVE state). rpmh_write_async() will be yet more
> interesting because you'd have to flush in rpmh_tx_done() I guess?
> ...and also somehow you need to inhibit entering sleep mode if an
> async write was in progress? Maybe easier to just detect the
> "non-OSI-mode + 0 ACTIVE TCS" case at probe time and fail to probe?
>
>
> -Doug
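
For context, the fallback I described in the quoted config above is roughly the
following shape. This is a simplified sketch, not the exact driver code; the
structure and the WAKE_TCS fallback follow the rpmh-rsc.c naming, but the
details are elided:

/*
 * Sketch: pick a TCS group for a request. On targets with no
 * dedicated ACTIVE TCS, borrow a WAKE TCS for active-only
 * transfers; its sleep/wake contents must later be restored
 * from the driver's cache, which is why caching is needed.
 */
static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv,
					 const struct tcs_request *msg)
{
	struct tcs_group *tcs;
	int type;

	switch (msg->state) {
	case RPMH_ACTIVE_ONLY_STATE:
		type = ACTIVE_TCS;
		break;
	case RPMH_WAKE_ONLY_STATE:
		type = WAKE_TCS;
		break;
	case RPMH_SLEEP_STATE:
		type = SLEEP_TCS;
		break;
	default:
		return ERR_PTR(-EINVAL);
	}

	tcs = &drv->tcs[type];
	if (msg->state == RPMH_ACTIVE_ONLY_STATE && !tcs->num_tcs)
		tcs = &drv->tcs[WAKE_TCS];

	return tcs;
}
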
No, it shouldn't break with "non-OSI-mode + 0 ACTIVE TCS".
After taking your suggestion to do rpmh start/end transaction in v13, rpmh_end_transaction()
invokes rpmh_flush() only for the last client, and by that time all rpmh_write() and
rpmh_write_batch() calls are expected to have already finished, since a client first waits
for them to finish and only then invokes end.
So the driver handles rpmh_write() and rpmh_write_batch() calls fine, as the sketch below
illustrates.
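
The expected client-side pattern is something like this (illustrative only;
rpmh_start_transaction()/rpmh_end_transaction() are the start/end API from the
v13 discussion, and the address/data values are placeholders):

static int example_client_request(const struct device *dev)
{
	struct tcs_cmd cmd = {
		.addr = 0x50000,	/* placeholder resource address */
		.data = 0x1,
	};
	int ret;

	rpmh_start_transaction(dev);	/* start/end API assumed from v13 */

	/* Blocking write: returns only after the request completes. */
	ret = rpmh_write(dev, RPMH_ACTIVE_ONLY_STATE, &cmd, 1);

	/*
	 * The last client to invoke end triggers rpmh_flush() if the
	 * sleep/wake cache is dirty; the blocking write above has
	 * already finished by this point.
	 */
	rpmh_end_transaction(dev);

	return ret;
}
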
Regarding the rpmh_write_async() call, which is a fire-and-forget request from SW: the client
driver may immediately invoke rpmh_end_transaction() after it.
This case is also handled.
Let's again take an example to understand this:
1. A client invokes rpmh_write_async() to send ACTIVE cmds on a target which has zero ACTIVE TCS.
   The rpmh driver re-purposes one of the SLEEP/WAKE TCSes to use as ACTIVE; internally this also
   sets drv->tcs_in_use to true for the respective SLEEP/WAKE TCS.
2. The client, without waiting for the above to finish, goes ahead and invokes
   rpmh_end_transaction(), which calls rpmh_flush() (in case the cache has become dirty).
   Now if the re-purposed TCS is still in use in HW (transaction in progress), drv->tcs_in_use
   is still set, so rpmh_rsc_invalidate() (invoked from rpmh_flush()) keeps returning -EAGAIN
   until that TCS becomes free, and only then goes ahead and finishes its job (see the sketch
   below).
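
The retry in rpmh_flush() is essentially the loop below (a simplified sketch of
the idea, not the exact patch code; ctrlr_to_drv() maps the cached controller
to its rsc_drv):

static int rpmh_flush(struct rpmh_ctrlr *ctrlr)
{
	int ret;

	if (!ctrlr->dirty)
		return 0;	/* TCSes already hold the latest data */

	/*
	 * Invalidate the sleep/wake TCSes before reprogramming them.
	 * rpmh_rsc_invalidate() returns -EAGAIN while any TCS is still
	 * marked in-use (e.g. a re-purposed TCS finishing an async
	 * ACTIVE transfer), so retry until it succeeds.
	 */
	do {
		ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
	} while (ret == -EAGAIN);

	if (ret)
		return ret;

	/* ... write cached sleep and wake requests to the TCSes ... */

	ctrlr->dirty = false;
	return 0;
}
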
Thanks,
Maulik
--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation