Message-ID: <20180912022204.GI15710@codeaurora.org>
Date: Tue, 11 Sep 2018 20:22:04 -0600
From: Lina Iyer <ilina@...eaurora.org>
To: Matthias Kaehlcke <mka@...omium.org>
Cc: Raju P L S S S N <rplsssn@...eaurora.org>, andy.gross@...aro.org,
david.brown@...aro.org, linux-arm-msm@...r.kernel.org,
linux-soc@...r.kernel.org, rnayak@...eaurora.org,
bjorn.andersson@...aro.org, linux-kernel@...r.kernel.org,
sboyd@...nel.org, evgreen@...omium.org, dianders@...omium.org
Subject: Re: [PATCH v2 3/6] drivers: qcom: rpmh: disallow active requests in
solver mode
On Tue, Sep 11 2018 at 17:02 -0600, Matthias Kaehlcke wrote:
>Hi Raju/Lina,
>
>On Fri, Jul 27, 2018 at 03:34:46PM +0530, Raju P L S S S N wrote:
>> From: Lina Iyer <ilina@...eaurora.org>
>>
>> Controllers may be in 'solver' mode, where they autonomously execute
>> low power modes for their hardware and as such are not available for
>> sending active votes. A device driver may notify the RPMH API that the
>> controller is in solver mode; while in this mode, requests from
>> platform drivers for an active state change using the RSC are
>> disallowed.
>>
>> Signed-off-by: Lina Iyer <ilina@...eaurora.org>
>> Signed-off-by: Raju P.L.S.S.S.N <rplsssn@...eaurora.org>
>> ---
>> drivers/soc/qcom/rpmh-internal.h | 2 ++
>> drivers/soc/qcom/rpmh.c | 59 ++++++++++++++++++++++++++++++++++++++++
>> include/soc/qcom/rpmh.h | 5 ++++
>> 3 files changed, 66 insertions(+)
>>
>> diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
>> index 4ff43bf..6cd2f78 100644
>> --- a/drivers/soc/qcom/rpmh-internal.h
>> +++ b/drivers/soc/qcom/rpmh-internal.h
>> @@ -72,12 +72,14 @@ struct rpmh_request {
>> * @cache_lock: synchronize access to the cache data
>> * @dirty: was the cache updated since flush
>> * @batch_cache: Cache sleep and wake requests sent as batch
>> + * @in_solver_mode: Controller is busy in solver mode
>> */
>> struct rpmh_ctrlr {
>> struct list_head cache;
>> spinlock_t cache_lock;
>> bool dirty;
>> struct list_head batch_cache;
>> + bool in_solver_mode;
>> };
>>
>> /**
>> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
>> index 2382276..0d276fd 100644
>> --- a/drivers/soc/qcom/rpmh.c
>> +++ b/drivers/soc/qcom/rpmh.c
>> @@ -5,6 +5,7 @@
>>
>> #include <linux/atomic.h>
>> #include <linux/bug.h>
>> +#include <linux/delay.h>
>> #include <linux/interrupt.h>
>> #include <linux/jiffies.h>
>> #include <linux/kernel.h>
>> @@ -75,6 +76,50 @@ static struct rpmh_ctrlr *get_rpmh_ctrlr(const struct device *dev)
>> return &drv->client;
>> }
>>
>> +static int check_ctrlr_state(struct rpmh_ctrlr *ctrlr, enum rpmh_state state)
>> +{
>> + unsigned long flags;
>> + int ret = 0;
>> +
>> + /* Do not allow setting active votes when in solver mode */
>> + spin_lock_irqsave(&ctrlr->cache_lock, flags);
>> + if (ctrlr->in_solver_mode && state == RPMH_ACTIVE_ONLY_STATE)
>> + ret = -EBUSY;
>> + spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>> +
>> + return ret;
>> +}
>> +
>> +/**
>> + * rpmh_mode_solver_set: Indicate that the RSC controller hardware has
>> + * been configured to be in solver mode
>> + *
>> + * @dev: the device making the request
>> + * @enable: Boolean value indicating if the controller is in solver mode.
>> + *
>> + * When solver mode is enabled, active votes will be rejected; only
>> + * sleep and wake votes may be sent.
>> + */
>> +int rpmh_mode_solver_set(const struct device *dev, bool enable)
>> +{
>> + struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
>> + unsigned long flags;
>> +
>> + for (;;) {
>> + spin_lock_irqsave(&ctrlr->cache_lock, flags);
>> + if (rpmh_rsc_ctrlr_is_idle(ctrlr_to_drv(ctrlr))) {
>> + ctrlr->in_solver_mode = enable;
>
>As commented on '[v2,1/6] drivers: qcom: rpmh-rsc: return if the
>controller is idle', this seems potentially
>racy. _is_idle() could report the controller as idle, even though some
>TCSes are in use (after _is_idle() visited them).
>
>Additional locking may be needed or a comment if this situation should
>never happen on a sane system (I don't know enough about RPMh and its
>clients to judge if this is the case).
Hmm.. I forgot that we call _is_idle() from here. Maybe a lock would be
helpful.
-- Lina
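
For reference, the fix being discussed (hold the lock across both the
idle check and the mode change) can be sketched in plain C with
pthreads. This is a userspace illustration only, not the kernel patch:
`tcs_in_use`, `ctrlr_is_idle()` and `solver_set()` here are hypothetical
stand-ins for the TCS bookkeeping behind rpmh_rsc_ctrlr_is_idle() and
for rpmh_mode_solver_set().

```c
#include <pthread.h>
#include <stdbool.h>

/*
 * Userspace sketch of the locking pattern discussed above, NOT the
 * kernel code. The fields and helpers are illustrative stand-ins for
 * struct rpmh_ctrlr and rpmh_rsc_ctrlr_is_idle().
 */
struct ctrlr {
	pthread_mutex_t lock;	/* plays the role of cache_lock */
	int tcs_in_use;		/* number of busy TCSes */
	bool in_solver_mode;
};

/* Stand-in for rpmh_rsc_ctrlr_is_idle(); caller must hold c->lock. */
static bool ctrlr_is_idle(struct ctrlr *c)
{
	return c->tcs_in_use == 0;
}

/*
 * Flip solver mode only while the controller is known to be idle.
 * Holding the lock across both the check and the assignment closes the
 * window in which a TCS could become busy after the idle check.
 * Returns 0 on success, -1 (standing in for -EBUSY) if busy.
 */
static int solver_set(struct ctrlr *c, bool enable)
{
	int ret = 0;

	pthread_mutex_lock(&c->lock);
	if (ctrlr_is_idle(c))
		c->in_solver_mode = enable;
	else
		ret = -1;
	pthread_mutex_unlock(&c->lock);
	return ret;
}
```

For this to actually close the race, any path that marks a TCS busy
would have to take the same lock, so the idle check and the mode change
are serialized against TCS allocation.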