Message-ID: <d76424f3-965e-abaa-9622-185eff94dfe9@codeaurora.org>
Date: Tue, 31 Mar 2020 13:56:00 +0530
From: Maulik Shah <mkshah@...eaurora.org>
To: Doug Anderson <dianders@...omium.org>
Cc: Stephen Boyd <swboyd@...omium.org>,
Evan Green <evgreen@...omium.org>,
Bjorn Andersson <bjorn.andersson@...aro.org>,
LKML <linux-kernel@...r.kernel.org>,
linux-arm-msm <linux-arm-msm@...r.kernel.org>,
Andy Gross <agross@...nel.org>,
Matthias Kaehlcke <mka@...omium.org>,
Rajendra Nayak <rnayak@...eaurora.org>,
Lina Iyer <ilina@...eaurora.org>, lsrao@...eaurora.org
Subject: Re: [PATCH v14 4/6] soc: qcom: rpmh: Invoke rpmh_flush() for dirty
caches
Hi,
On 3/27/2020 11:52 PM, Doug Anderson wrote:
> Hi,
>
> On Fri, Mar 27, 2020 at 4:00 AM Maulik Shah <mkshah@...eaurora.org> wrote:
>> * @ctrlr: controller making request to flush cached data
>> *
>> - * Return: -EBUSY if the controller is busy, probably waiting on a response
>> - * to a RPMH request sent earlier.
>> + * Return: 0 on success, error number otherwise.
>> *
>> - * This function is always called from the sleep code from the last CPU
>> - * that is powering down the entire system. Since no other RPMH API would be
>> - * executing at this time, it is safe to run lockless.
>> + * This function can either be called from sleep code on the last CPU
>> + * (thus no spinlock needed) or with the ctrlr->cache_lock already held.
>>
>> Now you can remove the "or with the ctrlr->cache_lock already held"
>> since it's no longer true.
>>
>> It can be true for other RSCs, so I kept it as is.
> I don't really understand this. The cache_lock is only a concept in
> "rpmh.c". How could another RSC grab the cache lock? If nothing
> else, can you remove this comment until support for those other RSCs
> is added and we can evaluate it then?
>
> -Doug
Okay, I will remove this comment until support for other RSCs is added.
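
For reference, a rough sketch of how the rpmh_flush() kernel-doc could read once
that clause is dropped, keeping only the wording visible in the quoted diff (the
summary line here is illustrative, not taken from the patch):

    /**
     * rpmh_flush() - flush cached requests to the controller (illustrative summary)
     *
     * @ctrlr: controller making request to flush cached data
     *
     * Return: 0 on success, error number otherwise.
     *
     * This function is called from sleep code on the last CPU
     * (thus no spinlock needed).
     */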
Thanks,
Maulik
--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation