Message-ID: <20190722194624.GA11589@codeaurora.org>
Date: Mon, 22 Jul 2019 13:46:24 -0600
From: Lina Iyer <ilina@...eaurora.org>
To: Stephen Boyd <swboyd@...omium.org>
Cc: andy.gross@...aro.org, bjorn.andersson@...aro.org,
linux-arm-msm@...r.kernel.org, linux-soc@...r.kernel.org,
rnayak@...eaurora.org, linux-kernel@...r.kernel.org,
linux-pm@...r.kernel.org, dianders@...omium.org,
mkshah@...eaurora.org, "Raju P.L.S.S.S.N" <rplsssn@...eaurora.org>
Subject: Re: [PATCH 1/2] drivers: qcom: rpmh-rsc: simplify TCS locking
On Mon, Jul 22 2019 at 12:18 -0600, Stephen Boyd wrote:
>Quoting Lina Iyer (2019-07-22 09:20:03)
>> On Fri, Jul 19 2019 at 12:20 -0600, Stephen Boyd wrote:
>> >Quoting Lina Iyer (2019-07-01 08:29:06)
>> >> From: "Raju P.L.S.S.S.N" <rplsssn@...eaurora.org>
>> >>
>> >> tcs->lock was introduced to serialize access within a TCS group. But
>> >> even without tcs->lock, drv->lock serves the same purpose. So use a
>> >> single drv->lock.
>> >
>> >Isn't the downside now that we're going to be serializing access to the
>> >different TCSes when two are being written in parallel or waited on? I
>> >thought that was the whole point of splitting the lock into a TCS lock
>> >and a general "driver" lock that protects the global driver state vs.
>> >the specific TCS state.
>> >
>> Yes, but we were holding both drv->lock and tcs->lock for the most
>> critical of the paths anyway (writing to a TCS). The added complexity
>> doesn't seem to help reduce the latency it was expected to reduce.
>
>Ok. That sort of information should be in the commit text to explain why
>the split lock doesn't help reduce the latency or improve the throughput
>of the API.
>
Will add.
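
For reference, roughly what the simplification looks like. This is a
minimal sketch, not the actual driver code: the struct fields and the
find_free_tcs()/__tcs_buffer_write() helpers below are simplified
stand-ins for the rpmh-rsc internals.

#include <linux/spinlock.h>

/* Simplified stand-ins for the driver's structures */
struct tcs_group { spinlock_t lock; /* per-TCS-group state */ };
struct rsc_drv   { spinlock_t lock; /* global driver state */ };

/* Stand-ins for helpers defined elsewhere in the driver */
static int find_free_tcs(struct tcs_group *tcs);
static void __tcs_buffer_write(struct rsc_drv *drv, int tcs_id);

/* Before: nested locks, tcs->lock for the group, drv->lock inside */
static int tcs_write_nested(struct rsc_drv *drv, struct tcs_group *tcs)
{
	unsigned long flags;
	int tcs_id, ret = 0;

	spin_lock_irqsave(&tcs->lock, flags);
	spin_lock(&drv->lock);		/* global state: find a free TCS */
	tcs_id = find_free_tcs(tcs);
	if (tcs_id < 0)
		ret = tcs_id;
	spin_unlock(&drv->lock);
	if (!ret)
		__tcs_buffer_write(drv, tcs_id); /* the critical write */
	spin_unlock_irqrestore(&tcs->lock, flags);

	return ret;
}

/*
 * After: a single drv->lock covers the whole path, since both locks
 * were effectively held across the critical section anyway.
 */
static int tcs_write_single(struct rsc_drv *drv, struct tcs_group *tcs)
{
	unsigned long flags;
	int tcs_id, ret = 0;

	spin_lock_irqsave(&drv->lock, flags);
	tcs_id = find_free_tcs(tcs);
	if (tcs_id < 0)
		ret = tcs_id;
	else
		__tcs_buffer_write(drv, tcs_id);
	spin_unlock_irqrestore(&drv->lock, flags);

	return ret;
}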
--Lina