Message-ID: <5d35fdfb.1c69fb81.5fafa.aaa9@mx.google.com>
Date: Mon, 22 Jul 2019 11:18:34 -0700
From: Stephen Boyd <swboyd@...omium.org>
To: Lina Iyer <ilina@...eaurora.org>
Cc: andy.gross@...aro.org, bjorn.andersson@...aro.org,
linux-arm-msm@...r.kernel.org, linux-soc@...r.kernel.org,
rnayak@...eaurora.org, linux-kernel@...r.kernel.org,
linux-pm@...r.kernel.org, dianders@...omium.org,
mkshah@...eaurora.org, "Raju P.L.S.S.S.N" <rplsssn@...eaurora.org>
Subject: Re: [PATCH 1/2] drivers: qcom: rpmh-rsc: simplify TCS locking
Quoting Lina Iyer (2019-07-22 09:20:03)
> On Fri, Jul 19 2019 at 12:20 -0600, Stephen Boyd wrote:
> >Quoting Lina Iyer (2019-07-01 08:29:06)
> >> From: "Raju P.L.S.S.S.N" <rplsssn@...eaurora.org>
> >>
> >> tcs->lock was introduced to serialize access within a TCS group. But
> >> even without tcs->lock, drv->lock is serving the same purpose. So
> >> use a single drv->lock.
> >
> >Isn't the downside now that we're going to be serializing access to the
> >different TCSes when two are being written in parallel or waited on? I
> >thought that was the whole point of splitting the lock into a TCS lock
> >and a general "driver" lock that protects the global driver state vs.
> >the specific TCS state.
> >
> Yes, but we were holding the drv->lock as well as the tcs->lock for the most
> critical path anyway (writing to the TCS). The added complexity
> doesn't seem to reduce the latency it was expected to reduce.
Ok. That sort of information should be in the commit text to explain why
the per-TCS lock isn't helping reduce the latency or throughput of the API.
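
For readers skimming the archive, here is a stand-alone userspace sketch of
the locking trade-off being discussed. It is not the rpmh-rsc code itself:
the structure layout, the tcs_in_use bitmap, and the function names are
invented for illustration, and pthread mutexes stand in for the kernel
spinlocks.

/*
 * Illustrative only: one driver-wide lock vs. a driver lock plus a
 * per-TCS-group lock. Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define CMDS_PER_TCS 16

struct tcs_group {
	pthread_mutex_t lock;		/* per-TCS-group lock (the one being removed) */
	int cmds[CMDS_PER_TCS];
	int ncmds;
};

struct rsc_drv {
	pthread_mutex_t lock;		/* driver-wide lock for global state */
	struct tcs_group tcs[2];
	unsigned int tcs_in_use;	/* global bookkeeping, e.g. busy bitmap */
};

/* "Before": both locks are taken on the write path, so two writers to
 * different TCS groups still meet on drv->lock for the bookkeeping part,
 * but only for that part. */
static void tcs_write_two_locks(struct rsc_drv *drv, int group, int cmd)
{
	struct tcs_group *tcs = &drv->tcs[group];

	pthread_mutex_lock(&tcs->lock);
	pthread_mutex_lock(&drv->lock);
	drv->tcs_in_use |= 1u << group;			/* global state */
	pthread_mutex_unlock(&drv->lock);

	tcs->cmds[tcs->ncmds++ % CMDS_PER_TCS] = cmd;	/* per-TCS state */
	pthread_mutex_unlock(&tcs->lock);
}

/* "After": a single drv->lock covers the whole operation, which is simpler
 * but serializes writers that target different TCS groups. */
static void tcs_write_one_lock(struct rsc_drv *drv, int group, int cmd)
{
	struct tcs_group *tcs = &drv->tcs[group];

	pthread_mutex_lock(&drv->lock);
	drv->tcs_in_use |= 1u << group;
	tcs->cmds[tcs->ncmds++ % CMDS_PER_TCS] = cmd;
	pthread_mutex_unlock(&drv->lock);
}

int main(void)
{
	struct rsc_drv drv;

	memset(&drv, 0, sizeof(drv));
	pthread_mutex_init(&drv.lock, NULL);
	pthread_mutex_init(&drv.tcs[0].lock, NULL);
	pthread_mutex_init(&drv.tcs[1].lock, NULL);

	tcs_write_two_locks(&drv, 0, 0x30);
	tcs_write_one_lock(&drv, 1, 0x31);
	printf("tcs_in_use bitmap: 0x%x\n", drv.tcs_in_use);
	return 0;
}

In the two-lock form, writers to different TCS groups contend only for the
short global bookkeeping step; in the single-lock form the entire write is
serialized. That is the trade-off Stephen is asking about, and Lina's reply
above argues that in practice both locks were held for the critical part of
the path anyway, so the per-TCS lock was not buying much.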