Message-ID: <9fbf9226-578a-90aa-693d-9ea4fcda8281@codeaurora.org>
Date: Mon, 1 Jul 2019 13:07:44 -0600
From: Jeffrey Hugo <jhugo@...eaurora.org>
To: Rob Clark <robdclark@...il.com>
Cc: dri-devel <dri-devel@...ts.freedesktop.org>,
linux-arm-msm <linux-arm-msm@...r.kernel.org>,
freedreno <freedreno@...ts.freedesktop.org>,
aarch64-laptops@...ts.linaro.org,
linux-clk <linux-clk@...r.kernel.org>,
Linux PM <linux-pm@...r.kernel.org>,
Rob Clark <robdclark@...omium.org>,
Sean Paul <sean@...rly.run>, David Airlie <airlied@...ux.ie>,
Daniel Vetter <daniel@...ll.ch>,
Jordan Crouse <jcrouse@...eaurora.org>,
Abhinav Kumar <abhinavk@...eaurora.org>,
Sibi Sankar <sibis@...eaurora.org>,
Mamta Shukla <mamtashukla555@...il.com>,
Chandan Uddaraju <chandanu@...eaurora.org>,
Archit Taneja <architt@...eaurora.org>,
Rajesh Yadav <ryadav@...eaurora.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 4/5] drm/msm/dsi: get the clocks into OFF state at init
On 7/1/2019 12:58 PM, Rob Clark wrote:
> On Mon, Jul 1, 2019 at 11:37 AM Jeffrey Hugo <jhugo@...eaurora.org> wrote:
>>
>> On 6/30/2019 9:01 AM, Rob Clark wrote:
>>> From: Rob Clark <robdclark@...omium.org>
>>>
>>> Do an extra enable/disable cycle at init, to get the clks into disabled
>>> state in case the bootloader left them enabled.
>>>
>>> In case they were already enabled, the clk_prepare_enable() has no real
>>> effect, other than getting the enable_count/prepare_count into the right
>>> state so that we can disable clocks in the correct order. This way we
>>> avoid having stuck clocks when we later want to do a modeset and set the
>>> clock rates.
>>>
>>> Signed-off-by: Rob Clark <robdclark@...omium.org>
>>> ---
>>> drivers/gpu/drm/msm/dsi/dsi_host.c | 18 +++++++++++++++---
>>> drivers/gpu/drm/msm/dsi/pll/dsi_pll_10nm.c | 1 +
>>> 2 files changed, 16 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/msm/dsi/pll/dsi_pll_10nm.c b/drivers/gpu/drm/msm/dsi/pll/dsi_pll_10nm.c
>>> index aabab6311043..d0172d8db882 100644
>>> --- a/drivers/gpu/drm/msm/dsi/pll/dsi_pll_10nm.c
>>> +++ b/drivers/gpu/drm/msm/dsi/pll/dsi_pll_10nm.c
>>> @@ -354,6 +354,7 @@ static int dsi_pll_10nm_lock_status(struct dsi_pll_10nm *pll)
>>>  	if (rc)
>>>  		pr_err("DSI PLL(%d) lock failed, status=0x%08x\n",
>>>  				pll->id, status);
>>> +	rc = 0; // HACK, this will fail if PLL already running..
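
For anyone skimming the thread: the init-time cycle the commit message
describes boils down to the pattern below. This is an untested sketch
against the generic clk API from <linux/clk.h>; "example_clk" is a
placeholder, not the actual dsi_host.c hunk (which isn't quoted above):

	/* The bootloader may have left the clock running.  Enabling it
	 * first bumps the clk framework's prepare/enable counts to
	 * match the hardware state, so the subsequent disable really
	 * takes effect and everything starts from a known-off state. */
	ret = clk_prepare_enable(example_clk);
	if (ret)
		return ret;
	clk_disable_unprepare(example_clk);
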
>>
>> Umm, why? Is this intentional?
>>
>
> I need to sort out a proper solution for this.. but PLL lock will fail
> if the clk is already running (which, in that case, is fine since it
> is already running and locked), which will cause the clk_enable to
> fail..
>
> I guess there is some way that I can check that clk is already running
> and skip this check..
I'm sorry, but this makes no sense to me. What clock are we talking
about here?
If the pll is locked, then the lock check should just drop through. If
the pll cannot lock, you have an issue. I'm confused as to how any of
the downstream clocks can actually be running if the pll isn't locked.
I feel like we are not yet on the same page about what situation you
seem to be in. Can you describe in exacting detail?
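
In the meantime, if the intent is just "skip the lock poll when the PLL
is already up", I'd expect something along these lines rather than
unconditionally clobbering rc. Untested sketch in the context of
dsi_pll_10nm_lock_status() -- the early-exit helper and the register/bit
names here are hypothetical, not the actual 10nm PLL code:

	/* Hypothetical: bail out of the lock poll early if the PLL is
	 * already running and locked (e.g. left on by the bootloader).
	 * pll_10nm_is_locked(), PLL_STATUS_REG and PLL_LOCK_BIT are
	 * made-up names; readl_poll_timeout_atomic() is the real
	 * helper from <linux/iopoll.h>. */
	if (pll_10nm_is_locked(pll))
		return 0;

	rc = readl_poll_timeout_atomic(base + PLL_STATUS_REG, status,
				       status & PLL_LOCK_BIT,
				       10, 1000);
	if (rc)
		pr_err("DSI PLL(%d) lock failed, status=0x%08x\n",
		       pll->id, status);
	return rc;

But again, I don't see how the downstream clocks could be ticking while
the PLL reports unlocked, so the "already running but failing the lock
check" case is exactly what needs explaining.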