Message-ID: <20221201225705.46r2m35ketvzipox@builder.lan>
Date: Thu, 1 Dec 2022 16:57:05 -0600
From: Bjorn Andersson <andersson@...nel.org>
To: Akhil P Oommen <quic_akhilpo@...cinc.com>,
Ulf Hansson <ulf.hansson@...aro.org>
Cc: freedreno <freedreno@...ts.freedesktop.org>,
dri-devel@...ts.freedesktop.org, linux-arm-msm@...r.kernel.org,
Rob Clark <robdclark@...il.com>,
Stephen Boyd <swboyd@...omium.org>,
Dmitry Baryshkov <dmitry.baryshkov@...aro.org>,
Philipp Zabel <p.zabel@...gutronix.de>,
Douglas Anderson <dianders@...omium.org>,
krzysztof.kozlowski@...aro.org,
Abhinav Kumar <quic_abhinavk@...cinc.com>,
Andy Gross <agross@...nel.org>,
Daniel Vetter <daniel@...ll.ch>,
David Airlie <airlied@...ux.ie>,
Konrad Dybcio <konrad.dybcio@...ainline.org>,
Krzysztof Kozlowski <krzysztof.kozlowski+dt@...aro.org>,
Michael Turquette <mturquette@...libre.com>,
Rob Herring <robh+dt@...nel.org>, Sean Paul <sean@...rly.run>,
Stephen Boyd <sboyd@...nel.org>, devicetree@...r.kernel.org,
linux-clk@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v7 0/6] clk/qcom: Support gdsc collapse polling using
'reset' interface
On Wed, Oct 05, 2022 at 02:36:58PM +0530, Akhil P Oommen wrote:
>
@Ulf, Akhil has a power-domain for a piece of hardware which may be
voted active by multiple subsystems (co-processors/execution contexts)
in the system.
As such, during the powering down sequence we don't wait for the
power-domain to turn off. But in the event of an error, the recovery
mechanism relies on waiting for the hardware to settle in a powered off
state.
The proposal here is to use the reset framework to wait for this state
to be reached, before continuing with the recovery mechanism in the
client driver.
Given our other discussions on quirky behavior, do you have any
input/suggestions on this?
> Some clients, like the adreno GPU driver, would like to ensure that
> their GDSC has collapsed in hardware during a GPU reset sequence. This
> is because it is a votable GDSC which could be held ON by a vote from
> another subsystem (e.g. TZ, hypervisor) or by an internal hardware
> signal. To allow this, the gpucc driver can expose an interface to the
> client driver using the reset framework. Through it, the client driver
> can trigger polling within the gdsc driver.
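
For illustration, the flow described above might look roughly like the
sketch below in a client driver. This is a hedged sketch only: the reset
line name "cx_collapse" and the function names are hypothetical, not
quoted from the patches; the reset framework calls themselves
(devm_reset_control_get_optional_exclusive(), reset_control_reset()) are
the standard kernel consumer API.

```c
/*
 * Illustrative sketch only -- names like "cx_collapse" and the
 * example_* functions are hypothetical, not taken from the series.
 */
#include <linux/reset.h>
#include <linux/err.h>

struct example_gpu {
	struct reset_control *cx_collapse;
};

static int example_probe(struct device *dev, struct example_gpu *gpu)
{
	/* Optional reset line exposed by the gpucc driver. */
	gpu->cx_collapse =
		devm_reset_control_get_optional_exclusive(dev, "cx_collapse");
	return PTR_ERR_OR_ZERO(gpu->cx_collapse);
}

static void example_recover(struct example_gpu *gpu)
{
	/*
	 * This "reset" does not toggle a register; per the cover letter,
	 * the gpucc reset op polls the GDSC status until the power domain
	 * has collapsed, so recovery can proceed knowing the hardware
	 * reached the powered-off state.
	 */
	reset_control_reset(gpu->cx_collapse);

	/* ... continue with the normal GPU recovery sequence ... */
}
```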
@Akhil, this description is fairly generic. As we've reached the state
where the hardware has settled and we return to the client, what
prevents it from being powered up again?
Or is it simply a question of it hitting the powered-off state, not
necessarily staying there?
Regards,
Bjorn
>
> This series is rebased on top of qcom/linux:for-next branch.
>
> Related discussion: https://patchwork.freedesktop.org/patch/493144/
>
> Changes in v7:
> - Update commit message (Bjorn)
> - Rebased on top of qcom/linux:for-next branch.
>
> Changes in v6:
> - No code changes in this version. Just captured the Acked-by tags
>
> Changes in v5:
> - Nit: Remove a duplicate blank line (Krzysztof)
>
> Changes in v4:
> - Update gpu dt-binding schema
> - Typo fix in commit text
>
> Changes in v3:
> - Use pointer to const for "struct qcom_reset_ops" in qcom_reset_map (Krzysztof)
>
> Changes in v2:
> - Return error when a particular custom reset op is not implemented. (Dmitry)
>
> Akhil P Oommen (6):
> dt-bindings: clk: qcom: Support gpu cx gdsc reset
> clk: qcom: Allow custom reset ops
> clk: qcom: gdsc: Add a reset op to poll gdsc collapse
> clk: qcom: gpucc-sc7280: Add cx collapse reset support
> dt-bindings: drm/msm/gpu: Add optional resets
> arm64: dts: qcom: sc7280: Add Reset support for gpu
>
> .../devicetree/bindings/display/msm/gpu.yaml | 6 +++++
> arch/arm64/boot/dts/qcom/sc7280.dtsi | 3 +++
> drivers/clk/qcom/gdsc.c | 23 ++++++++++++++----
> drivers/clk/qcom/gdsc.h | 7 ++++++
> drivers/clk/qcom/gpucc-sc7280.c | 10 ++++++++
> drivers/clk/qcom/reset.c | 27 +++++++++++++++++++++-
> drivers/clk/qcom/reset.h | 8 +++++++
> include/dt-bindings/clock/qcom,gpucc-sc7280.h | 3 +++
> 8 files changed, 82 insertions(+), 5 deletions(-)
>
> --
> 2.7.4
>