Message-ID: <07c29682-41d7-5624-b08a-35dd0c223d1e@linaro.org>
Date:   Mon, 4 Jul 2022 19:09:43 +0300
From:   Dmitry Baryshkov <dmitry.baryshkov@...aro.org>
To:     Maulik Shah <quic_mkshah@...cinc.com>, bjorn.andersson@...aro.org,
        ulf.hansson@...aro.org
Cc:     linux-arm-msm@...r.kernel.org, linux-pm@...r.kernel.org,
        linux-kernel@...r.kernel.org, rafael@...nel.org,
        daniel.lezcano@...aro.org, quic_lsrao@...cinc.com,
        quic_rjendra@...cinc.com
Subject: Re: [PATCH v2 0/6] Add APSS RSC to Cluster power domain

On 11/05/2022 16:16, Maulik Shah wrote:
> Changes in v2:
> - First four changes from v1 are already in linux-next, drop them
> - Update dt-bindings change to yaml format
> - Address Ulf's comments from v1 patches
> 
> In this series, patches 1 to 4 add/correct the cpuidle states and
> apps_rsc TCS configuration to make them the same as in the downstream kernel.
> 
> Patches 5, 6 and 7 add the apps_rsc device to the cluster power domain so
> that, when the cluster is about to power down, the cluster pre-off
> notification programs the 'sleep' and 'wake' votes into the SLEEP and WAKE TCSes.
> 
> Patches 8, 9 and 10 program the next wakeup in the CONTROL_TCS.
> 
> [1] and [2] were the older way of programming the CONTROL_TCS (exporting an
> API and calling it when the last CPU was entering a deeper low power mode).
> Now that patches 5, 6 and 7 add the apps RSC to the cluster power domain,
> those patches are no longer needed with this series.
> 
> The series is tested on SM8250 with the latest linux-next tag next-20220107.
> 
> [1] https://patchwork.kernel.org/project/linux-arm-msm/patch/20190218140210.14631-3-rplsssn@codeaurora.org/
> [2] https://patchwork.kernel.org/project/linux-arm-msm/list/?series=59613
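
As a side note for anyone following along: the pre-off flow described in the
quoted cover letter maps onto the generic genpd notifier mechanism. A minimal,
purely illustrative sketch (my_rsc_pd_callback and the registration helper are
placeholder names, not the actual code from this series) could look like:

#include <linux/notifier.h>
#include <linux/pm_domain.h>

static int my_rsc_pd_callback(struct notifier_block *nb,
                              unsigned long action, void *data)
{
        switch (action) {
        case GENPD_NOTIFY_PRE_OFF:
                /*
                 * The cluster is about to power down: this is where the
                 * 'sleep' and 'wake' votes would be flushed into the
                 * SLEEP and WAKE TCSes.
                 */
                break;
        case GENPD_NOTIFY_ON:
                /* The cluster is back up; nothing to undo in this sketch. */
                break;
        }

        return NOTIFY_OK;
}

static struct notifier_block my_rsc_pd_nb = {
        .notifier_call = my_rsc_pd_callback,
};

/*
 * Called from probe, after the device has been attached to its power
 * domain (e.g. via the power-domains property added in the dtsi files).
 */
static int my_rsc_register_pd_notifier(struct device *dev)
{
        return dev_pm_genpd_add_notifier(dev, &my_rsc_pd_nb);
}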

Tested-by: Dmitry Baryshkov <dmitry.baryshkov@...aro.org> # SM8450

Also please note that these patches fix a regression on sm8[1234]50 which
dates back to 5.18 (because the dts parts were merged at that point). Amit
has reported rpmh clock timeouts on RB5, and on SM8450 we observed random
board stalls. Could you please describe this in the cover letter and follow
the process described in stable-kernel-rules.rst to get these patches
backported into 5.18/5.19? It is critical to get them in through the stable
queue.
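
For instance, each patch in the series could carry trailers along these lines
(the commit reference below is a placeholder for the actual dts commit that
introduced the regression, not a real hash):

    Fixes: <12+ char sha> ("<subject of the offending dts commit>")
    Cc: stable@vger.kernel.org # 5.18

so that the stable team can pick them up once they land in mainline.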

> 
> Lina Iyer (1):
>    soc: qcom: rpmh-rsc: Attach RSC to cluster PM domain
> 
> Maulik Shah (5):
>    dt-bindings: soc: qcom: Update devicetree binding document for
>      rpmh-rsc
>    arm64: dts: qcom: Add power-domains property for apps_rsc
>    PM: domains: Store the closest hrtimer event of the domain CPUs
>    soc: qcom: rpmh-rsc: Save base address of drv
>    soc: qcom: rpmh-rsc: Write CONTROL_TCS with next timer wakeup
> 
>   .../bindings/soc/qcom/qcom,rpmh-rsc.yaml           |   5 +
>   arch/arm64/boot/dts/qcom/sm8150.dtsi               |   1 +
>   arch/arm64/boot/dts/qcom/sm8250.dtsi               |   1 +
>   arch/arm64/boot/dts/qcom/sm8350.dtsi               |   1 +
>   arch/arm64/boot/dts/qcom/sm8450.dtsi               |   1 +
>   drivers/base/power/domain.c                        |  24 ++++
>   drivers/base/power/domain_governor.c               |   1 +
>   drivers/soc/qcom/rpmh-internal.h                   |   9 +-
>   drivers/soc/qcom/rpmh-rsc.c                        | 146 +++++++++++++++++++--
>   drivers/soc/qcom/rpmh.c                            |   4 +-
>   include/linux/pm_domain.h                          |   7 +
>   11 files changed, 184 insertions(+), 16 deletions(-)
> 


-- 
With best wishes
Dmitry
