Date:   Tue, 1 Nov 2022 17:45:03 -0700
From:   Doug Anderson <dianders@...omium.org>
To:     Stephen Boyd <swboyd@...omium.org>
Cc:     Michael Turquette <mturquette@...libre.com>,
        Stephen Boyd <sboyd@...nel.org>, linux-kernel@...r.kernel.org,
        linux-clk@...r.kernel.org, patches@...ts.linux.dev,
        Andy Gross <agross@...nel.org>,
        Bjorn Andersson <andersson@...nel.org>,
        Konrad Dybcio <konrad.dybcio@...ainline.org>,
        linux-arm-msm@...r.kernel.org,
        Dmitry Baryshkov <dmitry.baryshkov@...aro.org>,
        Johan Hovold <johan+linaro@...nel.org>,
        Ulf Hansson <ulf.hansson@...aro.org>,
        Taniya Das <quic_tdas@...cinc.com>,
        Satya Priya <quic_c_skakit@...cinc.com>,
        Matthias Kaehlcke <mka@...omium.org>
Subject: Re: [PATCH] clk: qcom: gdsc: Remove direct runtime PM calls

Hi,

On Tue, Nov 1, 2022 at 4:34 PM Stephen Boyd <swboyd@...omium.org> wrote:
>
> We shouldn't be calling runtime PM APIs from within the genpd
> enable/disable path for a couple reasons.
>
> First, this causes an AA lockdep splat because genpd can call into genpd
> code again while holding the genpd lock.
>
> WARNING: possible recursive locking detected
> 5.19.0-rc2-lockdep+ #7 Not tainted
> --------------------------------------------
> kworker/2:1/49 is trying to acquire lock:
> ffffffeea0370788 (&genpd->mlock){+.+.}-{3:3}, at: genpd_lock_mtx+0x24/0x30
>
> but task is already holding lock:
> ffffffeea03710a8 (&genpd->mlock){+.+.}-{3:3}, at: genpd_lock_mtx+0x24/0x30
>
> other info that might help us debug this:
>  Possible unsafe locking scenario:
>
>        CPU0
>        ----
>   lock(&genpd->mlock);
>   lock(&genpd->mlock);
>
>  *** DEADLOCK ***
>
>  May be due to missing lock nesting notation
>
> 3 locks held by kworker/2:1/49:
>  #0: 74ffff80811a5748 ((wq_completion)pm){+.+.}-{0:0}, at: process_one_work+0x320/0x5fc
>  #1: ffffffc008537cf8 ((work_completion)(&genpd->power_off_work)){+.+.}-{0:0}, at: process_one_work+0x354/0x5fc
>  #2: ffffffeea03710a8 (&genpd->mlock){+.+.}-{3:3}, at: genpd_lock_mtx+0x24/0x30
>
> stack backtrace:
> CPU: 2 PID: 49 Comm: kworker/2:1 Not tainted 5.19.0-rc2-lockdep+ #7
> Hardware name: Google Lazor (rev3 - 8) with KB Backlight (DT)
> Workqueue: pm genpd_power_off_work_fn
> Call trace:
>  dump_backtrace+0x1a0/0x200
>  show_stack+0x24/0x30
>  dump_stack_lvl+0x7c/0xa0
>  dump_stack+0x18/0x44
>  __lock_acquire+0xb38/0x3634
>  lock_acquire+0x180/0x2d4
>  __mutex_lock_common+0x118/0xe30
>  mutex_lock_nested+0x70/0x7c
>  genpd_lock_mtx+0x24/0x30
>  genpd_runtime_suspend+0x2f0/0x414
>  __rpm_callback+0xdc/0x1b8
>  rpm_callback+0x4c/0xcc
>  rpm_suspend+0x21c/0x5f0
>  rpm_idle+0x17c/0x1e0
>  __pm_runtime_idle+0x78/0xcc
>  gdsc_disable+0x24c/0x26c
>  _genpd_power_off+0xd4/0x1c4
>  genpd_power_off+0x2d8/0x41c
>  genpd_power_off_work_fn+0x60/0x94
>  process_one_work+0x398/0x5fc
>  worker_thread+0x42c/0x6c4
>  kthread+0x194/0x1b4
>  ret_from_fork+0x10/0x20
>
> Second, this confuses runtime PM on CoachZ for the camera devices by
> causing the camera clock controller's runtime PM usage_count to go
> negative after resuming from suspend. This is because runtime PM is
> being used on the clock controller while runtime PM is disabled for the
> device.
>
> The count goes negative because a GDSC is represented as a genpd, and
> each genpd that is attached to a device is resumed during the noirq
> phase of system-wide suspend/resume (see the noirq suspend ops
> assignment in pm_genpd_init() for more details). The camera GDSCs are
> attached to the camera devices with the 'power-domains' property in DT.
> Every device has runtime PM disabled in the late system suspend phase
> via __device_suspend_late(), and runtime PM is not usable again until
> it is re-enabled in device_resume_early(). The noirq phases run after
> the 'late' and before the 'early' phase of suspend/resume. So when the
> genpds are resumed in genpd_resume_noirq(), we call down into
> gdsc_enable(), which calls pm_runtime_resume_and_get(), which returns
> -EACCES to indicate that the resume failed because runtime PM is
> disabled for all devices.
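
(Side note, mostly to check my own understanding: the failing path
during noirq resume has roughly the shape below. This is my own
sketch, not code from your patch; the function and parameter names
are made up.)

  #include <linux/device.h>
  #include <linux/pm_runtime.h>

  /* Stand-in for the GDSC enable hook reached from genpd_resume_noirq(). */
  static int gdsc_enable_sketch(struct device *clkc_dev)
  {
          int ret;

          /*
           * Runtime PM for clkc_dev was disabled in __device_suspend_late()
           * and won't be re-enabled until device_resume_early(), so in the
           * noirq phase this returns -EACCES and the enable fails.
           */
          ret = pm_runtime_resume_and_get(clkc_dev);
          if (ret < 0)
                  return ret;

          /* GDSC register writes would happen here in the real driver. */

          pm_runtime_put(clkc_dev);
          return 0;
  }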
>
> Upon closer inspection, calling runtime PM APIs like this in the GDSC
> driver doesn't make sense. These calls were intended to make sure the
> GDSC powering the clock controller that provides other GDSCs was
> enabled, specifically the MMCX GDSC for the display clk controller on
> SM8250 (sm8250-dispcc), so that GDSC register accesses succeed. That
> already happens because we make 'dev->pm_domain' a parent domain of
> each GDSC we register in gdsc_register() via pm_genpd_add_subdomain().
> When any of these GDSCs is accessed, we'll enable the parent domain
> (in this specific case MMCX).
>
> We also remove the runtime PM get during registration, because when a
> genpd is registered it increments the count on the parent if the genpd
> itself is already enabled. And finally, the runtime PM state of the clk
> controller registering the GDSC shouldn't matter to the subdomain
> setup, so we assign 'dev' unconditionally so that when GDSCs are
> removed we properly unlink them from the clk controller's pm_domain.
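
(And the matching teardown, as I read it, would be something like the
sketch below; my illustration again, which is also why recording 'dev'
unconditionally matters. The function name is made up.)

  static void gdsc_unwire_parent_sketch(struct device *clkc_dev,
                                        struct generic_pm_domain *gdsc_pd)
  {
          /* Only unlink if the controller actually sits in a pm_domain. */
          if (clkc_dev->pm_domain)
                  pm_genpd_remove_subdomain(pd_to_genpd(clkc_dev->pm_domain),
                                            gdsc_pd);
  }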
>
> Cc: Dmitry Baryshkov <dmitry.baryshkov@...aro.org>
> Cc: Johan Hovold <johan+linaro@...nel.org>
> Cc: Ulf Hansson <ulf.hansson@...aro.org>
> Cc: Taniya Das <quic_tdas@...cinc.com>
> Cc: Satya Priya <quic_c_skakit@...cinc.com>
> Cc: Douglas Anderson <dianders@...omium.org>
> Cc: Matthias Kaehlcke <mka@...omium.org>
> Reported-by: Stephen Boyd <swboyd@...omium.org>
> Fixes: 1b771839de05 ("clk: qcom: gdsc: enable optional power domain support")
> Signed-off-by: Stephen Boyd <swboyd@...omium.org>
> ---
>  drivers/clk/qcom/gdsc.c | 64 ++++++-----------------------------------
>  1 file changed, 8 insertions(+), 56 deletions(-)

One small nit: the kernel-doc for "@dev" in "struct gdsc" is incorrect
after your patch. It still says this even though we're no longer using
it for pm_runtime calls:

 * @dev: the device holding the GDSC, used for pm_runtime calls
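
Maybe something like this would be more accurate now (just a
suggestion, word it however you like):

 * @dev: the device holding the GDSC, used to look up the parent power domain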

Other than that, this seems OK to me. I don't feel like I have a lot
of good intuition around PM clocks, genpd, and the other topics
discussed here, but I tried to look at the diff of
"drivers/clk/qcom/gdsc.c" from before all the "recent" patches to the
state after your patch, in other words the combined diff of these 4
patches:

clk: qcom: gdsc: Remove direct runtime PM calls
clk: qcom: gdsc: add missing error handling
clk: qcom: gdsc: Bump parent usage count when GDSC is found enabled
clk: qcom: gdsc: enable optional power domain support

That basically shows a combined change that does two things:

a) Adds error handling if pm_genpd_init() returns an error.

b) Makes it so that if "scs[i]->parent" wasn't provided, a parent is
implied from "dev->pm_domain" (rough sketch below).
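
Roughly, my reading of the combined result is the sketch below. This
is not the literal diff; the function and local names are mine, and
I'm assuming the usual struct gdsc fields from gdsc.h:

  #include <linux/pm_domain.h>

  #include "gdsc.h"

  static int gdsc_register_one_sketch(struct device *dev, struct gdsc *sc,
                                      bool on)
  {
          int ret;

          /* (a) propagate pm_genpd_init() failures instead of ignoring them */
          ret = pm_genpd_init(&sc->pd, NULL, !on);
          if (ret)
                  return ret;

          /* (b) no explicit parent given? imply one from dev->pm_domain */
          if (sc->parent)
                  return pm_genpd_add_subdomain(sc->parent, &sc->pd);
          else if (dev->pm_domain)
                  return pm_genpd_add_subdomain(pd_to_genpd(dev->pm_domain),
                                                &sc->pd);

          return 0;
  }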

That seems to make sense, but one thing I'm wondering about for "b)"
is how you know that "dev->pm_domain" can safely be treated as a
genpd. In other words, I'm hesitant about the
"pd_to_genpd(dev->pm_domain)" call. I'll assume that "dev->pm_domain"
isn't 100% guaranteed to be a genpd or else (presumably) we would have
stored a genpd. Is there something about the "dev" that's passed in
with "struct gdsc_desc" that gives a stronger guarantee about this
being a genpd?
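
(For reference, as far as I can tell pd_to_genpd() is just a
container_of() with no runtime check, roughly:

  static inline struct generic_pm_domain *pd_to_genpd(struct dev_pm_domain *pd)
  {
          return container_of(pd, struct generic_pm_domain, domain);
  }

so the safety really does hinge on who populated "dev->pm_domain".)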


In any case, I will note that this seems to make the hang that I
described [1] go away. I never totally dug into why the patch was
tickling it, but I'm happy for now that it's back to not reproducing.
:-)


[1] https://lore.kernel.org/r/20220922154354.2486595-1-dianders@chromium.org


-Doug
