Date:   Thu, 21 Jul 2022 02:08:32 +0530
From:   Akhil P Oommen <quic_akhilpo@...cinc.com>
To:     Rob Clark <robdclark@...il.com>
CC:     Doug Anderson <dianders@...omium.org>, Sean Paul <sean@...rly.run>,
        Jonathan Marek <jonathan@...ek.ca>,
        David Airlie <airlied@...ux.ie>,
        linux-arm-msm <linux-arm-msm@...r.kernel.org>,
        Konrad Dybcio <konrad.dybcio@...ainline.org>,
        Abhinav Kumar <quic_abhinavk@...cinc.com>,
        dri-devel <dri-devel@...ts.freedesktop.org>,
        Bjorn Andersson <bjorn.andersson@...aro.org>,
        Matthias Kaehlcke <mka@...omium.org>,
        "Daniel Vetter" <daniel@...ll.ch>,
        Dmitry Baryshkov <dmitry.baryshkov@...aro.org>,
        Jordan Crouse <jordan@...micpenguin.net>,
        freedreno <freedreno@...ts.freedesktop.org>,
        Chia-I Wu <olvaffe@...il.com>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [Freedreno] [PATCH v2 3/7] drm/msm: Fix cx collapse issue during
 recovery

On 7/20/2022 11:36 PM, Rob Clark wrote:
> On Tue, Jul 12, 2022 at 12:15 PM Akhil P Oommen
> <quic_akhilpo@...cinc.com> wrote:
>> On 7/12/2022 10:14 PM, Rob Clark wrote:
>>> On Mon, Jul 11, 2022 at 10:05 PM Akhil P Oommen
>>> <quic_akhilpo@...cinc.com> wrote:
>>>> On 7/12/2022 4:52 AM, Doug Anderson wrote:
>>>>> Hi,
>>>>>
>>>>> On Fri, Jul 8, 2022 at 11:00 PM Akhil P Oommen <quic_akhilpo@...cinc.com> wrote:
>>>>>> There is some hardware logic under the CX domain. For a successful
>>>>>> recovery, we should ensure the cx headswitch collapses so that all
>>>>>> the stale states are cleared out. This is especially true for the
>>>>>> a6xx family, where we have a GMU co-processor.
>>>>>>
>>>>>> Currently, cx doesn't collapse due to a devlink between the gpu and
>>>>>> its smmu. So the *struct gpu device* needs to be runtime suspended to
>>>>>> ensure that the iommu driver removes its vote on the cx gdsc.
>>>>>>
>>>>>> Signed-off-by: Akhil P Oommen <quic_akhilpo@...cinc.com>
>>>>>> ---
>>>>>>
>>>>>> (no changes since v1)
>>>>>>
>>>>>>     drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 16 ++++++++++++++--
>>>>>>     drivers/gpu/drm/msm/msm_gpu.c         |  2 --
>>>>>>     2 files changed, 14 insertions(+), 4 deletions(-)
>>>>>>
>>>>>> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
>>>>>> index 4d50110..7ed347c 100644
>>>>>> --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
>>>>>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
>>>>>> @@ -1278,8 +1278,20 @@ static void a6xx_recover(struct msm_gpu *gpu)
>>>>>>             */
>>>>>>            gmu_write(&a6xx_gpu->gmu, REG_A6XX_GMU_GMU_PWR_COL_KEEPALIVE, 0);
>>>>>>
>>>>>> -       gpu->funcs->pm_suspend(gpu);
>>>>>> -       gpu->funcs->pm_resume(gpu);
>>>>>> +       /*
>>>>>> +        * Now drop all the pm_runtime usage count to allow cx gdsc to collapse.
>>>>>> +        * First drop the usage count from all active submits
>>>>>> +        */
>>>>>> +       for (i = gpu->active_submits; i > 0; i--)
>>>>>> +               pm_runtime_put(&gpu->pdev->dev);
>>>>>> +
>>>>>> +       /* And the final one from recover worker */
>>>>>> +       pm_runtime_put_sync(&gpu->pdev->dev);
>>>>>> +
>>>>>> +       for (i = gpu->active_submits; i > 0; i--)
>>>>>> +               pm_runtime_get(&gpu->pdev->dev);
>>>>>> +
>>>>>> +       pm_runtime_get_sync(&gpu->pdev->dev);
>>>>> In response to v1, Rob suggested pm_runtime_force_suspend/resume().
>>>>> Those seem like they would work to me, too. Why not use them?
>>>> Quoting my previous response, which I seem to have sent only to the
>>>> Freedreno list:
>>>>
>>>> "I believe it is supposed to be used only during system sleep state
>>>> transitions. Btw, we don't want pm_runtime_get() calls from elsewhere
>>>> to fail because we disabled RPM here."
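
For reference, a minimal sketch of the pm_runtime_force_suspend/resume()
variant being discussed; a6xx_recover_force_cycle() is a hypothetical
name for illustration, not part of the patch:

	static void a6xx_recover_force_cycle(struct msm_gpu *gpu)
	{
		struct device *dev = &gpu->pdev->dev;

		/*
		 * Forcing the GPU device into runtime suspend lets the
		 * iommu driver drop its vote on the cx gdsc (per the
		 * commit message above), but RPM is disabled in between,
		 * so concurrent pm_runtime_get() callers would fail --
		 * the concern raised in the quoted response.
		 */
		pm_runtime_force_suspend(dev);
		pm_runtime_force_resume(dev);
	}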
>>> The comment about not wanting other runpm calls to fail is valid.. but
>>> that is also solvable, i.e. by holding a lock around runpm calls.
>>> Which I think we need to do anyways, otherwise looping over
>>> gpu->active_submits is racy..
>>>
>>> I think pm_runtime_force_suspend/resume() is the least-bad option.. or
>>> at least I'm not seeing any obvious alternative that is better
>>>
>>> BR,
>>> -R
>> We are holding gpu->lock here, which will block further submissions from
>> the scheduler. Will active_submits still race?
>>
>> It is possible that there is another thread which successfully completed
>> pm_runtime_get() and, while it is accessing the hardware, we pull the
>> plug on the regulator/clock here. That will result in an obvious device
>> crash. So I can think of 2 solutions:
>>
>> 1. wrap *every* pm_runtime_get/put with a mutex. Something like:
>>               mutex_lock(&gpu->lock);
>>               pm_runtime_get(&gpu->pdev->dev);
>>               /* ... access hardware here ... */
>>               pm_runtime_put(&gpu->pdev->dev);
>>               mutex_unlock(&gpu->lock);
>>
>> 2. Drop the runtime votes from every submit in the recover worker and
>> wait/poll for the regulator to collapse, in case there are transient
>> votes on the regulator from other threads/subsystems.
>>
>> Option (2) seems simpler to me.  What do you think?
>>
> But I think without #1 you could still be racing w/ some other path
> that touches the hw, like devfreq, right?  They could be holding a
> runpm ref, so even if you loop over active_submits decrementing the
> runpm ref, it still doesn't drop to zero.
>
> BR,
> -R
Yes, you are right. There could be some transient votes from other 
threads/drivers/subsystems. This is the reason we need to poll for cx 
gdsc collapse in the next patch. Even with #1, it is difficult to 
coordinate with the smmu driver and close to impossible with tz/hyp.
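
For illustration, a rough sketch of that polling step; cx_gdsc_is_on()
is a made-up helper standing in for whatever status source the real
patch uses, and readx_poll_timeout() comes from <linux/iopoll.h>:

	static int a6xx_wait_for_cx_collapse(struct msm_gpu *gpu)
	{
		bool on;

		/* Sample the gdsc state every 100 us; give up after 500 ms. */
		return readx_poll_timeout(cx_gdsc_is_on, gpu, on, !on,
					  100, 500000);
	}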

-Akhil.
