Message-ID: <CAF6AEGt5H=T_0HOLrNqRHZOYNicfk74bgZrQH56k2bYpi5JsRA@mail.gmail.com>
Date:   Sun, 31 Jul 2022 15:15:49 -0700
From:   Rob Clark <robdclark@...il.com>
To:     Akhil P Oommen <quic_akhilpo@...cinc.com>
Cc:     freedreno <freedreno@...ts.freedesktop.org>,
        dri-devel@...ts.freedesktop.org, linux-arm-msm@...r.kernel.org,
        Bjorn Andersson <bjorn.andersson@...aro.org>,
        Jordan Crouse <jordan@...micpenguin.net>,
        Jonathan Marek <jonathan@...ek.ca>,
        Douglas Anderson <dianders@...omium.org>,
        Matthias Kaehlcke <mka@...omium.org>,
        Abhinav Kumar <quic_abhinavk@...cinc.com>,
        Daniel Vetter <daniel@...ll.ch>,
        David Airlie <airlied@...ux.ie>,
        Dmitry Baryshkov <dmitry.baryshkov@...aro.org>,
        Sean Paul <sean@...rly.run>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 2/8] drm/msm: Take single rpm refcount on behalf of all submits

On Sun, Jul 31, 2022 at 9:33 AM Akhil P Oommen <quic_akhilpo@...cinc.com> wrote:
>
> On 7/31/2022 9:26 PM, Rob Clark wrote:
> > On Sat, Jul 30, 2022 at 2:41 AM Akhil P Oommen <quic_akhilpo@...cinc.com> wrote:
> >> Instead of a separate refcount for each submit, take a single rpm
> >> refcount on behalf of all the submits. This makes it easier to drop
> >> the rpm refcount during recovery in an upcoming patch.
> >>
> >> Signed-off-by: Akhil P Oommen <quic_akhilpo@...cinc.com>
> >> ---
> >>
> >> (no changes since v1)
> > I see no earlier version of this patch?
> >
> >>   drivers/gpu/drm/msm/msm_gpu.c | 12 ++++++++----
> >>   1 file changed, 8 insertions(+), 4 deletions(-)
> >>
> >> diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
> >> index c8cd9bf..e1dd3cc 100644
> >> --- a/drivers/gpu/drm/msm/msm_gpu.c
> >> +++ b/drivers/gpu/drm/msm/msm_gpu.c
> >> @@ -663,11 +663,12 @@ static void retire_submit(struct msm_gpu *gpu, struct msm_ringbuffer *ring,
> >>          mutex_lock(&gpu->active_lock);
> >>          gpu->active_submits--;
> >>          WARN_ON(gpu->active_submits < 0);
> >> -       if (!gpu->active_submits)
> >> +       if (!gpu->active_submits) {
> >>                  msm_devfreq_idle(gpu);
> >> -       mutex_unlock(&gpu->active_lock);
> >> +               pm_runtime_put_autosuspend(&gpu->pdev->dev);
> >> +       }
> >>
> >> -       pm_runtime_put_autosuspend(&gpu->pdev->dev);
> >> +       mutex_unlock(&gpu->active_lock);
> >>
> >>          msm_gem_submit_put(submit);
> >>   }
> >> @@ -756,14 +757,17 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
> >>
> >>          /* Update devfreq on transition from idle->active: */
> >>          mutex_lock(&gpu->active_lock);
> >> -       if (!gpu->active_submits)
> >> +       if (!gpu->active_submits) {
> >> +               pm_runtime_get(&gpu->pdev->dev);
> >>                  msm_devfreq_active(gpu);
> >> +       }
> >>          gpu->active_submits++;
> >>          mutex_unlock(&gpu->active_lock);
> >>
> >>          gpu->funcs->submit(gpu, submit);
> >>          gpu->cur_ctx_seqno = submit->queue->ctx->seqno;
> >>
> >> +       pm_runtime_put(&gpu->pdev->dev);
> > this looks unbalanced?
> There is another pm_runtime_get_sync() at the top of this function,
> just before hw_init():
> https://elixir.bootlin.com/linux/v5.19-rc8/source/drivers/gpu/drm/msm/msm_gpu.c#L737

oh, right.. sorry, I was looking at my local stack of WIP patches
which went the opposite direction and moved the runpm into just
msm_job_run().. I'll drop that one
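
For reference, a minimal sketch of the runtime-PM refcount pairing after
this patch (names follow the quoted diff; retire_submit()'s parameter list
and unrelated details such as hangcheck are simplified, so treat this as
an illustration rather than the actual driver code):

void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
{
	/* Per-submit ref, taken near the top of the function before hw_init() */
	pm_runtime_get_sync(&gpu->pdev->dev);

	/* Transition idle -> active: take the single shared ref */
	mutex_lock(&gpu->active_lock);
	if (!gpu->active_submits) {
		pm_runtime_get(&gpu->pdev->dev);
		msm_devfreq_active(gpu);
	}
	gpu->active_submits++;
	mutex_unlock(&gpu->active_lock);

	gpu->funcs->submit(gpu, submit);

	/* Balances the pm_runtime_get_sync() above; the shared ref stays held */
	pm_runtime_put(&gpu->pdev->dev);
}

static void retire_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
{
	/* Transition active -> idle: drop the single shared ref */
	mutex_lock(&gpu->active_lock);
	gpu->active_submits--;
	WARN_ON(gpu->active_submits < 0);
	if (!gpu->active_submits) {
		msm_devfreq_idle(gpu);
		pm_runtime_put_autosuspend(&gpu->pdev->dev);
	}
	mutex_unlock(&gpu->active_lock);

	msm_gem_submit_put(submit);
}

So each submit's get_sync()/put() pair only brackets the submission itself,
while the single get()/put_autosuspend() pair spans the whole busy period,
which is what makes it easy to drop that one refcount during recovery.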

BR,
-R

>
> -Akhil.
> >
> > BR,
> > -R
> >
> >>          hangcheck_timer_reset(gpu);
> >>   }
> >>
> >> --
> >> 2.7.4
> >>
>
