Message-ID: <CAF6AEGs5ROH0xqCwZKs2JaUvoOiEOmyqneLCW9eQDOJhPqNLFQ@mail.gmail.com>
Date:   Thu, 27 Apr 2023 07:31:39 -0700
From:   Rob Clark <robdclark@...il.com>
To:     Rob Clark <robdclark@...il.com>,
        Emil Velikov <emil.l.velikov@...il.com>,
        Rob Clark <robdclark@...omium.org>,
        Tvrtko Ursulin <tvrtko.ursulin@...ux.intel.com>,
        Akhil P Oommen <quic_akhilpo@...cinc.com>,
        Abhinav Kumar <quic_abhinavk@...cinc.com>,
        dri-devel@...ts.freedesktop.org,
        open list <linux-kernel@...r.kernel.org>,
        Konrad Dybcio <konrad.dybcio@...aro.org>,
        "open list:DRM DRIVER FOR MSM ADRENO GPU" 
        <linux-arm-msm@...r.kernel.org>,
        Dmitry Baryshkov <dmitry.baryshkov@...aro.org>,
        "open list:DRM DRIVER FOR MSM ADRENO GPU" 
        <freedreno@...ts.freedesktop.org>, Sean Paul <sean@...rly.run>
Subject: Re: [RFC 2/3] drm/msm: Rework get_comm_cmdline() helper

On Thu, Apr 27, 2023 at 2:39 AM Daniel Vetter <daniel@...ll.ch> wrote:
>
> On Fri, Apr 21, 2023 at 07:47:26AM -0700, Rob Clark wrote:
> > On Fri, Apr 21, 2023 at 2:33 AM Emil Velikov <emil.l.velikov@...il.com> wrote:
> > >
> > > Greeting all,
> > >
> > > Sorry for the delay - Easter Holidays, food coma and all that :-)
> > >
> > > On Tue, 18 Apr 2023 at 15:31, Rob Clark <robdclark@...il.com> wrote:
> > > >
> > > > On Tue, Apr 18, 2023 at 1:34 AM Daniel Vetter <daniel@...ll.ch> wrote:
> > > > >
> > > > > On Tue, Apr 18, 2023 at 09:27:49AM +0100, Tvrtko Ursulin wrote:
> > > > > >
> > > > > > On 17/04/2023 21:12, Rob Clark wrote:
> > > > > > > From: Rob Clark <robdclark@...omium.org>
> > > > > > >
> > > > > > > Make it work in terms of ctx so that it can be re-used for fdinfo.
> > > > > > >
> > > > > > > Signed-off-by: Rob Clark <robdclark@...omium.org>
> > > > > > > ---
> > > > > > >   drivers/gpu/drm/msm/adreno/adreno_gpu.c |  4 ++--
> > > > > > >   drivers/gpu/drm/msm/msm_drv.c           |  2 ++
> > > > > > >   drivers/gpu/drm/msm/msm_gpu.c           | 13 ++++++-------
> > > > > > >   drivers/gpu/drm/msm/msm_gpu.h           | 12 ++++++++++--
> > > > > > >   drivers/gpu/drm/msm/msm_submitqueue.c   |  1 +
> > > > > > >   5 files changed, 21 insertions(+), 11 deletions(-)
> > > > > > >
> > > > > > > diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> > > > > > > index bb38e728864d..43c4e1fea83f 100644
> > > > > > > --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> > > > > > > +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> > > > > > > @@ -412,7 +412,7 @@ int adreno_set_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
> > > > > > >             /* Ensure string is null terminated: */
> > > > > > >             str[len] = '\0';
> > > > > > > -           mutex_lock(&gpu->lock);
> > > > > > > +           mutex_lock(&ctx->lock);
> > > > > > >             if (param == MSM_PARAM_COMM) {
> > > > > > >                     paramp = &ctx->comm;
> > > > > > > @@ -423,7 +423,7 @@ int adreno_set_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
> > > > > > >             kfree(*paramp);
> > > > > > >             *paramp = str;
> > > > > > > -           mutex_unlock(&gpu->lock);
> > > > > > > +           mutex_unlock(&ctx->lock);
> > > > > > >             return 0;
> > > > > > >     }
> > > > > > > diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
> > > > > > > index 3d73b98d6a9c..ca0e89e46e13 100644
> > > > > > > --- a/drivers/gpu/drm/msm/msm_drv.c
> > > > > > > +++ b/drivers/gpu/drm/msm/msm_drv.c
> > > > > > > @@ -581,6 +581,8 @@ static int context_init(struct drm_device *dev, struct drm_file *file)
> > > > > > >     rwlock_init(&ctx->queuelock);
> > > > > > >     kref_init(&ctx->ref);
> > > > > > > +   ctx->pid = get_pid(task_pid(current));
> > > > > >
> > > > > > Would it simplify things for msm if DRM core had an up-to-date file->pid as
> > > > > > proposed in
> > > > > > https://patchwork.freedesktop.org/patch/526752/?series=109902&rev=4 ? It
> > > > > > gets updated if the ioctl issuer is different from the fd opener, and seeing
> > > > > > context_init here reminded me of it. Maybe you wouldn't have to track the
> > > > > > pid in msm?
> > > >
> > > > The problem is that we also need this for gpu devcore dumps, which
> > > > could happen after the drm_file is closed.  The ctx can outlive the
> > > > file.
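
(The lifetime part works because ctx is refcounted - kref_init() in
context_init above.  Rough sketch of the capture-side usage; the helper
names approximate the kref get/put wrappers in msm_gpu.h, the capture
function itself is made up:)

        static void hypothetical_capture(struct msm_file_private *ctx)
        {
                ctx = msm_file_private_get(ctx);  /* kref_get: safe past file close */
                /* ... snapshot ctx->comm / ctx->cmdline into the crashstate ... */
                msm_file_private_put(ctx);        /* kref_put: frees on last ref */
        }
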
> > > >
> > > I think we all kept forgetting about that. MSM has had support for ages,
> > > while AMDGPU is only the second driver to land it - just a release
> > > ago.
> > >
> > > > But the ctx->pid has the same problem as the existing file->pid when
> > > > it comes to Xorg.. hopefully over time that problem just goes away.
> > >
> > > Out of curiosity: what do you mean by "when it comes to Xorg" - the
> > > "was_master" handling or something else?
> >
> > The problem is that Xorg is the one that opens the drm fd and then
> > passes it to the client.. so the pid of drm_file is the Xorg pid,
> > not the client's, which makes it not terribly informative.
> >
> > The patch Tvrtko linked above would address that for drm_file, but
> > not for other driver-internal usages.  Maybe it could be wired up as a
> > helper so that drivers don't have to re-invent that dance.  Idk, I
> > have to think about it.
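
(For illustration, roughly the kind of helper I mean - modelled on Tvrtko's
series, but all of the names below are made up and it ignores the locking
and races the real patch has to deal with:)

        /* hypothetical core helper: refresh file->pid when the task issuing
         * an ioctl is not the task that opened the fd (the Xorg case) */
        static void drm_file_refresh_pid(struct drm_file *file)
        {
                struct pid *pid = task_tgid(current);

                if (pid == file->pid)
                        return;

                put_pid(file->pid);         /* drop the opener's pid */
                file->pid = get_pid(pid);   /* remember the actual client */
        }
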
> >
> > Btw, with my WIP drm sched fence signalling patch, lockdep is unhappy
> > when gpu devcore dumps are triggered.  I'm still pondering how to
> > restructure the locking so that anything coming from fs (ie.
> > show_fdinfo()) is decoupled from anything that happens in the fence
> > signalling path.  But I will repost this series once I get that sorted
> > out.
>
> So the cleanest option imo is to push most of the capturing into a worker
> that's entirely decoupled. If you have terminal contexts (i.e. on first
> hang they stop all further cmd submission, which is what
> vk/arb_robustness want anyway), then you don't have to capture at tdr time,
> because there's no subsequent batch that will wreck the state.

It is already in a worker, but we need to (a) block other contexts
from submitting while (b) using the GPU itself to capture its
state.. (yes, the way the hw works is overly complicated in this
regard)
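
(Very roughly, what the recover worker does today - heavily simplified,
names from msm_gpu.c / msm_gpu.h as I remember them:)

        struct msm_gpu_state *state;

        mutex_lock(&gpu->lock);                  /* (a) blocks new submits       */
        state = gpu->funcs->gpu_state_get(gpu);  /* (b) may drive the GPU itself */
        /* ... stash registers, comm/cmdline, etc. into the devcoredump ... */
        mutex_unlock(&gpu->lock);
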

> But it only works if your gpu ctxs don't have recoverable semantics.

We do have recoverable semantics.. but that is pretty orthogonal.  We
just need a different lock.. I have a plan to move (a copy of) the
override strings to drm_file with its own locking, decoupled from what
we need in the recovery path.. and hopefully I will finally have time to
type it up today and post it (just before disappearing off into the
woods to go backpacking ;-))
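
(Untested sketch of what I mean - the field and lock names on drm_file are
invented, the point is just that show_fdinfo() never needs gpu->lock:)

        /* in adreno_set_param(), in addition to updating the ctx copy: */
        mutex_lock(&file->client_name_lock);            /* hypothetical per-file lock */
        kfree(file->client_comm);
        file->client_comm = kstrdup(str, GFP_KERNEL);
        mutex_unlock(&file->client_name_lock);

        /* show_fdinfo() then only takes the per-file lock, so nothing in the
         * fence-signalling / recovery path can deadlock against it */
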

BR,
-R

> If you can't do that it's a _lot_ of GFP_ATOMIC and trylocks, and bailing
> out if any of them fail :-/
> -Daniel
>
> >
> > BR,
> > -R
> >
> > >
> > > > I guess I could do a similar dance to your patch to update the pid
> > > > whenever (for example) a submitqueue is created.
> > > >
> > > > > Can we go one step further and let the drm fdinfo stuff print these new
> > > > > additions? Consistency across drivers and all that.
> > > >
> > > > Hmm, I guess I could _also_ store the overridden comm/cmdline in
> > > > drm_file.  I still need to track it in ctx (msm_file_private) because
> > > > I could need it after the file is closed.
> > > >
> > > > Maybe it could be useful to have a gl extension to let the app set a
> > > > name on the context so that this is useful beyond native-ctx (ie.
> > > > maybe it would be nice to see that "chrome: lwn.net" is using less gpu
> > > > memory than "chrome: phoronix.com", etc)
> > > >
> > >
> > > /me awaits the series hitting the respective websites ;-)
> > >
> > > But seriously - the series from Tvrtko (thanks for the link, I'll
> > > check it in a moment) makes sense. Although given the lifespan issue
> > > mentioned above, I don't think it's applicable here.
> > >
> > > So if it were me, I would consider the two orthogonal for the
> > > short/mid term. Fwiw this and patch 1/3 are:
> > > Reviewed-by: Emil Velikov <emil.l.velikov@...il.com>
> > >
> > > HTH
> > > -Emil
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
