Message-ID: <CAF6AEGtPrSdj=7AP1_puR+OgmL-qro0mWZDNngtaVPxpaCM76A@mail.gmail.com>
Date: Thu, 17 Mar 2022 11:25:31 -0700
From: Rob Clark <robdclark@...il.com>
To: Andrey Grodzovsky <andrey.grodzovsky@....com>
Cc: Christian König <christian.koenig@....com>,
dri-devel <dri-devel@...ts.freedesktop.org>,
freedreno <freedreno@...ts.freedesktop.org>,
linux-arm-msm <linux-arm-msm@...r.kernel.org>,
Rob Clark <robdclark@...omium.org>,
Sean Paul <sean@...rly.run>,
Abhinav Kumar <quic_abhinavk@...cinc.com>,
David Airlie <airlied@...ux.ie>,
Akhil P Oommen <quic_akhilpo@...cinc.com>,
Jonathan Marek <jonathan@...ek.ca>,
AngeloGioacchino Del Regno
<angelogioacchino.delregno@...labora.com>,
Bjorn Andersson <bjorn.andersson@...aro.org>,
Vladimir Lypak <vladimir.lypak@...il.com>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/3] drm/msm/gpu: Park scheduler threads for system suspend
On Thu, Mar 17, 2022 at 11:10 AM Andrey Grodzovsky
<andrey.grodzovsky@....com> wrote:
>
>
> On 2022-03-17 13:35, Rob Clark wrote:
> > On Thu, Mar 17, 2022 at 9:45 AM Christian König
> > <christian.koenig@....com> wrote:
> >> Am 17.03.22 um 17:18 schrieb Rob Clark:
> >>> On Thu, Mar 17, 2022 at 9:04 AM Christian König
> >>> <christian.koenig@....com> wrote:
> >>>> Am 17.03.22 um 16:10 schrieb Rob Clark:
> >>>>> [SNIP]
> >>>>> userspace frozen != kthread frozen .. that is what this patch is
> >>>>> trying to address, so we aren't racing between shutting down the hw
> >>>>> and the scheduler shoveling more jobs at us.
> >>>> Well, exactly that's the problem. The scheduler is supposed to keep
> >>>> shoveling more jobs at us until it is empty.
> >>>>
> >>>> Thinking more about it, we will then keep some dma_fence instance
> >>>> unsignaled, and that is an extremely bad idea since it can lead to
> >>>> deadlocks during suspend.
> >>> Hmm, perhaps that is true if you need to migrate things out of vram?
> >>> It is at least not a problem when vram is not involved.
> >> No, it's much wider than that.
> >>
> >> See, what can happen is that the memory management shrinkers want to wait
> >> for a dma_fence during suspend.
> > we don't wait on fences in the shrinker, only purging or evicting things
> > that are already idle. Actually, waiting on fences in the shrinker path
> > sounds like a pretty bad idea.
> >
> >> And if you stop the scheduler they will just wait forever.
> >>
> >> What you need to do instead is to drain the scheduler, e.g. call
> >> drm_sched_entity_flush() with a proper timeout for each entity you have
> >> created.
> > yeah, it would work to drain the scheduler.. I guess that might be the
> > more portable approach as far as a generic solution for suspend goes.
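(I'd imagine draining would look roughly like this in the driver's
suspend path.. untested sketch, the entity wrapper/list and the priv
struct are stand-ins since how entities are tracked is driver-specific:)

#include <linux/errno.h>
#include <linux/jiffies.h>
#include <drm/gpu_scheduler.h>

struct sketch_entity {
	struct drm_sched_entity base;
	struct list_head node;		/* on priv->entities */
};

/* Drain all entities before shutting down the hw.  Note that by
 * itself this doesn't stop new jobs from being queued, which is the
 * question Andrey raises below.
 */
static int sketch_drain_scheduler(struct sketch_drm_private *priv)
{
	struct sketch_entity *e;
	long timeout = msecs_to_jiffies(1000);	/* arbitrary */

	list_for_each_entry(e, &priv->entities, node) {
		timeout = drm_sched_entity_flush(&e->base, timeout);
		if (timeout <= 0)
			return -ETIMEDOUT;
	}

	return 0;
}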
> >
> > BR,
> > -R
>
>
> I am not sure how this drains the scheduler. Suppose we have done the
> waiting in drm_sched_entity_flush, what prevents someone from pushing
> another job into the same entity's queue right after that?
> Shouldn't we first disable further pushing of jobs into the entity
> before we wait for sched->job_scheduled?
>
In the system suspend path, userspace processes will have already been
frozen, so there should be no way to push more jobs to the scheduler,
unless they are pushed from the kernel itself. We don't do that in
drm/msm, but maybe you need to move things between vram and system
memory? But even in that case, if the number of jobs you push is
bounded, I guess that is ok?
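Fwiw, what this patch does boils down to parking the scheduler kthreads
across system suspend, roughly (simplified sketch, msm_gpu/ring naming
as in drm/msm, error handling trimmed):

#include <linux/kthread.h>

static void sketch_suspend_scheduler(struct msm_gpu *gpu)
{
	int i;

	/* Parked kthreads stop pulling jobs off the queues, so the hw
	 * can be shut down without racing against the scheduler.
	 */
	for (i = 0; i < gpu->nr_rings; i++)
		kthread_park(gpu->rb[i]->sched.thread);
}

static void sketch_resume_scheduler(struct msm_gpu *gpu)
{
	int i;

	for (i = 0; i < gpu->nr_rings; i++)
		kthread_unpark(gpu->rb[i]->sched.thread);
}

Parking is also trivially symmetric for resume, whereas draining still
needs the same no-new-jobs guarantee either way.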
BR,
-R