Message-ID: <CAP+8YyEkp7PuFZEy0_zVUsJem8dCjWpuznJ4Ysaa2JoXs7iGVQ@mail.gmail.com>
Date: Tue, 2 May 2023 11:30:05 +0200
From: Bas Nieuwenhuizen <bas@...nieuwenhuizen.nl>
To: Timur Kristóf <timur.kristof@...il.com>
Cc: Christian König <christian.koenig@....com>,
André Almeida <andrealmeid@...lia.com>,
Alex Deucher <alexdeucher@...il.com>,
dri-devel <dri-devel@...ts.freedesktop.org>,
amd-gfx list <amd-gfx@...ts.freedesktop.org>,
linux-kernel@...r.kernel.org,
"Pelloux-Prayer, Pierre-Eric" <pierre-eric.pelloux-prayer@....com>,
Marek Olšák <maraeo@...il.com>,
michel.daenzer@...lbox.org,
Samuel Pitoiset <samuel.pitoiset@...il.com>,
kernel-dev@...lia.com,
"Deucher, Alexander" <alexander.deucher@....com>
Subject: Re: [RFC PATCH 0/1] Add AMDGPU_INFO_GUILTY_APP ioctl
On Tue, May 2, 2023 at 11:12 AM Timur Kristóf <timur.kristof@...il.com> wrote:
>
> Hi Christian,
>
> On Tue, 2 May 2023 at 9:59, Christian König <christian.koenig@....com> wrote:
>>
>> On 02.05.23 at 03:26, André Almeida wrote:
>> > On 01/05/2023 16:24, Alex Deucher wrote:
>> >> On Mon, May 1, 2023 at 2:58 PM André Almeida <andrealmeid@...lia.com>
>> >> wrote:
>> >>>
>> >>> I know that devcoredump is also used for this kind of information,
>> >>> but I believe that using an IOCTL is better for interfacing
>> >>> Mesa + Linux than parsing a file whose contents are subject to change.
>> >>
>> >> Can you elaborate a bit on that? Isn't the whole point of devcoredump
>> >> to store this sort of information?
>> >>
>> >
>> > I think that devcoredump is something that you could use to submit to
>> > a bug report as it is, and then people can read/parse as they want,
>> > not as an interface to be read by Mesa... I'm not sure that it's
>> > something that I would call an API. But I might be wrong; if you know
>> > something that uses it as an API, please share.
>> >
>> > Anyway, relying on that for Mesa would mean that we would need to
>> > ensure stability for the file content and format, making it less
>> > flexible to modify in the future and more prone to bugs, while the IOCTL is
>> > well defined and extensible. Maybe the dump from Mesa + devcoredump
>> > could be complementary information to a bug report.
>>
>> Neither an IOCTL nor devcoredump is a good approach for this, since
>> the values read from the hw registers are completely unreliable. They
>> might not be available because of GFXOFF, or they could be overwritten,
>> or never updated by the CP in the first place because of a hang, etc.
>>
>> If you want to track progress inside an IB what you do instead is to
>> insert intermediate fence write commands into the IB. E.g. something
>> like write value X to location Y when this executes.
>>
>> This way you can not only track how far the IB was processed, but also
>> which stage of processing we were in when the hang occurred, e.g. End of
>> Pipe, End of Shaders, specific shader stages, etc.
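As a toy, self-contained illustration of that idea (the packet encoding, struct
and helper names below are invented for the example, not the real PM4 layout or
any driver's emission API):

/* Toy version of "write value X to location Y when this executes":
 * each breadcrumb is a small packet appended to the IB that asks the
 * CP to store a stage marker into a trace buffer slot.  After a hang,
 * reading that slot back tells you the last stage that completed. */
#include <stdint.h>
#include <stdio.h>

#define FAKE_OP_WRITE_DATA 0x42u  /* placeholder opcode, not real PM4 */

struct toy_ib {
    uint32_t dwords[256];
    unsigned count;
};

static void emit_breadcrumb(struct toy_ib *ib, uint64_t dst_va, uint32_t value)
{
    ib->dwords[ib->count++] = FAKE_OP_WRITE_DATA;
    ib->dwords[ib->count++] = (uint32_t)dst_va;         /* address low  */
    ib->dwords[ib->count++] = (uint32_t)(dst_va >> 32); /* address high */
    ib->dwords[ib->count++] = value;                    /* stage marker */
}

int main(void)
{
    struct toy_ib ib = {0};
    uint64_t trace_va = 0x100000;      /* GPU VA of a trace buffer slot */

    emit_breadcrumb(&ib, trace_va, 1); /* IB started                    */
    /* ... the real draw/dispatch packets would go here ...             */
    emit_breadcrumb(&ib, trace_va, 2); /* end of shaders for this draw  */
    emit_breadcrumb(&ib, trace_va, 3); /* end of pipe                   */

    printf("emitted %u dwords\n", ib.count);
    return 0;
}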
>
>
> Currently our biggest challenge in the userspace driver is debugging "random" GPU hangs. We have many dozens of bug reports from users along the lines of: "play the game for X hours and it will eventually hang the GPU". With the currently available tools, it is impossible for us to tackle these issues. André's proposal would be a step toward improving this situation.
>
> We already do something like what you suggest, but there are multiple problems with that approach:
>
> 1. We can only submit 1 command buffer at a time because we won't know which IB hung
> 2. We can't use chaining because we don't know where in the IB it hung
> 3. It needs userspace to insert (a lot of) extra commands such as extra synchronization and memory writes
> 4. It doesn't work when GPU recovery is enabled because the information is already gone when we detect the hang
>
> Consequences:
>
> A. It has a huge perf impact, so we can't always enable it
> B. Because of the extra synchronization, some issues can't be reproduced when this kind of debugging is enabled
> C. We have to ask users to disable GPU recovery to collect logs for us
I think the problem is that the hang debugging in radv combines too
many things. The information here can easily be obtained by adding a
breadcrumb at the start of the cmdbuffer that stores the IB address (or
even just the cmdbuffer CPU pointer) in the trace buffer. That should be
approximately zero overhead and would give us the same info as this.
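A similarly invented sketch of the lookup side, where the breadcrumb value is
the IB address so a hang handler can identify the offending cmdbuffer (struct
and function names are made up, not radv's actual trace buffer code):

/* Sketch of the hang-time side: the breadcrumb written at the start of
 * each cmdbuffer is the IB GPU address, so after a hang one read of the
 * trace buffer slot tells you which cmdbuffer was being executed. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

struct toy_trace_buffer {
    volatile uint64_t current_ib_va; /* written by the GPU breadcrumb */
};

void report_hung_ib(const struct toy_trace_buffer *tb,
                    const uint64_t *submitted_ib_vas, unsigned num_ibs)
{
    uint64_t hung_va = tb->current_ib_va;

    for (unsigned i = 0; i < num_ibs; i++) {
        if (submitted_ib_vas[i] == hung_va) {
            printf("hang while executing IB %u (va 0x%" PRIx64 ")\n",
                   i, hung_va);
            return;
        }
    }
    printf("hang in an unknown IB (va 0x%" PRIx64 ")\n", hung_va);
}

int main(void)
{
    struct toy_trace_buffer tb = { .current_ib_va = 0x2000 };
    uint64_t submitted[] = { 0x1000, 0x2000, 0x3000 };

    report_hung_ib(&tb, submitted, 3);
    return 0;
}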
I tried to remove (1/2) at some point because with a breadcrumb like
the above I don't think it is necessary, but I think Samuel was
against it at the time? As for all the other synchronization, that is
for figuring out which part of the IB hung (e.g. without barriers the
IB processing might have moved past the hanging shader already), and I
don't think this kernel mechanism changes that.

So if we want to make this low overhead, we can do it already without
new kernel support; we just need to rework radv a bit.
>
> In my opinion, the correct solution to those problems would be if the kernel could give userspace the necessary information about a GPU hang before a GPU reset. To avoid the massive performance cost, it would be best if we could know which IB hung and which commands were being executed when it hung (perhaps pointers to the VA of the commands), along with which shaders were in flight (perhaps pointers to the VA of the shader binaries).
>
> If such an interface could be created, that would mean we could easily query this information and create useful logs of GPU hangs without much userspace overhead and without requiring the user to disable GPU resets etc.
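Purely to illustrate the kind of data being described here, a hang-info record
could carry something like the following (this layout is invented; it is not
what the RFC's AMDGPU_INFO_GUILTY_APP ioctl returns):

/* Invented example of a hang-info record; the real uapi may look
 * nothing like this. */
#include <stdint.h>

#define EXAMPLE_MAX_SHADERS 16

struct example_hang_info {
    uint64_t hung_ib_va;      /* GPU VA of the IB that hung            */
    uint64_t hung_cmd_va;     /* VA of the command being executed      */
    uint32_t num_shaders;     /* shaders in flight when the hang hit   */
    uint32_t pad;
    uint64_t shader_va[EXAMPLE_MAX_SHADERS]; /* VAs of shader binaries */
};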
>
> If it's not possible to do this, we'd appreciate some suggestions on how to properly solve this without the massive performance cost and without requiring the user to disable GPU recovery.
>
> Side note, it is also extremely difficult to even determine whether the problem is in userspace or the kernel. While kernel developers usually dismiss all GPU hangs as userspace problems, we've seen many issues where the problem was in the kernel (e.g. bugs where wrong voltages were set, etc.). Any ideas for tackling those kinds of issues are also welcome.
>
> Thanks & best regards,
> Timur