Message-ID: <96200931-fb3f-40a2-8c0d-5a3609c11207@redhat.com>
Date: Thu, 9 Nov 2023 20:09:30 +0100
From: Danilo Krummrich <dakr@...hat.com>
To: Luben Tuikov <ltuikov89@...il.com>, airlied@...il.com,
daniel@...ll.ch, christian.koenig@....com
Cc: linux-kernel@...r.kernel.org, dri-devel@...ts.freedesktop.org
Subject: Re: [PATCH] drm/sched: fix potential page fault in
drm_sched_job_init()
On 11/9/23 05:23, Luben Tuikov wrote:
> On 2023-11-08 19:09, Danilo Krummrich wrote:
>> On 11/8/23 06:46, Luben Tuikov wrote:
>>> Hi,
>>>
>>> Could you please use my gmail address, the one I'm responding from--I don't want
>>> to miss any DRM scheduler patches. BTW, the luben.tuikov@....com email should bounce
>>> as undeliverable.
>>>
>>> On 2023-11-07 21:26, Danilo Krummrich wrote:
>>>> Commit 56e449603f0a ("drm/sched: Convert the GPU scheduler to variable
>>>> number of run-queues") introduces drm_err() in drm_sched_job_init(), in
>>>> order to indicate that the given entity has no runq, however at this
>>>> time job->sched is not yet set, likely to be NULL initialized, and hence
>>>> shouldn't be used.
>>>>
>>>> Replace the corresponding drm_err() call with pr_err() to avoid a
>>>> potential page fault.
>>>>
>>>> While at it, extend the documentation of drm_sched_job_init() to
>>>> indicate that job->sched is not a valid pointer until
>>>> drm_sched_job_arm() has been called.
>>>>
>>>> Fixes: 56e449603f0a ("drm/sched: Convert the GPU scheduler to variable number of run-queues")
>>>> Signed-off-by: Danilo Krummrich <dakr@...hat.com>
>>>> ---
>>>> drivers/gpu/drm/scheduler/sched_main.c | 5 ++++-
>>>> 1 file changed, 4 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
>>>> index 27843e37d9b7..dd28389f0ddd 100644
>>>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>>>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>>>> @@ -680,6 +680,9 @@ EXPORT_SYMBOL(drm_sched_resubmit_jobs);
>>>> * This function returns -ENOENT in this case (which probably should be -EIO as
>>>> * a more meanigful return value).
>>>> *
>>>> + * Note that job->sched is not a valid pointer until drm_sched_job_arm() has
>>>> + * been called.
>>>> + *
>>>
>>> Good catch!
>>>
>>> Did you actually get this to page-fault and have a kernel log?
>>
>> No, I just found it because I was about to make the same mistake.
>>
>>>
>>> I'm asking because we see it correctly set in this kernel log coming from AMD,
>>
>> I think that's because amdgpu just sets job->sched to *some* scheduler instance after
>> job allocation [1].
>>
>> [1] https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c#L108
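(For reference, a minimal sketch of that kind of driver-side workaround, paraphrasing the
idea behind [1] rather than quoting the exact upstream code -- the field and ring names here
are only illustrative:

	/* Point job->sched at some valid scheduler right after allocation,
	 * so that paths which dereference it before drm_sched_job_arm()
	 * don't crash; the scheduler the job actually runs on is still
	 * picked later, when the job is armed.
	 */
	(*job)->base.sched = &adev->rings[0]->sched;

i.e. the pointer is valid, but not necessarily the scheduler the job ends up on.)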
>>
>>>
>>> [ 11.886024] amdgpu 0000:0a:00.0: [drm] *ERROR* drm_sched_job_init: entity has no rq!
>>>
>>> in this email,
>>> https://lore.kernel.org/r/CADnq5_PS64jYS_Y3kGW27m-kuWP+FQFiaVcOaZiB=JLSgPnXBQ@mail.gmail.com
>>>
>>>> * Returns 0 for success, negative error code otherwise.
>>>> */
>>>> int drm_sched_job_init(struct drm_sched_job *job,
>>>> @@ -691,7 +694,7 @@ int drm_sched_job_init(struct drm_sched_job *job,
>>>> * or worse--a blank screen--leave a trail in the
>>>> * logs, so this can be debugged easier.
>>>> */
>>>> - drm_err(job->sched, "%s: entity has no rq!\n", __func__);
>>>> + pr_err("%s: entity has no rq!\n", __func__);
>>>
>>> Is it feasible to do something like the following?
>>>
>>> dev_err(job->sched ? job->sched->dev : NULL, "%s: entity has no rq!\n", __func__);
>>
>> I don't think that's a good idea. Although I'd assume that every driver zero-initializes its job
>> structures, I can't see a rule enforcing that. Hence, job->sched can be a random value until
>> drm_sched_job_arm() is called.
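(To illustrate with a hypothetical driver snippet -- the names and the exact
drm_sched_job_init() argument list are only for illustration:

	struct driver_job {
		struct drm_sched_job base;
		/* ... driver-specific state ... */
	};

	struct driver_job *job = kmalloc(sizeof(*job), GFP_KERNEL);

	/* job->base.sched is whatever kmalloc() left in that memory: usually
	 * not NULL, just garbage. A "job->sched ? job->sched->dev : NULL"
	 * check would therefore happily dereference a bogus pointer.
	 */
	ret = drm_sched_job_init(&job->base, entity, owner);

Only kzalloc()/explicit zero-initialization would make a NULL check reliable, and nothing
requires drivers to allocate that way.)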
>
> Okay. However, when using pr_err() we're losing the "[drm] *ERROR* " prefix, and we scan for that
> in the logs to quickly find the cause of the error.
>
> Perhaps we can define pr_fmt() and also include "*ERROR*" so that we can get the desired result
> as the attached patch shows?
Sure, I'd add the pr_fmt() in a separate patch though.
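For what it's worth, one possible shape of that (just a sketch of the idea; the attached
patch may well structure it differently) would be, at the top of sched_main.c before the
includes:

	/* Prefix all pr_*() output from this file with the usual DRM tag,
	 * so log scans keep working even when no struct drm_device is
	 * available yet.
	 */
	#define pr_fmt(fmt)	"[drm] " fmt

and then keep the "*ERROR*" marker in the message itself:

	pr_err("*ERROR* %s: entity has no rq!\n", __func__);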