Message-ID: <615cd01e-8c8e-910d-8f04-5576ab986ac0@amd.com>
Date: Wed, 25 Apr 2018 09:43:48 -0400
From: Andrey Grodzovsky <Andrey.Grodzovsky@....com>
To: Michel Dänzer <michel@...nzer.net>,
linux-kernel@...r.kernel.org, amd-gfx@...ts.freedesktop.org,
dri-devel@...ts.freedesktop.org, David.Panariti@....com,
oleg@...hat.com, ebiederm@...ssion.com, Alexander.Deucher@....com,
akpm@...ux-foundation.org, Christian.Koenig@....com
Subject: Re: [PATCH 2/3] drm/scheduler: Don't call wait_event_killable for
signaled process.
On 04/24/2018 05:40 PM, Daniel Vetter wrote:
> On Tue, Apr 24, 2018 at 05:02:40PM -0400, Andrey Grodzovsky wrote:
>>
>> On 04/24/2018 03:44 PM, Daniel Vetter wrote:
>>> On Tue, Apr 24, 2018 at 05:46:52PM +0200, Michel Dänzer wrote:
>>>> Adding the dri-devel list, since this is driver independent code.
>>>>
>>>>
>>>> On 2018-04-24 05:30 PM, Andrey Grodzovsky wrote:
>>>>> Avoid calling wait_event_killable when you are possibly being called
>>>>> from get_signal routine since in that case you end up in a deadlock
>>>>> where you are alreay blocked in singla processing any trying to wait
>>>> Multiple typos here, "[...] already blocked in signal processing and [...]"?
>>>>
>>>>
>>>>> on a new signal.
>>>>>
>>>>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@....com>
>>>>> ---
>>>>> drivers/gpu/drm/scheduler/gpu_scheduler.c | 5 +++--
>>>>> 1 file changed, 3 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/gpu_scheduler.c
>>>>> index 088ff2b..09fd258 100644
>>>>> --- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
>>>>> +++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
>>>>> @@ -227,9 +227,10 @@ void drm_sched_entity_do_release(struct drm_gpu_scheduler *sched,
>>>>> return;
>>>>> /**
>>>>> * The client will not queue more IBs during this fini, consume existing
>>>>> - * queued IBs or discard them on SIGKILL
>>>>> + * queued IBs or discard them when in death signal state since
>>>>> + * wait_event_killable can't receive signals in that state.
>>>>> */
>>>>> - if ((current->flags & PF_SIGNALED) && current->exit_code == SIGKILL)
>>>>> + if (current->flags & PF_SIGNALED)
>>> You want fatal_signal_pending() here, instead of inventing your own broken
>>> version.
>> I rely on current->flags & PF_SIGNALED because it is set from within
>> get_signal(), meaning the task is already inside signal processing, in
>> which case I want to avoid any signal-based wait for that task.
>> From what I see in the code, task_struct.pending.signal is set for the
>> other threads in the same group (zap_other_threads) or in other
>> scenarios; those tasks are still able to receive signals, so calling
>> wait_event_killable there is not a problem.
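To spell out the distinction above - this is a simplified sketch based on my
reading of kernel/signal.c and include/linux/sched/signal.h, not verbatim
kernel code, and the helper names are made up for illustration:

#include <linux/sched.h>
#include <linux/sched/signal.h>

/*
 * Case 1: the thread that dequeued the fatal signal itself.  get_signal()
 * removes the signal from the pending set and marks the task with
 * PF_SIGNALED right before starting exit processing, so
 * fatal_signal_pending() is already false here even though the task is
 * dying - this is the case the patch checks for.
 */
static bool dying_inside_signal_delivery(struct task_struct *tsk)
{
	return tsk->flags & PF_SIGNALED;
}

/*
 * Case 2: a sibling thread killed via zap_other_threads().  SIGKILL is
 * still queued in tsk->pending, so fatal_signal_pending() is true and
 * wait_event_killable() returns -ERESTARTSYS right away - no special
 * handling is needed for those.
 */
static bool dying_with_sigkill_still_pending(struct task_struct *tsk)
{
	/* Roughly what fatal_signal_pending(tsk) boils down to. */
	return signal_pending(tsk) &&
	       sigismember(&tsk->pending.signal, SIGKILL);
}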
>>>>> entity->fini_status = -ERESTARTSYS;
>>>>> else
>>>>> entity->fini_status = wait_event_killable(sched->job_scheduled,
>>> But really this smells like a bug in wait_event_killable, since
>>> wait_event_interruptible does not suffer from the same bug. It will return
>>> immediately when there's a signal pending.
>> Even when wait_event_interruptible is called as follows -
>> ...->do_signal->get_signal->....->wait_event_interruptible ?
>> I haven't tried it, but wait_event_interruptible is very similar to
>> wait_event_killable, so I would assume it will likewise not be
>> interrupted if called like that. (Will give it a try out of curiosity
>> anyway.)
> wait_event_killable doesn't check for fatal_signal_pending before calling
> schedule, so it definitely has a nice race there.
>
> But if you're sure that you really need to check PF_SIGNALED, then I'm
> honestly not clear on what you're trying to pull off here. Your sparse
> explanation of what happens isn't enough, since I have no idea how you can
> get from get_signal() to the above wait_event_killable callsite.
A fatal signal will trigger process termination, during which all FDs are
released, including DRM's.
See here -
[<0>] drm_sched_entity_fini+0x10a/0x3a0 [gpu_sched]
[<0>] amdgpu_ctx_do_release+0x129/0x170 [amdgpu]
[<0>] amdgpu_ctx_mgr_fini+0xd5/0xe0 [amdgpu]
[<0>] amdgpu_driver_postclose_kms+0xcd/0x440 [amdgpu]
[<0>] drm_release+0x414/0x5b0 [drm]
[<0>] __fput+0x176/0x350
[<0>] task_work_run+0xa1/0xc0
(From Eric's explanation, the above is triggered by do_exit->exit_files.)
...
[<0>] do_exit+0x48f/0x1280
[<0>] do_group_exit+0x89/0x140
[<0>] get_signal+0x375/0x8f0
[<0>] do_signal+0x79/0xaa0
[<0>] exit_to_usermode_loop+0x83/0xd0
[<0>] do_syscall_64+0x244/0x270
[<0>] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
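For reference, here is a condensed sketch of the wait_event_killable() loop
(the real macro in include/linux/wait.h goes through ___wait_event() and
prepare_to_wait_event(); this is only meant to illustrate why nothing
interrupts the wait on the backtrace above, not the exact code):

#include <linux/sched.h>
#include <linux/sched/signal.h>
#include <linux/wait.h>

static long wait_event_killable_sketch(bool (*condition)(void))
{
	long ret = 0;

	/* The waitqueue add/remove plumbing is elided for brevity. */
	for (;;) {
		set_current_state(TASK_KILLABLE);
		if (condition())
			break;
		/*
		 * Besides the condition, the only way out is a *pending*
		 * fatal signal.  On the backtrace above the task is already
		 * past get_signal() and inside do_exit(), so (per the commit
		 * message) no further signal is deliverable to it and this
		 * check never fires; the task just sleeps until the
		 * scheduler eventually signals job_scheduled, if it ever
		 * does.
		 */
		if (fatal_signal_pending(current)) {
			ret = -ERESTARTSYS;
			break;
		}
		schedule();
	}
	__set_current_state(TASK_RUNNING);
	return ret;
}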
Andrey
> -Daniel
>
>> Andrey
>>
>>> I think this should be fixed in core code, not papered over in some
>>> subsystem.
>>> -Daniel
>>>
>>>> --
>>>> Earthling Michel Dänzer | http://www.amd.com
>>>> Libre software enthusiast | Mesa and X developer
>>>> _______________________________________________
>>>> dri-devel mailing list
>>>> dri-devel@...ts.freedesktop.org
>>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>> _______________________________________________
>> dri-devel mailing list
>> dri-devel@...ts.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/dri-devel