Message-ID: <6e7f0d52-8d31-2ebb-47c3-865d676e8158@amd.com>
Date:   Thu, 26 Apr 2018 16:43:37 -0400
From:   Andrey Grodzovsky <Andrey.Grodzovsky@....com>
To:     "Eric W. Biederman" <ebiederm@...ssion.com>
Cc:     Oleg Nesterov <oleg@...hat.com>, linux-kernel@...r.kernel.org,
        amd-gfx@...ts.freedesktop.org, Alexander.Deucher@....com,
        Christian.Koenig@....com, David.Panariti@....com,
        akpm@...ux-foundation.org
Subject: Re: [PATCH 2/3] drm/scheduler: Don't call wait_event_killable for
 signaled process.



On 04/26/2018 11:57 AM, Eric W. Biederman wrote:
> Andrey Grodzovsky <Andrey.Grodzovsky@....com> writes:
>
>> On 04/26/2018 08:34 AM, Andrey Grodzovsky wrote:
>>>
>>> On 04/25/2018 08:01 PM, Eric W. Biederman wrote:
>>>> Andrey Grodzovsky <Andrey.Grodzovsky@....com> writes:
>>>>
>>>>> On 04/25/2018 01:17 PM, Oleg Nesterov wrote:
>>>>>> On 04/25, Andrey Grodzovsky wrote:
>>>>>>> here (drm_sched_entity_fini) is also a bad idea, but we still want to
>>>>>>> be able to exit immediately and not wait for GPU job completion when
>>>>>>> the reason for reaching this code is a KILL signal to the user process
>>>>>>> that opened the device file.
>>>>>> Can you hook the f_op->flush method?
>>>>> But this one is called for each task releasing a reference to the file,
>>>>> so I'm not sure I see how this solves the problem.
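(A minimal sketch of the distinction under discussion; the driver names below are hypothetical, not actual amdgpu or scheduler code.  The point is that .flush runs on every close() of the file descriptor, including each exiting task's implicit close, while .release runs only once, when the last reference to the struct file is dropped.)

#include <linux/fs.h>
#include <linux/module.h>

static int my_drm_flush(struct file *file, fl_owner_t id)
{
        /* Called for every task that closes (or exits while holding) the fd;
         * a wait placed here blocks each closer, not just the last one. */
        return 0;
}

static int my_drm_release(struct inode *inode, struct file *file)
{
        /* Called exactly once, when the final reference to the file drops. */
        return 0;
}

static const struct file_operations my_drm_fops = {
        .owner   = THIS_MODULE,
        .flush   = my_drm_flush,
        .release = my_drm_release,
};
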
>>>> The big question is why do you need to wait during the final closing of a
>>>> file?
>>>>
>>>> The wait can be terminated, so the wait does not appear to be simply a
>>>> matter of correctness.
>>> Well, as I understand it, it just means that you don't want to abruptly
>>> terminate GPU work in progress without a good reason (such as a KILL
>>> signal). When we exit we are going to release various resources the GPU
>>> is still using, so we either wait for it to complete or terminate the
>>> remaining jobs.
> At the point of do_exit you might as well have received a KILL signal,
> however you got there.
>
>> Looked more into the code; a correction: drm_sched_entity_fini means the SW
>> job queue itself is about to die, so we must either wait for completion or
>> terminate any outstanding jobs that are still in the SW queue. Anything that
>> is already in flight in HW will still complete.
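(A rough sketch of the two teardown paths being weighed here; the helpers are hypothetical placeholders, not the actual drm_sched_entity_fini code.  Only fatal_signal_pending(), current and struct drm_sched_entity are real kernel interfaces.)

#include <linux/sched/signal.h>
#include <drm/gpu_scheduler.h>

/* Hypothetical helpers standing in for the real drain / cleanup logic. */
static void wait_for_entity_idle_killable(struct drm_sched_entity *entity);
static void drop_queued_jobs(struct drm_sched_entity *entity);

static void entity_teardown_sketch(struct drm_sched_entity *entity)
{
        if (!fatal_signal_pending(current)) {
                /* Drain: block (killably) until the scheduler has consumed
                 * every job still sitting in the entity's SW queue. */
                wait_for_entity_idle_killable(entity);
        } else {
                /* Process was killed: do not block; free the jobs that never
                 * reached the hardware.  Jobs already submitted to the HW
                 * ring will still complete on their own. */
                drop_queued_jobs(entity);
        }
}
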
> It sounds like we don't care if we block the process that had the file
> descriptor open; this is just bookkeeping.  That allows having a piece
> of code that cleans up resources when the GPU is done with the queue but
> does not make userspace wait (option 1).
>
> For it to make sense that we let the process run, there has to be
> something that cares about the results being completed.  If all of the
> file descriptors are closed and the process is killed, I can't see who
> will care whether the software queue continues to be processed.  So it
> may be reasonable to simply kill the queue (option 2).
>
> If userspace really needs the wait it is probably better done in
> f_op->flush so that every close of the file descriptor blocks
> until the queue is flushed (option 3).
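(If the wait does turn out to matter, option 3 could look roughly like the sketch below.  The per-open structure, wait queue and queue-empty check are hypothetical; only the .flush hook signature and wait_event_killable() are real kernel interfaces.)

#include <linux/fs.h>
#include <linux/wait.h>

struct my_drm_file {                        /* hypothetical per-open state */
        wait_queue_head_t entity_idle;
};

static bool my_queue_is_empty(struct my_drm_file *priv);  /* hypothetical */

static int my_drm_flush_blocking(struct file *filp, fl_owner_t id)
{
        struct my_drm_file *priv = filp->private_data;

        /* Every close of the fd blocks here until the SW queue drains;
         * killable, so a task hit by SIGKILL is not stuck forever. */
        return wait_event_killable(priv->entity_idle,
                                   my_queue_is_empty(priv));
}
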
>
> Do you know if userspace cares about the gpu operations completing?

I don't have a good answer for that; I would assume it depends on the type
of jobs still left unprocessed and on the general kind of work the user
process is doing.

Some key people who can answer this are currently away for a few days to a
week, so the answer will have to wait a bit.

Andrey

>
> My skim of the code suggests that nothing actually cares about those
> operations, but I really don't know the gpu well.
>
> Eric
