Date:   Wed, 29 Nov 2017 15:37:51 +0300
From:   Dmitry Osipenko <digetx@...il.com>
To:     Mikko Perttunen <cyndis@...si.fi>,
        Mikko Perttunen <mperttunen@...dia.com>,
        thierry.reding@...il.com, jonathanh@...dia.com
Cc:     dri-devel@...ts.freedesktop.org, linux-tegra@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 10/10] gpu: host1x: Optionally block when acquiring
 channel

On 29.11.2017 15:25, Mikko Perttunen wrote:
> On 29.11.2017 14:18, Dmitry Osipenko wrote:
>> On 29.11.2017 12:10, Mikko Perttunen wrote:
>>> On 12.11.2017 13:23, Dmitry Osipenko wrote:
>>>> On 11.11.2017 00:15, Dmitry Osipenko wrote:
>>>>> On 07.11.2017 18:29, Dmitry Osipenko wrote:
>>>>>> On 07.11.2017 16:11, Mikko Perttunen wrote:
>>>>>>> On 05.11.2017 19:14, Dmitry Osipenko wrote:
>>>>>>>> On 05.11.2017 14:01, Mikko Perttunen wrote:
>>>>>>>>> Add an option to host1x_channel_request to interruptibly wait for a
>>>>>>>>> free channel. This allows IOCTLs that acquire a channel to block
>>>>>>>>> userspace.
>>>>>>>>>
>>>>>>>>
>>>>>>>> Wouldn't it be more optimal to request the channel and block after the
>>>>>>>> job's pinning, when all patching and checks are completed? Note that
>>>>>>>> right now we have locking around submission in DRM, which I suppose
>>>>>>>> should go away by making the locking fine-grained.
>>>>>>>
>>>>>>> That would be possible, but I don't think it should matter much, since
>>>>>>> contention here should not be the common case.
>>>>>>>
>>>>>>>>
>>>>>>>> Or maybe it would be more optimal to just iterate over channels, like I
>>>>>>>> suggested before [0]?
>>>>>>>
>>>>>>> Somehow I hadn't noticed this before, but this would break the invariant of
>>>>>>> having one client/class per channel.
>>>>>>>
>>>>>>
>>>>>> Yes, currently there is a weak relation between a channel and the client's
>>>>>> device, but it seems the channel's device is only used for printing dev_*
>>>>>> messages, and the device could be borrowed from the channel's job. I don't
>>>>>> see any real point in hardwiring a channel to a specific device or client.
>>>>>
>>>>> Although, it won't work with syncpoint assignment to channel.
>>>>
>>>> On the other hand.. it should work if one syncpoint could be assigned to
>>>> multiple channels, couldn't it?
>>>
>>> A syncpoint can only be mapped to a single channel, so unfortunately this won't
>>> work.
>> Okay, in DRM we request a syncpoint on the channel's 'open', and syncpoint
>> assignment happens on job submission. So the first submitted job will assign
>> the syncpoint to the first channel, and a second job would re-assign the
>> syncpoint to a second channel while the first job is still in progress; how
>> is that going to work?
>>
> 
> When a context is created, it is assigned both a syncpoint and a channel, and
> this pair stays for as long as the context is alive (i.e. as long as there are
> jobs), so even if the syncpoint is reassigned to the channel at every submit,
> it is always assigned to the same channel, and nothing breaks. Multiple
> contexts cannot share syncpoints, so things work out.
> 
> Obviously this is not ideal, as we currently never unassign syncpoints, but at
> least it is not broken.

Right, I forgot that you made tegra_drm_context_get_channel() re-use the
requested channel if there are pending jobs.
