Message-ID: <a4adb6ac-b72e-3f9b-fc6c-2a56bc6537ce@kapsi.fi>
Date:   Wed, 29 Nov 2017 14:25:17 +0200
From:   Mikko Perttunen <cyndis@...si.fi>
To:     Dmitry Osipenko <digetx@...il.com>,
        Mikko Perttunen <mperttunen@...dia.com>,
        thierry.reding@...il.com, jonathanh@...dia.com
Cc:     dri-devel@...ts.freedesktop.org, linux-tegra@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 10/10] gpu: host1x: Optionally block when acquiring
 channel

On 29.11.2017 14:18, Dmitry Osipenko wrote:
> On 29.11.2017 12:10, Mikko Perttunen wrote:
>> On 12.11.2017 13:23, Dmitry Osipenko wrote:
>>> On 11.11.2017 00:15, Dmitry Osipenko wrote:
>>>> On 07.11.2017 18:29, Dmitry Osipenko wrote:
>>>>> On 07.11.2017 16:11, Mikko Perttunen wrote:
>>>>>> On 05.11.2017 19:14, Dmitry Osipenko wrote:
>>>>>>> On 05.11.2017 14:01, Mikko Perttunen wrote:
>>>>>>>> Add an option to host1x_channel_request to interruptibly wait for a
>>>>>>>> free channel. This allows IOCTLs that acquire a channel to block
>>>>>>>> until one becomes available.
>>>>>>>>
>>>>>>>
>>>>>>> Wouldn't it be more optimal to request a channel and block after the
>>>>>>> job's pinning, when all patching and checks are completed? Note that
>>>>>>> right now we have locking around submission in DRM, which I suppose
>>>>>>> should go away by making the locking fine-grained.
>>>>>>
>>>>>> That would be possible, but I don't think it should matter much, since
>>>>>> contention here should not be the common case.
>>>>>>
>>>>>>>
>>>>>>> Or maybe it would be more optimal to just iterate over channels, like I
>>>>>>> suggested before [0]?
>>>>>>
>>>>>> Somehow I hadn't noticed this before, but this would break the invariant of
>>>>>> having one client/class per channel.
>>>>>>
>>>>>
>>>>> Yes, currently there is only a weak relation between a channel and its
>>>>> client's device, but it seems the channel's device is only used for
>>>>> printing dev_* messages, and the device could be borrowed from the
>>>>> channel's job. I don't see any real point in hardwiring a channel to a
>>>>> specific device or client.
>>>>
>>>> Although, it won't work with syncpoint assignment to a channel.
>>>
>>> On the other hand.. it should work if one syncpoint could be assigned to
>>> multiple channels, couldn't it?
>>
>> A syncpoint can only be mapped to a single channel, so unfortunately this won't
>> work.
> Okay, in DRM we request a syncpoint on a channel's 'open', and syncpoint
> assignment happens on job submission. So the first submitted job will assign
> the syncpoint to the first channel, and a second job would re-assign the
> syncpoint to a second channel while the first job is still in progress; how
> is that going to work?
>

When a context is created, it is assigned both a syncpoint and a channel, 
and this pair persists for as long as the context is alive (i.e. as long 
as there are jobs). So even if the syncpoint is reassigned to a channel 
at every submit, it is always the same channel, and nothing breaks. 
Multiple contexts cannot share syncpoints, so things work out.

Obviously this is not ideal, as we currently never unassign syncpoints, 
but at least it is not broken.

Mikko
