Message-ID: <a009c207-a5fa-af1e-b961-8083b48360bf@gmail.com>
Date:   Wed, 1 Jun 2022 16:38:51 +0200
From:   Christian König <ckoenig.leichtzumerken@...il.com>
To:     Sergey Senozhatsky <senozhatsky@...omium.org>
Cc:     Christian König <christian.koenig@....com>,
        Sumit Semwal <sumit.semwal@...aro.org>,
        Gustavo Padovan <gustavo@...ovan.org>,
        Tomasz Figa <tfiga@...omium.org>,
        Ricardo Ribalda <ribalda@...omium.org>,
        Christoph Hellwig <hch@...radead.org>,
        linux-media@...r.kernel.org, dri-devel@...ts.freedesktop.org,
        linaro-mm-sig@...ts.linaro.org, linux-kernel@...r.kernel.org
Subject: Re: [Linaro-mm-sig] Re: [PATCH] dma-fence: allow dma fence to have
 their own lock

On 01.06.22 at 16:27, Sergey Senozhatsky wrote:
> On (22/06/01 14:45), Christian König wrote:
>> On 31.05.22 at 04:51, Sergey Senozhatsky wrote:
>>> On (22/05/30 16:55), Christian König wrote:
>>>> On 30.05.22 at 16:22, Sergey Senozhatsky wrote:
>>>>> [SNIP]
>>>>> So the `lock` should have at least same lifespan as the DMA fence
>>>>> that borrows it, which is impossible to guarantee in our case.
>>>> Nope, that's not correct. The lock should have at least the same lifespan
>>>> as the context of the DMA fence.
>>> How does one know when it's safe to release the context? DMA fence
>>> objects are still transparently refcount-ed and "live their own lives",
>>> how does one synchronize lifespans?
>> Well, you don't.
>>
>> If you have a dynamic context structure you need to reference count that as
>> well. In other words, every time you create a fence in your context you need
>> to increment the reference count, and every time a fence is released you
>> decrement it.
> OK, then the fence release should be able to point back to its "context"
> structure. Either a "private" data pointer in the dma fence, or we "embed"
> the fence into another (refcounted) object that owns the lock and provide a
> dma fence ops->release callback, which can container_of() back to the object
> that the dma fence is embedded in.
>
> I think you are suggesting the latter. Thanks for clarifications.

Daniel might hurt me for this, but if you really only need a pointer to 
your context then we could say that using a pointer value for the 
context field is ok as well.

That should be fine as long as you can guarantee that it will be 
unique during the lifetime of all its fences.

We would just have to adjust the documentation a bit.

> The limiting factor of this approach is that now our ops->release() is
> under the same "pressure" as dma_fence_put()->dma_fence_release().
> dma_fence_put() and dma_fence_release() can be called from any context,
> as far as I understand, e.g. IRQ, however our normal object ->release
> can schedule, we do things like synchronize_rcu() and so on. Nothing is
> impossible, just saying that even this approach is not 100% perfect and
> may need additional workarounds.

Well, just use a work item for the release.

Regards,
Christian.

>> If you have a static context structure like most drivers have then you must
>> make sure that all fences at least signal before you unload your driver. We
>> still somewhat have a race when you try to unload a driver and the fence_ops
>> structure suddenly disappears, but we currently live with that.
> Hmm, indeed... I didn't consider fence_ops case.
>
>> Apart from that you are right, fences can live forever and we need to deal
>> with that.
> OK. I see.
