Date:   Wed, 4 Mar 2020 09:34:22 +0100
From:   Christian König <christian.koenig@....com>
To:     Jason Ekstrand <jason@...kstrand.net>
Cc:     Bas Nieuwenhuizen <bas@...nieuwenhuizen.nl>,
        Dave Airlie <airlied@...hat.com>,
        Jesse Hall <jessehall@...gle.com>,
        James Jones <jajones@...dia.com>,
        Daniel Stone <daniels@...labora.com>,
        Kristian Høgsberg <hoegsberg@...gle.com>,
        Sumit Semwal <sumit.semwal@...aro.org>,
        Chenbo Feng <fengc@...gle.com>,
        Greg Hackmann <ghackmann@...gle.com>,
        linux-media@...r.kernel.org,
        Mailing list - DRI developers 
        <dri-devel@...ts.freedesktop.org>, linaro-mm-sig@...ts.linaro.org,
        LKML <linux-kernel@...r.kernel.org>,
        Daniel Vetter <daniel.vetter@...ll.ch>
Subject: Re: [PATCH] RFC: dma-buf: Add an API for importing and exporting sync
 files

Am 03.03.20 um 20:10 schrieb Jason Ekstrand:
> On Thu, Feb 27, 2020 at 2:28 AM Christian König
> <christian.koenig@....com> wrote:
>> [SNIP]
>>> However, I'm not sure what the best way is to do garbage collection on
>>> that so that we don't get an impossibly long list of fence arrays.
>> Exactly yes. That's also the reason why the dma_fence_chain container I
>> came up with for the sync timeline stuff has such a sophisticated
>> garbage collection.
>>
>> When some of the included fences signal, you need to free up the
>> array/chain and make sure that the memory for the container can be reused.
> Currently (as of v2), I'm using dma_fence_array and being careful to
> not bother constructing one if there's only one fence in play.  Is
> this insufficient?  If so, maybe we should consider improving
> dma_fence_array.

That still won't work correctly in all cases. See, the problem is not
only optimization, but also preventing userspace from abusing the
interface to do nasty things.

For example, if userspace just calls that function in a loop, it can
create a long chain of dma_fence_array objects.

If that chain is then suddenly released, the recursive dropping of
references can overflow the kernel stack.
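
A minimal userspace sketch of the hazard, with a hypothetical toy "node"
type standing in for dma_fence_array (not the dma-buf code): each
container holds a reference to the previous one and drops it from its own
release path, so releasing the head of a long chain recurses once per link.

/* Toy illustration of the recursive-release hazard: build a long enough
 * chain and the recursion in node_put() will exhaust the stack.
 */
#include <stdlib.h>

struct node {
	struct node *prev;	/* reference to the previous container */
	int refcount;
};

static void node_put(struct node *n)
{
	if (!n || --n->refcount)
		return;
	node_put(n->prev);	/* recursion: one stack frame per link */
	free(n);
}

int main(void)
{
	struct node *head = NULL;

	/* Userspace looping on the import/export ioctl would grow the
	 * chain one link per call; emulate that with a plain loop.
	 */
	for (long i = 0; i < 10 * 1000 * 1000; i++) {
		struct node *n = malloc(sizeof(*n));

		if (!n)
			break;
		n->prev = head;
		n->refcount = 1;
		head = n;
	}

	node_put(head);		/* drops the whole chain recursively */
	return 0;
}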

For reference, see what dance is necessary in the dma_fence_chain_release
function to avoid that:
>         /* Manually unlink the chain as much as possible to avoid recursion
>          * and potential stack overflow.
>          */
>         while ((prev = rcu_dereference_protected(chain->prev, true))) {
....

It took me quite a while to figure out how to do this without causing 
issues. But I don't see how this would be possible for dma_fence_array.
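
In rough terms, the trick is to detach the predecessors iteratively while
holding the only reference to them, so the final free never recurses. A
simplified sketch of that shape, reusing the toy node type from the sketch
above (not the kernel function verbatim):

/* Drop-in replacement for the recursive node_put() above: unlink
 * predecessors in a loop while we hold the only reference to them,
 * so the release path never recurses.
 */
static void node_put_iterative(struct node *n)
{
	struct node *prev;

	if (!n || --n->refcount)
		return;

	while ((prev = n->prev)) {
		if (prev->refcount > 1) {
			/* Someone else still holds prev; a plain put is
			 * enough and cannot free anything here.
			 */
			prev->refcount--;
			break;
		}
		/* We hold the only reference: steal prev's own link and
		 * free it without recursing.
		 */
		n->prev = prev->prev;
		prev->prev = NULL;
		free(prev);
	}
	free(n);
}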

As far as I can see, the only real option to implement this would be to
change the dma_resv object container so that you can add fences without 
overriding existing ones.

For shared fences that can be done relatively easily, but I absolutely
don't see how to do this for exclusive ones without a larger rework.
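
For context, this is what "overriding" means in practice: with the dma_resv
interface as of roughly v5.6, shared fences are appended into a reserved
slot array, while the exclusive fence is simply replaced. A hedged usage
sketch of the two add paths (error handling trimmed, not a proposal for the
rework):

#include <linux/dma-fence.h>
#include <linux/dma-resv.h>

static int add_fences(struct dma_resv *obj,
		      struct dma_fence *shared,
		      struct dma_fence *excl)
{
	int r;

	r = dma_resv_lock(obj, NULL);
	if (r)
		return r;

	/* Shared fences accumulate: reserve a slot, then append. */
	r = dma_resv_reserve_shared(obj, 1);
	if (!r)
		dma_resv_add_shared_fence(obj, shared);

	/* The exclusive fence is replaced wholesale; whatever was
	 * installed before is no longer tracked by the dma_resv object.
	 */
	dma_resv_add_excl_fence(obj, excl);

	dma_resv_unlock(obj);
	return r;
}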

>>>    (Note
>>> the dma_resv has a lock that needs to be taken before adding an
>>> exclusive fence, might be useful). Some code that does a thing like
>>> this is __dma_resv_make_exclusive in
>>> drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
>> I've wanted to move that into dma_resv.c for quite a while, since there are
>> quite a few other cases where we need this.
> I've roughly done that.  The primary difference is that my version
> takes an optional additional fence to add to the array.  This makes it
> a bit more complicated but I think I got it mostly right.
>
> I've also written userspace code which exercises this and it seems to
> work.  Hopefully, that will give a better idea of what I'm trying to
> accomplish.

Yes, that is indeed a really nice feature to have.
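
For reference, the rough shape of that helper is to collect the current
shared fences, wrap them in a dma_fence_array when there is more than one,
and install the result as the new exclusive fence. A simplified sketch,
assuming the v5.6-era dma_resv/dma_fence_array interfaces and leaving out
the optional extra fence Jason mentions (not the amdgpu code verbatim):

#include <linux/dma-fence-array.h>
#include <linux/dma-resv.h>
#include <linux/slab.h>

/* Caller is expected to hold the dma_resv lock. */
static int make_exclusive(struct dma_resv *obj)
{
	struct dma_fence **fences;
	unsigned int count;
	int r;

	r = dma_resv_get_fences_rcu(obj, NULL, &count, &fences);
	if (r)
		return r;

	if (count == 0) {
		kfree(fences);
		return 0;
	}

	if (count == 1) {
		dma_resv_add_excl_fence(obj, fences[0]);
		dma_fence_put(fences[0]);
		kfree(fences);
	} else {
		struct dma_fence_array *array;

		/* The array takes ownership of the fences on success. */
		array = dma_fence_array_create(count, fences,
					       dma_fence_context_alloc(1),
					       0, false);
		if (!array) {
			while (count--)
				dma_fence_put(fences[count]);
			kfree(fences);
			return -ENOMEM;
		}

		dma_resv_add_excl_fence(obj, &array->base);
		dma_fence_put(&array->base);
	}

	return 0;
}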

Regards,
Christian.
