Date:   Wed, 4 Jul 2018 10:31:47 +0200
From:   Christian König <ckoenig.leichtzumerken@...il.com>
To:     Michel Dänzer <michel@...nzer.net>,
        Sumit Semwal <sumit.semwal@...aro.org>
Cc:     linaro-mm-sig@...ts.linaro.org, amd-gfx@...ts.freedesktop.org,
        linux-kernel@...r.kernel.org, dri-devel@...ts.freedesktop.org,
        linux-media@...r.kernel.org
Subject: Re: [PATCH] dma-buf: Move BUG_ON from _add_shared_fence to
 _add_shared_inplace

On 26.06.2018 at 16:31, Michel Dänzer wrote:
> From: Michel Dänzer <michel.daenzer@....com>
>
> Fixes the BUG_ON spuriously triggering under the following
> circumstances:
>
> * ttm_eu_reserve_buffers processes a list containing multiple BOs using
>    the same reservation object, so it calls
>    reservation_object_reserve_shared with that reservation object once
>    for each such BO.
> * In reservation_object_reserve_shared, old->shared_count ==
>    old->shared_max - 1, so obj->staged is freed in preparation of an
>    in-place update.
> * ttm_eu_fence_buffer_objects calls reservation_object_add_shared_fence
>    once for each of the BOs above, always with the same fence.
> * The first call adds the fence in the remaining free slot, after which
>    old->shared_count == old->shared_max.

Well, the explanation here is not correct. For multiple BOs using the 
same reservation object we won't call 
reservation_object_add_shared_fence() multiple times, because those 
BOs are moved to the duplicates list in ttm_eu_reserve_buffers().

But this bug can still happen because we call 
reservation_object_add_shared_fence() manually with fences for the same 
context in a couple of places.

One prominent case that comes to my mind is the VM BOs during updates. 
Another possibility is VRAM BOs which need to be cleared.
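
To illustrate why the in-place path is safe in that situation, here is 
a minimal userspace sketch of the slot handling (toy types only, not 
the actual dma-buf code; toy_fence, toy_resv and the array size are 
made up for illustration). A second fence from the same context 
replaces the existing slot, so shared_count never exceeds shared_max, 
and the overflow check only makes sense on the path that actually 
grows the array:

#include <assert.h>
#include <stdio.h>

struct toy_fence { unsigned int context; };

struct toy_resv {
	unsigned int shared_count, shared_max;
	struct toy_fence *shared[4];
};

static void toy_add_shared_inplace(struct toy_resv *r, struct toy_fence *f)
{
	unsigned int i;

	for (i = 0; i < r->shared_count; i++) {
		if (r->shared[i]->context == f->context) {
			r->shared[i] = f;	/* same context: reuse the slot */
			return;
		}
	}

	/* Only a fence from a new context needs a free slot. */
	assert(r->shared_count < r->shared_max);
	r->shared[r->shared_count++] = f;
}

int main(void)
{
	struct toy_resv r = { .shared_count = 0, .shared_max = 1 };
	struct toy_fence f1 = { .context = 42 }, f2 = { .context = 42 };

	toy_add_shared_inplace(&r, &f1);	/* fills the last free slot */
	toy_add_shared_inplace(&r, &f2);	/* same context: slot reused,
						 * no overflow, no BUG_ON */

	printf("shared_count = %u\n", r.shared_count);	/* prints 1 */
	return 0;
}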

>
> In the next call to reservation_object_add_shared_fence, the BUG_ON
> triggers. However, nothing bad would happen in
> reservation_object_add_shared_inplace, since the fence is already in the
> reservation object.
>
> Prevent this by moving the BUG_ON to where an overflow would actually
> happen (e.g. if a buggy caller didn't call
> reservation_object_reserve_shared before).
>
> Cc: stable@...r.kernel.org
> Signed-off-by: Michel Dänzer <michel.daenzer@....com>

Reviewed-by: Christian König <christian.koenig@....com>

Regards,
Christian.

> ---
>   drivers/dma-buf/reservation.c | 6 +++---
>   1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/dma-buf/reservation.c b/drivers/dma-buf/reservation.c
> index 314eb1071cce..532545b9488e 100644
> --- a/drivers/dma-buf/reservation.c
> +++ b/drivers/dma-buf/reservation.c
> @@ -141,6 +141,7 @@ reservation_object_add_shared_inplace(struct reservation_object *obj,
>   	if (signaled) {
>   		RCU_INIT_POINTER(fobj->shared[signaled_idx], fence);
>   	} else {
> +		BUG_ON(fobj->shared_count >= fobj->shared_max);
>   		RCU_INIT_POINTER(fobj->shared[fobj->shared_count], fence);
>   		fobj->shared_count++;
>   	}
> @@ -230,10 +231,9 @@ void reservation_object_add_shared_fence(struct reservation_object *obj,
>   	old = reservation_object_get_list(obj);
>   	obj->staged = NULL;
>   
> -	if (!fobj) {
> -		BUG_ON(old->shared_count >= old->shared_max);
> +	if (!fobj)
>   		reservation_object_add_shared_inplace(obj, old, fence);
> -	} else
> +	else
>   		reservation_object_add_shared_replace(obj, old, fobj, fence);
>   }
>   EXPORT_SYMBOL(reservation_object_add_shared_fence);
