Message-ID: <20d75e2c-c5a5-48c3-ac99-a9e15f19b872@arm.com>
Date: Wed, 6 Nov 2024 13:17:29 +0000
From: Steven Price <steven.price@....com>
To: Liviu Dudau <liviu.dudau@....com>,
Boris Brezillon <boris.brezillon@...labora.com>
Cc: Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <mripard@...nel.org>, Thomas Zimmermann <tzimmermann@...e.de>,
David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>,
dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
Jann Horn <jannh@...gle.com>
Subject: Re: [PATCH] drm/panthor: Lock XArray when getting entries for heap
and VM
On 06/11/2024 12:07, Liviu Dudau wrote:
> Similar to cac075706f29 ("drm/panthor: Fix race when converting
> group handle to group object") we need to use the XArray's internal
> locking when retrieving a pointer from there for heap and vm.
>
> Reported-by: Jann Horn <jannh@...gle.com>
> Cc: Boris Brezillon <boris.brezillon@...labora.com>
> Cc: Steven Price <steven.price@....com>
> Signed-off-by: Liviu Dudau <liviu.dudau@....com>
> ---
> drivers/gpu/drm/panthor/panthor_heap.c | 15 +++++++++++++--
> drivers/gpu/drm/panthor/panthor_mmu.c | 2 ++
> 2 files changed, 15 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/panthor/panthor_heap.c b/drivers/gpu/drm/panthor/panthor_heap.c
> index 3796a9eb22af2..fe0bcb6837f74 100644
> --- a/drivers/gpu/drm/panthor/panthor_heap.c
> +++ b/drivers/gpu/drm/panthor/panthor_heap.c
> @@ -351,6 +351,17 @@ int panthor_heap_create(struct panthor_heap_pool *pool,
> return ret;
> }
>
> +static struct panthor_heap *panthor_heap_from_id(struct panthor_heap_pool *pool, u32 id)
> +{
> + struct panthor_heap *heap;
> +
> + xa_lock(&pool->xa);
> + heap = xa_load(&pool->xa, id);
> + xa_unlock(&pool->xa);
> +
> + return heap;
> +}
This locking doesn't actually achieve anything - the XArray already deals
with the concurrency internally (with RCU), so if we're doing nothing more
than an xa_load() we don't need any extra locking (unless we're using the
__ prefixed functions).
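Roughly what I mean (a quick untested sketch, not the actual panthor code):

	/* A bare lookup is fine as-is: xa_load() takes the RCU read lock
	 * internally, so wrapping it in xa_lock()/xa_unlock() adds nothing.
	 */
	heap = xa_load(&pool->xa, id);

	/* It's only the __ prefixed helpers that expect the caller to hold
	 * the xa_lock, e.g.:
	 */
	xa_lock(&pool->xa);
	__xa_erase(&pool->xa, id);
	xa_unlock(&pool->xa);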
And, as Boris has pointed out, pool->lock is already held across these
lookups. As you mention in your email, the missing bit might be
panthor_heap_pool_release() - if it doesn't hold the lock then the heap
could be freed immediately after panthor_heap_from_id() returns (even with
the above change).
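To make that concrete, a hypothetical sketch of what I mean on the release
side (I haven't checked what panthor_heap_pool_release() actually does):

	/* The lookup itself is RCU-safe, but the heap it returns can still
	 * be freed from under the reader if the release path runs without
	 * pool->lock. Taking the write side there would close that window,
	 * e.g. (names and details made up):
	 */
	down_write(&pool->lock);
	xa_for_each(&pool->xa, id, heap)
		kfree(heap);		/* stand-in for the real heap teardown */
	xa_destroy(&pool->xa);
	up_write(&pool->lock);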
Steve
> +
> /**
> * panthor_heap_return_chunk() - Return an unused heap chunk
> * @pool: The pool this heap belongs to.
> @@ -375,7 +386,7 @@ int panthor_heap_return_chunk(struct panthor_heap_pool *pool,
> return -EINVAL;
>
> down_read(&pool->lock);
> - heap = xa_load(&pool->xa, heap_id);
> + heap = panthor_heap_from_id(pool, heap_id);
> if (!heap) {
> ret = -EINVAL;
> goto out_unlock;
> @@ -438,7 +449,7 @@ int panthor_heap_grow(struct panthor_heap_pool *pool,
> return -EINVAL;
>
> down_read(&pool->lock);
> - heap = xa_load(&pool->xa, heap_id);
> + heap = panthor_heap_from_id(pool, heap_id);
> if (!heap) {
> ret = -EINVAL;
> goto out_unlock;
> diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
> index 8ca85526491e6..8b5cda9d21768 100644
> --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> @@ -1580,7 +1580,9 @@ panthor_vm_pool_get_vm(struct panthor_vm_pool *pool, u32 handle)
> {
> struct panthor_vm *vm;
>
> + xa_lock(&pool->xa);
> vm = panthor_vm_get(xa_load(&pool->xa, handle));
> + xa_unlock(&pool->xa);
>
> return vm;
> }