Message-ID: <ZytqzTg2QnEkvDcl@e110455-lin.cambridge.arm.com>
Date: Wed, 6 Nov 2024 13:10:37 +0000
From: Liviu Dudau <liviu.dudau@....com>
To: Boris Brezillon <boris.brezillon@...labora.com>
Cc: Steven Price <steven.price@....com>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <mripard@...nel.org>,
Thomas Zimmermann <tzimmermann@...e.de>,
David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>,
dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
Jann Horn <jannh@...gle.com>
Subject: Re: [PATCH] drm/panthor: Lock XArray when getting entries for heap and VM
On Wed, Nov 06, 2024 at 01:16:41PM +0100, Boris Brezillon wrote:
> On Wed, 6 Nov 2024 12:07:48 +0000
> Liviu Dudau <liviu.dudau@....com> wrote:
>
> > Similar to cac075706f29 ("drm/panthor: Fix race when converting
> > group handle to group object") we need to use the XArray's internal
> > locking when retrieving a pointer from there for heap and vm.
> >
> > Reported-by: Jann Horn <jannh@...gle.com>
> > Cc: Boris Brezillon <boris.brezillon@...labora.com>
> > Cc: Steven Price <steven.price@....com>
> > Signed-off-by: Liviu Dudau <liviu.dudau@....com>
From the other email: I will add the Fixes tag for next version.
> > ---
> > drivers/gpu/drm/panthor/panthor_heap.c | 15 +++++++++++++--
> > drivers/gpu/drm/panthor/panthor_mmu.c | 2 ++
> > 2 files changed, 15 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/panthor/panthor_heap.c b/drivers/gpu/drm/panthor/panthor_heap.c
> > index 3796a9eb22af2..fe0bcb6837f74 100644
> > --- a/drivers/gpu/drm/panthor/panthor_heap.c
> > +++ b/drivers/gpu/drm/panthor/panthor_heap.c
> > @@ -351,6 +351,17 @@ int panthor_heap_create(struct panthor_heap_pool *pool,
> > return ret;
> > }
> >
> > +static struct panthor_heap *panthor_heap_from_id(struct pathor_heap_pool *pool, u32 id)
>
> struct pathor_heap_pool does not exist :-).
Oops! It turns out that for my "compile testing" of the patch on my Arm machine I was
using the wrong O= directory, so my .config did not have Panthor enabled.
>
> > +{
> > + struct panthor_heap *heap;
> > +
> > + xa_lock(&pool->xa);
> > + heap = xa_load(&pool->xa, id);
> > + xa_unlock(&pool->va);
>
> Access to panthor_heap_pool::xa is protected by the external
> pathor_heap_pool::lock, so taking the xa lock seems redundant here. How
> about adding a lockdep_assert_held(pool->lock) instead?
panthor_heap_pool_release() does not take the panthor_heap_pool::lock, so the protection
is not really there. I could fix panthor_heap_pool_release() and then add a
lockdep_assert_held() before both calls to xa_load() if you think that's a better
solution.
Best regards,
Liviu
>
> > +
> > + return heap;
> > +}
> > +
> > /**
> > * panthor_heap_return_chunk() - Return an unused heap chunk
> > * @pool: The pool this heap belongs to.
> > @@ -375,7 +386,7 @@ int panthor_heap_return_chunk(struct panthor_heap_pool *pool,
> > return -EINVAL;
> >
> > down_read(&pool->lock);
> > - heap = xa_load(&pool->xa, heap_id);
> > + heap = panthor_heap_from_id(pool, heap_id);
> > if (!heap) {
> > ret = -EINVAL;
> > goto out_unlock;
> > @@ -438,7 +449,7 @@ int panthor_heap_grow(struct panthor_heap_pool *pool,
> > return -EINVAL;
> >
> > down_read(&pool->lock);
> > - heap = xa_load(&pool->xa, heap_id);
> > + heap = panthor_heap_from_id(pool, heap_id);
> > if (!heap) {
> > ret = -EINVAL;
> > goto out_unlock;
> > diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
> > index 8ca85526491e6..8b5cda9d21768 100644
> > --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> > +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> > @@ -1580,7 +1580,9 @@ panthor_vm_pool_get_vm(struct panthor_vm_pool *pool, u32 handle)
> > {
> > struct panthor_vm *vm;
> >
> > + xa_lock(&pool->xa);
> > vm = panthor_vm_get(xa_load(&pool->xa, handle));
> > + xa_unlock(&pool->va);
> >
> > return vm;
> > }
>
--
====================
| I would like to |
| fix the world, |
| but they're not |
| giving me the |
\ source code! /
---------------
¯\_(ツ)_/¯