Message-ID: <CAPaKu7RKDwpSqJ6u8mjcc4G0Z-T7G1LxFw2rXQtxgSW=1_-jkw@mail.gmail.com>
Date: Thu, 2 Oct 2025 17:31:27 -0700
From: Chia-I Wu <olvaffe@...il.com>
To: Steven Price <steven.price@....com>
Cc: Boris Brezillon <boris.brezillon@...labora.com>, Liviu Dudau <liviu.dudau@....com>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>, Maxime Ripard <mripard@...nel.org>,
Thomas Zimmermann <tzimmermann@...e.de>, David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>,
Grant Likely <grant.likely@...aro.org>, Heiko Stuebner <heiko@...ech.de>,
dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 05/10] drm/panthor: rename and document mmu_hw_do_operation_locked
On Thu, Oct 2, 2025 at 3:41 AM Steven Price <steven.price@....com> wrote:
>
> On 16/09/2025 22:08, Chia-I Wu wrote:
> > Rename mmu_hw_do_operation_locked to mmu_hw_flush_caches.
>
> This is confusing: you've renamed the _locked variant but left the
> wrapper mmu_hw_do_operation() with the old name.
The commit message says "rename and document", and I tried to stay true
to that scope. I could certainly squash some of the commits to make the
series less confusing.
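
To make the naming concrete, after this patch the call chain is roughly
the following (reconstructed from the hunk below; the ptdev lookup and
the comments are mine, so treat it as a sketch rather than the exact
code):

  /* wrapper, still named mmu_hw_do_operation() */
  static int mmu_hw_do_operation(struct panthor_vm *vm,
                                 u64 iova, u64 size, u32 op)
  {
          struct panthor_device *ptdev = vm->ptdev;
          int ret;

          /* hold the AS slots lock around the HW flush */
          mutex_lock(&ptdev->mmu->as.slots_lock);
          /* the renamed helper does the actual LOCK/flush/UNLOCK */
          ret = mmu_hw_flush_caches(ptdev, vm->as.id, iova, size, op);
          mutex_unlock(&ptdev->mmu->as.slots_lock);

          return ret;
  }

Only the inner helper is renamed in this commit; the wrapper keeps its
old name for now.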
>
> I agree "do operation" isn't a great name. "flush caches", though,
> sounds to me like a function that does the whole cache flush dance in
> one go, whereas this is still the same "one part of a cache flush
> operation" code.
It gets the name from being a wrapper for panthor_gpu_flush_caches.
Which part of "cache flush operation" is missing?
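
To spell out the sequence: per the kernel-doc added below, the helper
issues the full LOCK / GPU FLUSH_CACHES / UNLOCK dance for the region,
roughly like this (simplified sketch; mmu_hw_cmd_lock() and the exact
arguments to panthor_gpu_flush_caches() are paraphrased, error handling
omitted):

  /* lock the iova range on the AS so the flush targets that region */
  mmu_hw_cmd_lock(ptdev, as_nr, iova, size);

  /* flush+invalidate the L2 (and LSC, depending on op) caches */
  ret = panthor_gpu_flush_caches(ptdev, l2_flush_op, lsc_flush_op, 0);

  /* unlock the region again */
  mmu_hw_cmd_unlock(ptdev, as_nr);

As far as I can tell that's the whole dance for the region, not just
one part of it.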
>
> Thanks,
> Steve
>
> >
> > Signed-off-by: Chia-I Wu <olvaffe@...il.com>
> > ---
> > drivers/gpu/drm/panthor/panthor_mmu.c | 22 +++++++++++++++++-----
> > 1 file changed, 17 insertions(+), 5 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
> > index 727339d80d37e..7d1645a24129d 100644
> > --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> > +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> > @@ -622,8 +622,20 @@ static void mmu_hw_cmd_unlock(struct panthor_device *ptdev, u32 as_nr)
> > write_cmd(ptdev, as_nr, AS_COMMAND_UNLOCK);
> > }
> >
> > -static int mmu_hw_do_operation_locked(struct panthor_device *ptdev, int as_nr,
> > - u64 iova, u64 size, u32 op)
> > +/**
> > + * mmu_hw_flush_caches() - Flush and invalidate L2/MMU/LSC caches
> > + * @ptdev: Device.
> > + * @as_nr: AS to issue command to.
> > + * @iova: Start of the region.
> > + * @size: Size of the region.
> > + * @op: AS_COMMAND_FLUSH_*
> > + *
> > + * Issue LOCK/GPU_FLUSH_CACHES/UNLOCK commands in order to flush and
> > + * invalidate L2/MMU/LSC caches for a region.
> > + *
> > + * Return: 0 on success, a negative error code otherwise.
> > + */
> > +static int mmu_hw_flush_caches(struct panthor_device *ptdev, int as_nr, u64 iova, u64 size, u32 op)
> > {
> > const u32 l2_flush_op = CACHE_CLEAN | CACHE_INV;
> > u32 lsc_flush_op;
> > @@ -680,7 +692,7 @@ static int mmu_hw_do_operation(struct panthor_vm *vm,
> > int ret;
> >
> > mutex_lock(&ptdev->mmu->as.slots_lock);
> > - ret = mmu_hw_do_operation_locked(ptdev, vm->as.id, iova, size, op);
> > + ret = mmu_hw_flush_caches(ptdev, vm->as.id, iova, size, op);
> > mutex_unlock(&ptdev->mmu->as.slots_lock);
> >
> > return ret;
> > @@ -691,7 +703,7 @@ static int panthor_mmu_as_enable(struct panthor_device *ptdev, u32 as_nr,
> > {
> > int ret;
> >
> > - ret = mmu_hw_do_operation_locked(ptdev, as_nr, 0, ~0ULL, AS_COMMAND_FLUSH_MEM);
> > + ret = mmu_hw_flush_caches(ptdev, as_nr, 0, ~0ULL, AS_COMMAND_FLUSH_MEM);
> > if (ret)
> > return ret;
> >
> > @@ -702,7 +714,7 @@ static int panthor_mmu_as_disable(struct panthor_device *ptdev, u32 as_nr)
> > {
> > int ret;
> >
> > - ret = mmu_hw_do_operation_locked(ptdev, as_nr, 0, ~0ULL, AS_COMMAND_FLUSH_MEM);
> > + ret = mmu_hw_flush_caches(ptdev, as_nr, 0, ~0ULL, AS_COMMAND_FLUSH_MEM);
> > if (ret)
> > return ret;
> >
>