Message-ID: <CAPaKu7QEAbR8a_+qmyU=obyf2N-UZemfw23U_Dw2DZLqPd7tGQ@mail.gmail.com>
Date: Thu, 2 Oct 2025 17:23:06 -0700
From: Chia-I Wu <olvaffe@...il.com>
To: Steven Price <steven.price@....com>
Cc: Boris Brezillon <boris.brezillon@...labora.com>, Liviu Dudau <liviu.dudau@....com>, 
	Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>, Maxime Ripard <mripard@...nel.org>, 
	Thomas Zimmermann <tzimmermann@...e.de>, David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>, 
	Grant Likely <grant.likely@...aro.org>, Heiko Stuebner <heiko@...ech.de>, 
	dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 07/10] drm/panthor: remove unnecessary mmu_hw_wait_ready calls

On Thu, Oct 2, 2025 at 3:41 AM Steven Price <steven.price@....com> wrote:
>
> On 16/09/2025 22:08, Chia-I Wu wrote:
> > No need to call mmu_hw_wait_ready after panthor_gpu_flush_caches or
> > before returning from mmu_hw_flush_caches.
>
> Why is there no need? If we attempt to send a command when the hardware
> is busy then the command will be dropped (so the cache flush won't
> happen), and if we don't wait for the unlock command to complete then
> we don't know that the flush is complete.
We have this sequence of calls:

  mmu_hw_wait_ready
  panthor_gpu_flush_caches
  mmu_hw_wait_ready
  mmu_hw_cmd_unlock
  mmu_hw_wait_ready
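
For context, mmu_hw_wait_ready is essentially a poll on AS_STATUS
waiting for the AS_ACTIVE bit to clear, roughly like this (a sketch
with an assumed poll helper and timeouts, not the exact driver code):

  static int mmu_hw_wait_ready(struct panthor_device *ptdev, int as_nr)
  {
          u32 val;

          /* Poll until the address space has no command in flight. */
          return readl_relaxed_poll_timeout_atomic(ptdev->iomem + AS_STATUS(as_nr),
                                                   val, !(val & AS_STATUS_AS_ACTIVE),
                                                   10, 100000);
  }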

I could be utterly wrong, but my assumption was that
panthor_gpu_flush_caches does not leave AS_STATUS_AS_ACTIVE set, at
least not by the time it returns. That's why I removed the second wait.

We also always wait before issuing a command. Removing the last wait
here avoids double waits in panthor_mmu_as_{enable,disable}. It does
leave the UNLOCK command in flight when panthor_vm_flush_range returns,
but whoever issues the next command will wait for it to complete first.
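
In other words, the invariant I'm relying on is that every AS_COMMAND
write is preceded by a wait, along these lines (hypothetical helper for
illustration only; the real code spells this out per call site):

  /* Hypothetical helper sketching the issue-path invariant. */
  static int mmu_hw_cmd(struct panthor_device *ptdev, int as_nr, u32 cmd)
  {
          /* Any command still in flight from an earlier call (e.g. the
           * UNLOCK issued by mmu_hw_flush_caches) completes here, which
           * is why a trailing wait in mmu_hw_flush_caches is redundant.
           */
          int ret = mmu_hw_wait_ready(ptdev, as_nr);

          if (!ret)
                  gpu_write(ptdev, AS_COMMAND(as_nr), cmd);
          return ret;
  }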

>
> Thanks,
> Steve
>
> > Signed-off-by: Chia-I Wu <olvaffe@...il.com>
> > ---
> >  drivers/gpu/drm/panthor/panthor_mmu.c | 7 ++-----
> >  1 file changed, 2 insertions(+), 5 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
> > index 373871aeea9f4..c223e3fadf92e 100644
> > --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> > +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> > @@ -669,12 +669,9 @@ static int mmu_hw_flush_caches(struct panthor_device *ptdev, int as_nr, u64 iova
> >        * at the end of the GPU_CONTROL cache flush command, unlike
> >        * AS_COMMAND_FLUSH_MEM or AS_COMMAND_FLUSH_PT.
> >        */
> > -     ret = mmu_hw_wait_ready(ptdev, as_nr);
> > -     if (!ret)
> > -             mmu_hw_cmd_unlock(ptdev, as_nr);
> > +     mmu_hw_cmd_unlock(ptdev, as_nr);
> >
> > -     /* Wait for the unlock command to complete */
> > -     return mmu_hw_wait_ready(ptdev, as_nr);
> > +     return 0;
> >  }
> >
> >  static int mmu_hw_do_operation(struct panthor_vm *vm,
>
