Message-ID: <9022445f-3cf7-46a8-85ac-e1f226e0bd9b@arm.com>
Date: Fri, 3 Oct 2025 15:13:21 +0100
From: Steven Price <steven.price@....com>
To: Chia-I Wu <olvaffe@...il.com>
Cc: Boris Brezillon <boris.brezillon@...labora.com>,
Liviu Dudau <liviu.dudau@....com>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <mripard@...nel.org>, Thomas Zimmermann <tzimmermann@...e.de>,
David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>,
Grant Likely <grant.likely@...aro.org>, Heiko Stuebner <heiko@...ech.de>,
dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 07/10] drm/panthor: remove unnecessary mmu_hw_wait_ready
calls
On 03/10/2025 01:23, Chia-I Wu wrote:
> On Thu, Oct 2, 2025 at 3:41 AM Steven Price <steven.price@....com> wrote:
>>
>> On 16/09/2025 22:08, Chia-I Wu wrote:
>>> No need to call mmu_hw_wait_ready after panthor_gpu_flush_caches or
>>> before returning from mmu_hw_flush_caches.
>>
>> Why is there no need? If we attempt to send a command when the hardware
>> is busy then the command will be dropped (so the cache flush won't
>> happen), and if we don't wait for the unlock command to complete then
>> we don't know that the flush is complete.
> We have this sequence of calls
>
> mmu_hw_wait_ready
> panthor_gpu_flush_caches
> mmu_hw_wait_ready
> mmu_hw_cmd_unlock
> mmu_hw_wait_ready
>
> I could be utterly wrong, but my assumption was that
> panthor_gpu_flush_caches does not cause AS_STATUS_AS_ACTIVE, at least
> by the time it returns. That's why I removed the second wait.
Hmm, so this was a recent change, moving away from FLUSH_MEM/FLUSH_PT. I
have to admit the spec implies that a FLUSH_CACHES command wouldn't
set the AS_ACTIVE bit.
Indeed we now actually split the active bit between AS_ACTIVE_EXT and
AS_ACTIVE_INT - where _INT is from an "internal source" and therefore
doesn't prevent writing to the COMMAND register.
We do, however, need the LOCK command to have completed before we flush
the caches. So the operations should be:
* wait_ready()
* LOCK
* wait_ready() // To check that the LOCK has completed
* FLUSH_CACHES
* UNLOCK
* wait_ready() // Optional
The final wait_ready() is optional in some cases (because the LOCK
ensures that we can't have any translations using the old TLB entries -
note that I believe older Midgard GPUs couldn't rely on this). However,
in the case where we want to disable an MMU we're going to have to wait.
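
Putting that together, it would look roughly like this - just a sketch of
the ordering, not the final code (mmu_hw_cmd_lock() is assumed here as the
counterpart of mmu_hw_cmd_unlock(), and the flush flag values are only a
guess at what the GPU_CONTROL flush would be passed):

	ret = mmu_hw_wait_ready(ptdev, as_nr);
	if (ret)
		return ret;

	/* LOCK the region being flushed */
	mmu_hw_cmd_lock(ptdev, as_nr, iova, size);

	/* The LOCK must have completed before the caches are flushed */
	ret = mmu_hw_wait_ready(ptdev, as_nr);
	if (ret)
		return ret;

	/* FLUSH_CACHES via GPU_CONTROL */
	ret = panthor_gpu_flush_caches(ptdev, CACHE_CLEAN | CACHE_INV,
				       CACHE_CLEAN | CACHE_INV, 0);
	if (ret)
		return ret;

	/* UNLOCK, making the region usable again */
	mmu_hw_cmd_unlock(ptdev, as_nr);

	/* Optional unless the AS is about to be disabled */
	return mmu_hw_wait_ready(ptdev, as_nr);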
> We also always wait before issuing a cmd. Removing the last wait here
> avoids double waits for panthor_mmu_as_{enable,disable}. It does leave
> the cmd in flight when panthor_vm_flush_range returns, but whoever
> issues a new cmd will wait on the flush.
Note that wait_ready() is really cheap - it's a single GPU register read
if there's nothing active. So the "double wait" isn't really a problem.
I'd much rather have the occasional double wait (i.e. one extra register
read) than the situation where we miss a wait_ready() and end up with an
MMU command being dropped by the hardware.
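
For reference, the wait is essentially just a poll on AS_STATUS - a
minimal sketch, assuming the helper keeps its current shape (the real
one also has to report the timeout properly):

static int mmu_hw_wait_ready(struct panthor_device *ptdev, int as_nr)
{
	u32 val;

	/* A single read if the AS is idle, a short poll otherwise. With
	 * the EXT/INT split mentioned above, only the "external" active
	 * bit would need checking here.
	 */
	return readl_relaxed_poll_timeout_atomic(ptdev->iomem + AS_STATUS(as_nr),
						 val, !(val & AS_STATUS_AS_ACTIVE),
						 10, 100000);
}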
Thanks,
Steve
>
>
>>
>> Thanks,
>> Steve
>>
>>> Signed-off-by: Chia-I Wu <olvaffe@...il.com>
>>> ---
>>> drivers/gpu/drm/panthor/panthor_mmu.c | 7 ++-----
>>> 1 file changed, 2 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
>>> index 373871aeea9f4..c223e3fadf92e 100644
>>> --- a/drivers/gpu/drm/panthor/panthor_mmu.c
>>> +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
>>> @@ -669,12 +669,9 @@ static int mmu_hw_flush_caches(struct panthor_device *ptdev, int as_nr, u64 iova
>>> * at the end of the GPU_CONTROL cache flush command, unlike
>>> * AS_COMMAND_FLUSH_MEM or AS_COMMAND_FLUSH_PT.
>>> */
>>> - ret = mmu_hw_wait_ready(ptdev, as_nr);
>>> - if (!ret)
>>> - mmu_hw_cmd_unlock(ptdev, as_nr);
>>> + mmu_hw_cmd_unlock(ptdev, as_nr);
>>>
>>> - /* Wait for the unlock command to complete */
>>> - return mmu_hw_wait_ready(ptdev, as_nr);
>>> + return 0;
>>> }
>>>
>>> static int mmu_hw_do_operation(struct panthor_vm *vm,
>>