Message-ID: <aece4f83-7caf-4c58-9e00-c92500ec9105@arm.com>
Date: Fri, 19 Dec 2025 10:19:26 +0100
From: Ketil Johnsen <ketil.johnsen@....com>
To: Boris Brezillon <boris.brezillon@...labora.com>,
Steven Price <steven.price@....com>
Cc: Liviu Dudau <liviu.dudau@....com>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <mripard@...nel.org>, Thomas Zimmermann <tzimmermann@...e.de>,
David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>,
Grant Likely <grant.likely@...aro.org>, Heiko Stuebner <heiko@...ech.de>,
dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] drm/panthor: Evict groups before VM termination

On 12/18/25 18:59, Boris Brezillon wrote:
> On Thu, 18 Dec 2025 16:57:28 +0000
> Steven Price <steven.price@....com> wrote:
>
>> On 18/12/2025 16:26, Ketil Johnsen wrote:
>>> Ensure all related groups are evicted and suspended before VM
>>> destruction takes place.
>>>
>>> This fixes an issue where panthor_vm_destroy() destroys and unmaps the
>>> heap context while there are still on-slot groups using it.
>>> The FW will write out to the heap context when a CSG (group) is
>>> suspended, so a premature unmap of the heap context will cause a
>>> GPU page fault.
>>> This page fault is quite harmless and does not affect the continued
>>> operation of the GPU.
>>>
>>> Fixes: 647810ec2476 ("drm/panthor: Add the MMU/VM logical block")
>>> Co-developed-by: Boris Brezillon <boris.brezillon@...labora.com>
>>> Signed-off-by: Ketil Johnsen <ketil.johnsen@....com>
>>> ---
>>> drivers/gpu/drm/panthor/panthor_mmu.c | 4 ++++
>>> drivers/gpu/drm/panthor/panthor_sched.c | 16 ++++++++++++++++
>>> drivers/gpu/drm/panthor/panthor_sched.h | 1 +
>>> 3 files changed, 21 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
>>> index 74230f7199121..0e4b301a9c70e 100644
>>> --- a/drivers/gpu/drm/panthor/panthor_mmu.c
>>> +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
>>> @@ -1537,6 +1537,10 @@ static void panthor_vm_destroy(struct panthor_vm *vm)
>>>
>>> vm->destroyed = true;
>>>
>>> + /* Tell scheduler to stop all GPU work related to this VM */
>>> + if (refcount_read(&vm->as.active_cnt) > 0)
>>> + panthor_sched_prepare_for_vm_destruction(vm->ptdev);
>>> +
>>> mutex_lock(&vm->heaps.lock);
>>> panthor_heap_pool_destroy(vm->heaps.pool);
>>> vm->heaps.pool = NULL;
>>> diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
>>> index f680edcd40aad..fbbaab9b25efb 100644
>>> --- a/drivers/gpu/drm/panthor/panthor_sched.c
>>> +++ b/drivers/gpu/drm/panthor/panthor_sched.c
>>> @@ -2930,6 +2930,22 @@ void panthor_sched_report_mmu_fault(struct panthor_device *ptdev)
>>> sched_queue_delayed_work(ptdev->scheduler, tick, 0);
>>> }
>>>
>>> +void panthor_sched_prepare_for_vm_destruction(struct panthor_device *ptdev)
>>> +{
>>> + /* FW can write out internal state, like the heap context, during CSG
>>> + * suspend. It is therefore important that the scheduler has fully
>>> + * evicted any pending and related groups before VM destruction can
>>> + * safely continue. Failure to do so can lead to GPU page faults.
>>> + * A controlled termination of a Panthor instance involves destroying
>>> + * the group(s) before the VM. This means any relevant group eviction
>>> + * has already been initiated by this point, and we just need to
>>> + * ensure that any pending tick_work() has been completed.
>>> + */
>>> + if (ptdev->scheduler) {
>>> + flush_work(&ptdev->scheduler->tick_work.work);
>>> + }
>>
>> NIT: braces not needed.
>>
>> But I'm also struggling to understand in what situation ptdev->scheduler
>> would be NULL?
>
> I thought it could happen if the FW initialization fails in the middle,
> and the FW VM is destroyed before the scheduler had a chance to
> initialize, but it turns out the FW logic never calls
> panthor_vm_destroy().

Yes, I also think we can safely drop the check. I even injected some
probe errors to double-check, and I see that we still terminate the FW
VM successfully (since this path is not executed in that case).

I will send a v2 shortly with the check (and braces) removed.
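
For anyone following along, a rough sketch of what the simplified v2
helper might look like with the NULL check and braces dropped (an
assumption based on the discussion above, not the actual v2 patch):

```c
void panthor_sched_prepare_for_vm_destruction(struct panthor_device *ptdev)
{
	/* FW can write out internal state, like the heap context, during CSG
	 * suspend. Group eviction has already been initiated by the time a
	 * controlled teardown reaches VM destruction, so it is enough to
	 * wait for any pending tick_work() to complete before the heap
	 * context is unmapped.
	 */
	flush_work(&ptdev->scheduler->tick_work.work);
}
```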
--
Thanks,
Ketil
>> Thanks,
>> Steve
>>
>>> +}
>>> +
>>> void panthor_sched_resume(struct panthor_device *ptdev)
>>> {
>>> /* Force a tick to re-evaluate after a resume. */
>>> diff --git a/drivers/gpu/drm/panthor/panthor_sched.h b/drivers/gpu/drm/panthor/panthor_sched.h
>>> index f4a475aa34c0a..9a8692de8aded 100644
>>> --- a/drivers/gpu/drm/panthor/panthor_sched.h
>>> +++ b/drivers/gpu/drm/panthor/panthor_sched.h
>>> @@ -50,6 +50,7 @@ void panthor_sched_suspend(struct panthor_device *ptdev);
>>> void panthor_sched_resume(struct panthor_device *ptdev);
>>>
>>> void panthor_sched_report_mmu_fault(struct panthor_device *ptdev);
>>> +void panthor_sched_prepare_for_vm_destruction(struct panthor_device *ptdev);
>>> void panthor_sched_report_fw_events(struct panthor_device *ptdev, u32 events);
>>>
>>> void panthor_fdinfo_gather_group_samples(struct panthor_file *pfile);
>>
>