Message-ID: <08bc7f37-d2d7-4ad0-9575-f8a2c36b1c3f@ursulin.net>
Date: Fri, 31 Oct 2025 12:31:33 +0000
From: Tvrtko Ursulin <tursulin@...ulin.net>
To: Christian König <christian.koenig@....com>,
 Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@....com>,
 Matthew Brost <matthew.brost@...el.com>, Danilo Krummrich <dakr@...nel.org>,
 Philipp Stanner <phasta@...nel.org>,
 Christian König <ckoenig.leichtzumerken@...il.com>,
 Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
 Maxime Ripard <mripard@...nel.org>, Thomas Zimmermann <tzimmermann@...e.de>,
 David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>,
 Sumit Semwal <sumit.semwal@...aro.org>
Cc: Mikhail Gavrilov <mikhail.v.gavrilov@...il.com>,
 dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
 linux-media@...r.kernel.org, linaro-mm-sig@...ts.linaro.org
Subject: Re: [PATCH v2] drm/sched: Fix deadlock in
 drm_sched_entity_kill_jobs_cb


On 31/10/2025 12:25, Christian König wrote:
> 
> 
> On 10/31/25 12:50, Tvrtko Ursulin wrote:
>>
>> On 31/10/2025 09:07, Pierre-Eric Pelloux-Prayer wrote:
>>> The Mesa issue referenced below pointed out a possible deadlock:
>>>
>>> [ 1231.611031]  Possible interrupt unsafe locking scenario:
>>>
>>> [ 1231.611033]        CPU0                    CPU1
>>> [ 1231.611034]        ----                    ----
>>> [ 1231.611035]   lock(&xa->xa_lock#17);
>>> [ 1231.611038]                                local_irq_disable();
>>> [ 1231.611039]                                lock(&fence->lock);
>>> [ 1231.611041]                                lock(&xa->xa_lock#17);
>>> [ 1231.611044]   <Interrupt>
>>> [ 1231.611045]     lock(&fence->lock);
>>> [ 1231.611047]
>>>                   *** DEADLOCK ***
>>>
>>> In this example, CPU0 would be any function accessing job->dependencies
>>> through the xa_* helpers without disabling interrupts (e.g.
>>> drm_sched_job_add_dependency, drm_sched_entity_kill_jobs_cb).
>>>
>>> CPU1 is executing drm_sched_entity_kill_jobs_cb as a fence signalling
>>> callback, so in interrupt context. It will deadlock when trying to
>>> grab the xa_lock, which is already held by CPU0.
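>>>
>>> As a purely illustrative sketch (hypothetical driver code, not one of
>>> the actual call sites), the CPU0 side is simply any process-context
>>> path feeding job->dependencies:
>>>
>>>     /* Process context, interrupts enabled: the xarray update below
>>>      * takes the xa_lock without disabling interrupts (CPU0 above).
>>>      */
>>>     static int example_add_wait(struct drm_sched_job *job,
>>>                                 struct dma_fence *fence)
>>>     {
>>>         /* Consumes a fence reference and stores it in job->dependencies. */
>>>         return drm_sched_job_add_dependency(job, dma_fence_get(fence));
>>>     }
>>>
>>> If that same xa_lock is then contended from the signalling path, which
>>> runs with fence->lock held and interrupts off, the scenario above triggers.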
>>>
>>> Replacing all xa_* usage with their xa_*_irq counterparts would fix
>>> this issue, but Christian pointed out another problem: dma_fence_signal
>>> takes fence.lock, and so does dma_fence_add_callback.
>>>
>>>     dma_fence_signal() // locks f1.lock
>>>     -> drm_sched_entity_kill_jobs_cb()
>>>     -> foreach dependencies
>>>        -> dma_fence_add_callback() // locks f2.lock
>>>
>>> This will deadlock if f1 and f2 share the same spinlock.
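>>>
>>> For illustration only (this is the discarded first option, not what this
>>> patch does), the loop re-armed with the IRQ-safe xarray helpers would
>>> look roughly like:
>>>
>>>     xa_for_each(&job->dependencies, index, f) {
>>>         /* IRQ-safe with respect to the xa_lock... */
>>>         xa_erase_irq(&job->dependencies, index);
>>>
>>>         /* ...but still running from the f1 callback, i.e. with f1.lock
>>>          * held by dma_fence_signal(), so re-arming on f2 deadlocks
>>>          * whenever f2.lock is the same spinlock.
>>>          */
>>>         if (f && !dma_fence_add_callback(f, &job->finish_cb,
>>>                                          drm_sched_entity_kill_jobs_cb))
>>>             return;
>>>
>>>         dma_fence_put(f);
>>>     }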
>>
>> Is it possible to hit this case?
>>
>> Same lock means same execution timeline
> 
> Nope, exactly that is incorrect. It's completely up to the implementation what it uses this lock for.

Yes, sorry, I got confused for a moment. The lock can be per hw
scheduler, while the execution timeline is per entity.
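
As an illustrative sketch only (made-up names, nothing taken from an
actual driver), nothing stops a driver from doing something like:

    static const char *example_name(struct dma_fence *f)
    {
        return "example";
    }

    static const struct dma_fence_ops example_fence_ops = {
        .get_driver_name   = example_name,
        .get_timeline_name = example_name,
    };

    /* One lock shared by the whole hw scheduler... */
    static DEFINE_SPINLOCK(hw_sched_lock);

    static void example_init_fences(struct dma_fence *f1, struct dma_fence *f2)
    {
        /* ...but a separate timeline (context) per entity. */
        u64 ctx_a = dma_fence_context_alloc(1);
        u64 ctx_b = dma_fence_context_alloc(1);

        dma_fence_init(f1, &example_fence_ops, &hw_sched_lock, ctx_a, 1);
        dma_fence_init(f2, &example_fence_ops, &hw_sched_lock, ctx_b, 1);
    }

Here f1->lock == f2->lock while f1->context != f2->context, so "same
lock" does not imply "same execution timeline".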

Regards,

Tvrtko

> 
>> , which should mean the dependency should have been squashed in drm_sched_job_add_dependency(), no?
> 
> This makes it less likely, but not impossible to trigger.
> 
> Regards,
> Christian.
> 
>>
>> Or would sharing the lock but not sharing the entity->fence_context be considered legal? It would be surprising at least.
>>
>> Also, would anyone have time to add a kunit test? ;)
>>
>> Regards,
>>
>> Tvrtko
>>
>>> To fix both issues, the code iterating over the dependencies and re-arming
>>> them is moved out of the fence callback into drm_sched_entity_kill_jobs_work.
>>>
>>> Link: https://gitlab.freedesktop.org/mesa/mesa/-/issues/13908
>>> Reported-by: Mikhail Gavrilov <mikhail.v.gavrilov@...il.com>
>>> Suggested-by: Christian König <christian.koenig@....com>
>>> Reviewed-by: Christian König <christian.koenig@....com>
>>> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@....com>
>>> ---
>>>  drivers/gpu/drm/scheduler/sched_entity.c | 34 +++++++++++++-----------
>>>  1 file changed, 19 insertions(+), 15 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
>>> index c8e949f4a568..fe174a4857be 100644
>>> --- a/drivers/gpu/drm/scheduler/sched_entity.c
>>> +++ b/drivers/gpu/drm/scheduler/sched_entity.c
>>> @@ -173,26 +173,15 @@ int drm_sched_entity_error(struct drm_sched_entity *entity)
>>>  }
>>>  EXPORT_SYMBOL(drm_sched_entity_error);
>>>  
>>> +static void drm_sched_entity_kill_jobs_cb(struct dma_fence *f,
>>> +                                          struct dma_fence_cb *cb);
>>> +
>>>  static void drm_sched_entity_kill_jobs_work(struct work_struct *wrk)
>>>  {
>>>      struct drm_sched_job *job = container_of(wrk, typeof(*job), work);
>>> -
>>> -    drm_sched_fence_scheduled(job->s_fence, NULL);
>>> -    drm_sched_fence_finished(job->s_fence, -ESRCH);
>>> -    WARN_ON(job->s_fence->parent);
>>> -    job->sched->ops->free_job(job);
>>> -}
>>> -
>>> -/* Signal the scheduler finished fence when the entity in question is killed. */
>>> -static void drm_sched_entity_kill_jobs_cb(struct dma_fence *f,
>>> -                                          struct dma_fence_cb *cb)
>>> -{
>>> -    struct drm_sched_job *job = container_of(cb, struct drm_sched_job,
>>> -                                             finish_cb);
>>> +    struct dma_fence *f;
>>>      unsigned long index;
>>>  
>>> -    dma_fence_put(f);
>>> -
>>>      /* Wait for all dependencies to avoid data corruptions */
>>>      xa_for_each(&job->dependencies, index, f) {
>>>          struct drm_sched_fence *s_fence = to_drm_sched_fence(f);
>>> @@ -220,6 +209,21 @@ static void drm_sched_entity_kill_jobs_cb(struct dma_fence *f,
>>>          dma_fence_put(f);
>>>      }
>>>  
>>> +    drm_sched_fence_scheduled(job->s_fence, NULL);
>>> +    drm_sched_fence_finished(job->s_fence, -ESRCH);
>>> +    WARN_ON(job->s_fence->parent);
>>> +    job->sched->ops->free_job(job);
>>> +}
>>> +
>>> +/* Signal the scheduler finished fence when the entity in question is killed. */
>>> +static void drm_sched_entity_kill_jobs_cb(struct dma_fence *f,
>>> +                                          struct dma_fence_cb *cb)
>>> +{
>>> +    struct drm_sched_job *job = container_of(cb, struct drm_sched_job,
>>> +                                             finish_cb);
>>> +
>>> +    dma_fence_put(f);
>>> +
>>>      INIT_WORK(&job->work, drm_sched_entity_kill_jobs_work);
>>>      schedule_work(&job->work);
>>>  }
>>
> 

