Message-ID: <f750ab62-7deb-21a1-753e-1ee838386265@amd.com>
Date: Fri, 12 Aug 2022 13:12:59 +0200
From: Christian König <christian.koenig@....com>
To: Andrey Grodzovsky <andrey.grodzovsky@....com>,
Andrey Strachuk <strochuk@...ras.ru>,
Alex Deucher <alexander.deucher@....com>
Cc: "Pan, Xinhui" <Xinhui.Pan@....com>,
David Airlie <airlied@...ux.ie>,
Daniel Vetter <daniel@...ll.ch>, Emma Anholt <emma@...olt.net>,
Melissa Wen <mwen@...lia.com>,
Guchun Chen <guchun.chen@....com>,
Surbhi Kakarya <surbhi.kakarya@....com>,
Jack Zhang <Jack.Zhang1@....com>,
Hawking Zhang <Hawking.Zhang@....com>,
Felix Kuehling <Felix.Kuehling@....com>,
amd-gfx@...ts.freedesktop.org, dri-devel@...ts.freedesktop.org,
linux-kernel@...r.kernel.org, ldv-project@...uxtesting.org
Subject: Re: [PATCH] drm/amdgpu: remove useless condition in
amdgpu_job_stop_all_jobs_on_sched()
@Alex was that one already picked up?
Am 25.07.22 um 18:40 schrieb Andrey Grodzovsky:
> Reviewed-by: Andrey Grodzovsky <andrey.grodzovsky@....com>
>
> Andrey
>
> On 2022-07-19 06:39, Andrey Strachuk wrote:
>> Local variable 'rq' is initialized with the address
>> of a field of struct drm_gpu_scheduler, so it does not
>> make sense to compare 'rq' with NULL.
>>
>> Found by Linux Verification Center (linuxtesting.org) with SVACE.
>>
>> Signed-off-by: Andrey Strachuk <strochuk@...ras.ru>
>> Fixes: 7c6e68c777f1 ("drm/amdgpu: Avoid HW GPU reset for RAS.")
>> ---
>> drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 4 ----
>> 1 file changed, 4 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>> index 67f66f2f1809..600401f2a98f 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>> @@ -285,10 +285,6 @@ void amdgpu_job_stop_all_jobs_on_sched(struct drm_gpu_scheduler *sched)
>>  	/* Signal all jobs not yet scheduled */
>>  	for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
>>  		struct drm_sched_rq *rq = &sched->sched_rq[i];
>> -
>> -		if (!rq)
>> -			continue;
>> -
>>  		spin_lock(&rq->lock);
>>  		list_for_each_entry(s_entity, &rq->entities, list) {
>>  			while ((s_job = to_drm_sched_job(spsc_queue_pop(&s_entity->job_queue)))) {