Message-ID: <fafb6cbb-67d5-f55f-5fb3-ea159683904e@nfschina.com>
Date: Thu, 14 Nov 2024 16:13:19 +0800
From: Su Hui <suhui@...china.com>
To: "Vivekanandan, Balasubramani" <balasubramani.vivekanandan@...el.com>,
lucas.demarchi@...el.com, thomas.hellstrom@...ux.intel.com,
rodrigo.vivi@...el.com, maarten.lankhorst@...ux.intel.com,
mripard@...nel.org, tzimmermann@...e.de, airlied@...il.com, simona@...ll.ch,
nathan@...nel.org, ndesaulniers@...gle.com, morbo@...gle.com,
justinstitt@...gle.com
Cc: matthew.brost@...el.com, francois.dugast@...el.com,
intel-xe@...ts.freedesktop.org, dri-devel@...ts.freedesktop.org,
linux-kernel@...r.kernel.org, llvm@...ts.linux.dev,
kernel-janitors@...r.kernel.org
Subject: Re: [PATCH] drm/xe/hw_engine_group: Fix bad free in
xe_hw_engine_setup_groups()
On 2024/11/14 15:45, Vivekanandan, Balasubramani wrote:
> On 14.11.2024 14:39, Su Hui wrote:
>>
>> diff --git a/drivers/gpu/drm/xe/xe_hw_engine_group.c b/drivers/gpu/drm/xe/xe_hw_engine_group.c
>> index 82750520a90a..ee2cb32817fa 100644
>> --- a/drivers/gpu/drm/xe/xe_hw_engine_group.c
>> +++ b/drivers/gpu/drm/xe/xe_hw_engine_group.c
>> @@ -51,7 +51,7 @@ static struct xe_hw_engine_group *
>> hw_engine_group_alloc(struct xe_device *xe)
>> {
>> struct xe_hw_engine_group *group;
>> - int err;
>> + int err = -ENOMEM;
>>
>> group = kzalloc(sizeof(*group), GFP_KERNEL);
>> if (!group)
>> @@ -59,7 +59,7 @@ hw_engine_group_alloc(struct xe_device *xe)
>>
>> group->resume_wq = alloc_workqueue("xe-resume-lr-jobs-wq", 0, 0);
>> if (!group->resume_wq)
>> - return ERR_PTR(-ENOMEM);
>> + goto free_group;
> kfree can be called directly from here, followed by a return, instead of
> using a goto.
Agreed.
>>
>> init_rwsem(&group->mode_sem);
>> INIT_WORK(&group->resume_work, hw_engine_group_resume_lr_jobs_func);
>> @@ -67,9 +67,15 @@ hw_engine_group_alloc(struct xe_device *xe)
>>
>> err = drmm_add_action_or_reset(&xe->drm, hw_engine_group_free, group);
>> if (err)
>> - return ERR_PTR(err);
>> + goto destroy_wq;
> There is no need to clear the resources on failure, because
> drmm_add_action_or_reset takes care of freeing the resources on
> failure.
Oh, my mistake, I missed that this function already handles the cleanup.
>>
>>
>> /**
>> @@ -87,21 +93,19 @@ int xe_hw_engine_setup_groups(struct xe_gt *gt)
>> int err;
>>
>> group_rcs_ccs = hw_engine_group_alloc(xe);
>> - if (IS_ERR(group_rcs_ccs)) {
>> - err = PTR_ERR(group_rcs_ccs);
>> - goto err_group_rcs_ccs;
>> - }
>> + if (IS_ERR(group_rcs_ccs))
>> + return PTR_ERR(group_rcs_ccs);
>>
>> group_bcs = hw_engine_group_alloc(xe);
>> if (IS_ERR(group_bcs)) {
>> err = PTR_ERR(group_bcs);
>> - goto err_group_bcs;
>> + goto free_group_rcs_ccs;
> No need to free the memory here and in the following lines, as it is
> managed through the drmm_add_action_or_reset call in
> hw_engine_group_alloc.
> We can simply return the error code.
Got it.
>
>>
>> -err_group_vcs_vecs:
>> - kfree(group_vcs_vecs);
>> -err_group_bcs:
>> +free_group_bcs:
>> + destroy_workqueue(group_bcs->resume_wq);
>> kfree(group_bcs);
>> -err_group_rcs_ccs:
>> +free_group_rcs_ccs:
>> + destroy_workqueue(group_rcs_ccs->resume_wq);
>> kfree(group_rcs_ccs);
>> -
> All these kfree statements are not required.
Agreed as well, thanks for your review.
I will send a v2 patch removing these kfree calls if there are no further
suggestions.
Regards,
Su Hui