Message-ID: <163864977875.3153335.18099399866051099554@Monstersaurus>
Date: Sat, 04 Dec 2021 20:29:38 +0000
From: Kieran Bingham <kieran.bingham@...asonboard.com>
To: Ameer Hamza <amhamza.mgc@...il.com>, agross@...nel.org,
bjorn.andersson@...aro.org, mchehab@...nel.org,
stanimir.varbanov@...aro.org
Cc: linux-media@...r.kernel.org, linux-arm-msm@...r.kernel.org,
linux-kernel@...r.kernel.org, amhamza.mgc@...il.com
Subject: Re: [PATCH] media: venus: vdec: fixed possible memory leak issue
Hi Ameer,

Quoting Ameer Hamza (2021-12-04 12:11:23)
> Fixed coverity warning by freeing the allocated memory before return
>
> Addresses-Coverity: 1494120 ("Resource leak")
>
> Signed-off-by: Ameer Hamza <amhamza.mgc@...il.com>
> ---
> drivers/media/platform/qcom/venus/helpers.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/drivers/media/platform/qcom/venus/helpers.c b/drivers/media/platform/qcom/venus/helpers.c
> index 84c3a511ec31..344a42853898 100644
> --- a/drivers/media/platform/qcom/venus/helpers.c
> +++ b/drivers/media/platform/qcom/venus/helpers.c
> @@ -197,6 +197,7 @@ int venus_helper_alloc_dpb_bufs(struct venus_inst *inst)
>
> id = ida_alloc_min(&inst->dpb_ids, VB2_MAX_FRAME, GFP_KERNEL);
> if (id < 0) {
> + kfree(buf);
> ret = id;
> goto fail;
Indeed, this is definitely a leak here.

Normally I think resources would be cleaned up in the fail path in a
situation like this. That would then make sure that all paths out of
this loop free on error.

If buf is NULL, kfree(NULL) is a valid no-op, so it will not adversely
affect the kzalloc() failure path.

Given that, I suspect a cleaner fix is to move the kfree() from the
" if (!buf->va) { " branch to immediately after the fail label, so that
both dma_alloc_attrs() and ida_alloc_min() failures are cleaned up in
the same way by the same error path.

That way, if anyone later adds another failure point in this loop, it
won't get missed and will also clean up correctly.
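
Something along these lines is what I have in mind. Treat it as a rough
sketch from memory of the surrounding code rather than a verbatim diff,
so the exact names and the contents of the existing fail path may
differ:

	for (i = 0; i < count; i++) {
		buf = kzalloc(sizeof(*buf), GFP_KERNEL);
		if (!buf) {
			ret = -ENOMEM;
			goto fail;
		}

		/* buf->size etc. set up as before */
		buf->va = dma_alloc_attrs(dev, buf->size, &buf->da,
					  GFP_KERNEL, buf->attrs);
		if (!buf->va) {
			/* No per-site kfree(buf) here any more ... */
			ret = -ENOMEM;
			goto fail;
		}

		id = ida_alloc_min(&inst->dpb_ids, VB2_MAX_FRAME, GFP_KERNEL);
		if (id < 0) {
			/* ... and none needed here either. */
			ret = id;
			goto fail;
		}

		/* ... store id, add buf to inst->dpb_bufs, as before ... */
	}

	return 0;

fail:
	/*
	 * buf is NULL if kzalloc() failed, and kfree(NULL) is a no-op,
	 * so a single kfree() here covers every failure in the loop.
	 */
	kfree(buf);
	/* ... existing cleanup of the buffers already allocated ... */
	return ret;

The one thing to double-check is that nothing before buf is first
assigned can jump to fail; if it can, buf would want initialising to
NULL at its declaration.
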
Regards
--
Kieran
> }
> --
> 2.25.1
>