Message-ID: <7edd7feb-80d2-de8b-44cd-84ee63201ab5@linaro.org>
Date:   Sat, 7 Mar 2020 19:24:53 +0200
From:   Stanimir Varbanov <stanimir.varbanov@...aro.org>
To:     Jeffrey Kardatzke <jkardatzke@...gle.com>
Cc:     linux-media@...r.kernel.org, Andy Gross <agross@...nel.org>,
        Mauro Carvalho Chehab <mchehab@...nel.org>,
        linux-arm-msm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] media: venus: fix use after free for registeredbufs

Hi Jeff,

On 3/6/20 10:10 PM, Jeffrey Kardatzke wrote:
> On Fri, Mar 6, 2020 at 1:03 AM Stanimir Varbanov
> <stanimir.varbanov@...aro.org> wrote:
>>
>> Hi Jeff,
>>
>> Thanks for the patch!
>>
>> On 3/6/20 2:23 AM, Jeffrey Kardatzke wrote:
>>> In dynamic bufmode we do not manage the buffers in the registeredbufs
>>> list, so do not add them there when they are initialized. Adding them
>>> there was causing a use-after-free of the buffer's list_head struct
>>> when new buffers were allocated after existing buffers were freed.
>>
>> Is this fixing a real issue? How did you come across it?
>>
> In our code we were initially allocating 64x64 capture queue buffers,
> then got a resolution change event for the actual video resolution of
> 320x256, so we freed all the existing capture buffers and allocated
> new ones. I noticed memory poisoning warnings in dmesg and tracked
> them down, which led to the patch here. This is only a problem when
> the capture queue has its buffers freed and reallocated (which would
> happen during any resolution change).
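
If I follow, the dangerous pattern here is a stale list node left on
inst->registeredbufs after its containing buffer has been freed. A
standalone illustration of that pattern (a minimal list type and made-up
names, nothing taken from the driver code):

#include <stdio.h>
#include <stdlib.h>

/* Minimal stand-ins for the kernel's list_head and buffer structs. */
struct node { struct node *prev, *next; };

struct fake_buf {
	int id;
	struct node reg_list;		/* analogous to buf->reg_list */
};

static struct node registeredbufs = { &registeredbufs, &registeredbufs };

static void list_add_tail(struct node *n, struct node *head)
{
	n->prev = head->prev;
	n->next = head;
	head->prev->next = n;
	head->prev = n;
}

int main(void)
{
	struct fake_buf *b = malloc(sizeof(*b));

	b->id = 0;
	list_add_tail(&b->reg_list, &registeredbufs);

	/* The buffer is freed (e.g. on queue reallocation) without being
	 * removed from the list ... */
	free(b);

	/* ... so the next list walk touches freed memory; ASan/KASAN or
	 * slab poisoning reports this as a use-after-free. */
	for (struct node *p = registeredbufs.next; p != &registeredbufs;
	     p = p->next)
		printf("stale node %p\n", (void *)p);

	return 0;
}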

Do you call STREAMOFF(CAPTURE)?

Better yet, could you share the v4l2 debug logs:

echo 0x3f > /sys/class/video4linux/videoX/dev_debug
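
For context, the capture-side reconfiguration I would expect on a
resolution change looks roughly like this (just a sketch of the usual
v4l2 sequence from userspace, not your code; the fd, buffer count and
error handling are illustrative, and subscription to
V4L2_EVENT_SOURCE_CHANGE is assumed to have happened earlier):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* fd is an already-open decoder node, e.g. /dev/videoX. */
static void handle_source_change(int fd)
{
	struct v4l2_event ev;
	enum v4l2_buf_type cap = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
	struct v4l2_requestbuffers reqbufs;
	struct v4l2_format fmt;

	/* Dequeue the source-change event raised by the decoder. */
	memset(&ev, 0, sizeof(ev));
	ioctl(fd, VIDIOC_DQEVENT, &ev);
	if (ev.type != V4L2_EVENT_SOURCE_CHANGE)
		return;

	/* Stop the capture queue and release the old buffers. */
	ioctl(fd, VIDIOC_STREAMOFF, &cap);

	memset(&reqbufs, 0, sizeof(reqbufs));
	reqbufs.type = cap;
	reqbufs.memory = V4L2_MEMORY_MMAP;
	reqbufs.count = 0;
	ioctl(fd, VIDIOC_REQBUFS, &reqbufs);

	/* Read back the new coded resolution ... */
	memset(&fmt, 0, sizeof(fmt));
	fmt.type = cap;
	ioctl(fd, VIDIOC_G_FMT, &fmt);

	/* ... allocate capture buffers for it and restart streaming. */
	reqbufs.count = 4;		/* illustrative count */
	ioctl(fd, VIDIOC_REQBUFS, &reqbufs);
	/* (mmap and QBUF the new buffers here) */
	ioctl(fd, VIDIOC_STREAMON, &cap);
}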

> 
>>>
>>> Signed-off-by: Jeffrey Kardatzke <jkardatzke@...gle.com>
>>> ---
>>>  drivers/media/platform/qcom/venus/helpers.c | 4 +++-
>>>  1 file changed, 3 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/media/platform/qcom/venus/helpers.c b/drivers/media/platform/qcom/venus/helpers.c
>>> index bcc603804041..688a3593b49b 100644
>>> --- a/drivers/media/platform/qcom/venus/helpers.c
>>> +++ b/drivers/media/platform/qcom/venus/helpers.c
>>> @@ -1054,8 +1054,10 @@ int venus_helper_vb2_buf_init(struct vb2_buffer *vb)
>>>       buf->size = vb2_plane_size(vb, 0);
>>>       buf->dma_addr = sg_dma_address(sgt->sgl);
>>>
>>> -     if (vb->type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE)
>>> +     if (vb->type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE &&
>>> +         !is_dynamic_bufmode(inst)) {
>>
>> If you add !is_dynamic_bufmode here, we will lose the reference-frames
>> mechanism (see venus_helper_release_buf_ref()), which is not good.
> 
> In my testing, I never see venus_helper_release_buf_ref called.  I
> think something is wrong with reference frame management. I'm also

The mechanism is valid for Venus v1 and v3; it might be that you tried
on v4, where we have a set of DPB buffers and use them for reference
frames.

> seeing failures in my tests that very much look like reference frames
> being dropped in the decoder (with or without my patch), but they are
> not consistent.
> 
>>
>> Thus, I wonder (depending on when you observe the use-after-free issue)
>> whether this is the correct resolution of the problem.
> 
> I agree this is likely not the right solution to the problem; I think
> something deeper is wrong, because I never see events coming back from
> HFI with the release buffer reference event.
>>
>>>               list_add_tail(&buf->reg_list, &inst->registeredbufs);
>>> +     }
>>>
>>>       return 0;
>>>  }
>>>
>>
>> --
>> regards,
>> Stan

-- 
regards,
Stan
