Message-ID: <CAJedcCyr2DEux=bSU+4Ksgz69ouEHebhmcmoCa2ysYB1YiOaNQ@mail.gmail.com>
Date:   Mon, 3 Apr 2023 11:44:24 +0800
From:   Zheng Hacker <hackerzheng666@...il.com>
To:     Ezequiel Garcia <ezequiel@...guardiasur.com.ar>
Cc:     Zheng Wang <zyytlz.wz@....com>, p.zabel@...gutronix.de,
        mchehab@...nel.org, linux-media@...r.kernel.org,
        linux-rockchip@...ts.infradead.org, linux-kernel@...r.kernel.org,
        1395428693sheep@...il.com, alex000young@...il.com,
        hverkuil@...all.nl
Subject: Re: [PATCH v3] media: hantro: fix use after free bug in hantro_remove
 due to race condition

Ezequiel Garcia <ezequiel@...guardiasur.com.ar> wrote on Fri, Mar 31, 2023, at 10:38:
>
> Hi Zheng,
>
> On Mon, Mar 13, 2023 at 12:42 PM Zheng Wang <zyytlz.wz@....com> wrote:
> >
> > In hantro_probe, vpu->watchdog_work is bound to
> > hantro_watchdog. hantro_end_prepare_run may then
> > be called to schedule the work.
> >
> > If we close the file or remove the module, which
> > triggers hantro_release or hantro_remove for cleanup,
>
> It's not possible to close the file or remove the module while a watchdog is
> scheduled.
>
> That's because the watchdog is active only during a mem2mem job,
> and the file won't be closed until the job is done.
>
> v4l2_m2m_ctx_release calls v4l2_m2m_cancel_job,
> which waits until the job is done.
>
> If you can confirm it's possible to remove or close the file
> while a job is running, that would be a driver bug.
>
> Thanks for the patch, but it's not needed.
>
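
For reference, the wait described above lives in the v4l2-mem2mem core. A
simplified sketch of v4l2_m2m_cancel_job(), paraphrased from
drivers/media/v4l2-core/v4l2-mem2mem.c with debug prints and some details
trimmed (so not the exact upstream source), shows why a file close cannot
outrun a running job:

    /* Abort the current job and wait for it to finish. Called from
     * v4l2_m2m_ctx_release() before the m2m context is freed. */
    static void v4l2_m2m_cancel_job(struct v4l2_m2m_ctx *m2m_ctx)
    {
            struct v4l2_m2m_dev *m2m_dev = m2m_ctx->m2m_dev;
            unsigned long flags;

            spin_lock_irqsave(&m2m_dev->job_spinlock, flags);
            m2m_ctx->job_flags |= TRANS_ABORT;
            if (m2m_ctx->job_flags & TRANS_RUNNING) {
                    spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags);
                    if (m2m_dev->m2m_ops->job_abort)
                            m2m_dev->m2m_ops->job_abort(m2m_ctx->priv);
                    /* Sleep until the driver calls v4l2_m2m_job_finish(),
                     * which clears TRANS_RUNNING and wakes this queue. */
                    wait_event(m2m_ctx->finished,
                               !(m2m_ctx->job_flags & TRANS_RUNNING));
            } else if (m2m_ctx->job_flags & TRANS_QUEUED) {
                    /* Not started yet: just drop it from the job queue. */
                    list_del(&m2m_ctx->queue);
                    m2m_ctx->job_flags &= ~(TRANS_QUEUED | TRANS_RUNNING);
                    spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags);
            } else {
                    spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags);
            }
    }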

Hi Ezequiel,

Thanks for your detailed analysis. Got it :)

Best regards,
Zheng

> Regards,
> Ezequiel
>
> > there may still be unfinished work pending. A possible
> > sequence is shown below; it causes a typical use-after-free
> > (UAF) bug.
> >
> > The same race exists in hantro_release, where ctx is
> > used after it has been freed.
> >
> > Fix it by canceling the work before the cleanup in
> > hantro_release and hantro_remove.
> >
> > CPU0                    | CPU1
> >                         | hantro_watchdog
> > hantro_remove           |
> >   v4l2_m2m_release      |
> >     kfree(m2m_dev);     |
> >                         |   v4l2_m2m_get_curr_priv
> >                         |     m2m_dev->curr_ctx // use
> >
> > Signed-off-by: Zheng Wang <zyytlz.wz@....com>
> > ---
> > v3:
> > - use cancel_delayed_work_sync instead of cancel_delayed_work and add it
> > to hantro_release, as suggested by Hans Verkuil
> >
> > v2:
> > - move the cancel-work-related code to hantro_remove, as suggested by Hans Verkuil
> > ---
> >  drivers/media/platform/verisilicon/hantro_drv.c | 2 ++
> >  1 file changed, 2 insertions(+)
> >
> > diff --git a/drivers/media/platform/verisilicon/hantro_drv.c b/drivers/media/platform/verisilicon/hantro_drv.c
> > index b0aeedae7b65..86a4c0fa8c7d 100644
> > --- a/drivers/media/platform/verisilicon/hantro_drv.c
> > +++ b/drivers/media/platform/verisilicon/hantro_drv.c
> > @@ -597,6 +597,7 @@ static int hantro_release(struct file *filp)
> >         struct hantro_ctx *ctx =
> >                 container_of(filp->private_data, struct hantro_ctx, fh);
> >
> > +       cancel_delayed_work_sync(&ctx->dev->watchdog_work);
> >         /*
> >          * No need for extra locking because this was the last reference
> >          * to this file.
> > @@ -1099,6 +1100,7 @@ static int hantro_remove(struct platform_device *pdev)
> >
> >         v4l2_info(&vpu->v4l2_dev, "Removing %s\n", pdev->name);
> >
> > +       cancel_delayed_work_sync(&vpu->watchdog_work);
> >         media_device_unregister(&vpu->mdev);
> >         hantro_remove_dec_func(vpu);
> >         hantro_remove_enc_func(vpu);
> > --
> > 2.25.1
> >
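
The idiom the patch applies is the standard one for delayed work: anything
the work function can dereference must stay alive until the work has been
canceled synchronously. A minimal, self-contained sketch of that idiom
(hypothetical foo_* driver for illustration, not the hantro code itself):

    #include <linux/workqueue.h>
    #include <linux/slab.h>

    /* Hypothetical device with a watchdog modeled on the hantro one. */
    struct foo_dev {
            struct delayed_work watchdog_work;
            void *state;    /* dereferenced by the work function */
    };

    static void foo_watchdog(struct work_struct *work)
    {
            struct foo_dev *foo = container_of(to_delayed_work(work),
                                               struct foo_dev,
                                               watchdog_work);

            /* Would be a use-after-free if foo_remove() already ran. */
            (void)foo->state;
    }

    static void foo_remove(struct foo_dev *foo)
    {
            /* Waits for a running foo_watchdog() to return and makes
             * sure a queued instance never starts, so the frees below
             * cannot race with the work function. */
            cancel_delayed_work_sync(&foo->watchdog_work);
            kfree(foo->state);
            kfree(foo);
    }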
