Message-ID: <20241204063158.GG886051@google.com>
Date: Wed, 4 Dec 2024 15:31:58 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: Stanimir Varbanov <stanimir.k.varbanov@...il.com>,
	Bryan O'Donoghue <bryan.odonoghue@...aro.org>
Cc: Vikash Garodia <quic_vgarodia@...cinc.com>,
	Dikshita Agarwal <quic_dikshita@...cinc.com>,
	linux-media@...r.kernel.org, linux-kernel@...r.kernel.org,
	Sergey Senozhatsky <senozhatsky@...omium.org>
Subject: Re: [PATCHv6 2/3] media: venus: sync with threaded IRQ during inst
 destruction

On (24/10/26 01:56), Sergey Senozhatsky wrote:
> BUG: KASAN: slab-use-after-free in vb2_queue_error+0x80/0x90
> Call trace:
> dump_backtrace+0x1c4/0x1f8
> show_stack+0x38/0x60
> dump_stack_lvl+0x168/0x1f0
> print_report+0x170/0x4c8
> kasan_report+0x94/0xd0
> __asan_report_load2_noabort+0x20/0x30
> vb2_queue_error+0x80/0x90
> venus_helper_vb2_queue_error+0x54/0x78
> venc_event_notify+0xec/0x158
> hfi_event_notify+0x878/0xd20
> hfi_process_msg_packet+0x27c/0x4e0
> venus_isr_thread+0x258/0x6e8
> hfi_isr_thread+0x70/0x90
> venus_isr_thread+0x34/0x50
> irq_thread_fn+0x88/0x130
> irq_thread+0x160/0x2c0
> kthread+0x294/0x328
> ret_from_fork+0x10/0x20
> 
> Allocated by task 20291:
> kasan_set_track+0x4c/0x80
> kasan_save_alloc_info+0x28/0x38
> __kasan_kmalloc+0x84/0xa0
> kmalloc_trace+0x7c/0x98
> v4l2_m2m_ctx_init+0x74/0x280
> venc_open+0x444/0x6d0
> v4l2_open+0x19c/0x2a0
> chrdev_open+0x374/0x3f0
> do_dentry_open+0x710/0x10a8
> vfs_open+0x88/0xa8
> path_openat+0x1e6c/0x2700
> do_filp_open+0x1a4/0x2e0
> do_sys_openat2+0xe8/0x508
> do_sys_open+0x15c/0x1a0
> __arm64_sys_openat+0xa8/0xc8
> invoke_syscall+0xdc/0x270
> el0_svc_common+0x1ec/0x250
> do_el0_svc+0x54/0x70
> el0_svc+0x50/0xe8
> el0t_64_sync_handler+0x48/0x120
> el0t_64_sync+0x1a8/0x1b0
> 
> Freed by task 20291:
>  kasan_set_track+0x4c/0x80
>  kasan_save_free_info+0x3c/0x60
>  ____kasan_slab_free+0x124/0x1a0
>  __kasan_slab_free+0x18/0x28
>  __kmem_cache_free+0x134/0x300
>  kfree+0xc8/0x1a8
>  v4l2_m2m_ctx_release+0x44/0x60
>  venc_close+0x78/0x130 [venus_enc]
>  v4l2_release+0x20c/0x2f8
>  __fput+0x328/0x7f0
>  ____fput+0x2c/0x48
>  task_work_run+0x1e0/0x280
>  get_signal+0xfb8/0x1190
>  do_notify_resume+0x34c/0x16a8
>  el0_svc+0x9c/0xe8
>  el0t_64_sync_handler+0x48/0x120
>  el0t_64_sync+0x1a8/0x1b0

[..]

> @@ -1750,10 +1750,20 @@ static int vdec_close(struct file *file)
>  	vdec_pm_get(inst);
>  
>  	cancel_work_sync(&inst->delayed_process_work);
> +	/*
> +	 * First, remove the inst from the ->instances list, so that
> +	 * to_instance() will return NULL.
> +	 */
> +	hfi_session_destroy(inst);
> +	/*
> +	 * Second, make sure we don't have IRQ/IRQ-thread currently running
> +	 * or pending execution, which would race with the inst destruction.
> +	 */
> +	synchronize_irq(inst->core->irq);
> +
>  	v4l2_m2m_ctx_release(inst->m2m_ctx);
>  	v4l2_m2m_release(inst->m2m_dev);
>  	ida_destroy(&inst->dpb_ids);
> -	hfi_session_destroy(inst);
>  	v4l2_fh_del(&inst->fh);
>  	v4l2_fh_exit(&inst->fh);
>  	vdec_ctrl_deinit(inst);

Sorry about the bad news: I got a regression report this morning and the
reporter points at this patch as the culprit.  It seems that under some
circumstances there are still multiple pending requests at close() time,
so performing hfi_session_destroy() as the first step (in order to close
the race with instance destruction) makes it impossible to finish some of
those pending requests ("no valid instance", which was the whole point of
the reordering).

v4l2_m2m_ctx_release() expects to be called before hfi_session_destroy(),
but that leaves us with a half-destroyed instance on the instances list.
I'm not sure what to do about it.  Would it be safe (?) to call
synchronize_irq() at the very start and then keep the old destruction
order (which looks a little unsafe), like something below *?  Or is there
something missing in the driver and a way to make sure we don't have any
pending jobs/requests at close()?


* something below
---
        /*
         * Make sure we don't have IRQ/IRQ-thread currently running
         * or pending execution, which would race with the inst destruction.
         */
        synchronize_irq(inst->core->irq);
        /*
         * Now wait for jobs to be dequeued. Note this will free m2m ctx.
         */
        v4l2_m2m_ctx_release(inst->m2m_ctx);
        v4l2_m2m_release(inst->m2m_dev);
        hfi_session_destroy(inst);
        v4l2_fh_del(&inst->fh);
        v4l2_fh_exit(&inst->fh);
        v4l2_ctrl_handler_free(&inst->ctrl_handler);

        mutex_destroy(&inst->lock);
        mutex_destroy(&inst->ctx_q_lock);
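
And for reference, the same idea applied to the decoder side, reusing the
calls from the vdec_close() hunk quoted above (just a sketch, assuming the
rest of vdec_close() stays as in the quoted diff):

        cancel_work_sync(&inst->delayed_process_work);
        /*
         * Quiesce the IRQ/IRQ-thread first.  Note that this alone does
         * not prevent new interrupts from firing afterwards, which is
         * why the old destruction order still looks a little unsafe.
         */
        synchronize_irq(inst->core->irq);

        v4l2_m2m_ctx_release(inst->m2m_ctx);
        v4l2_m2m_release(inst->m2m_dev);
        ida_destroy(&inst->dpb_ids);
        hfi_session_destroy(inst);
        v4l2_fh_del(&inst->fh);
        v4l2_fh_exit(&inst->fh);
        vdec_ctrl_deinit(inst);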
