Message-ID: <58419d62-3074-2e5a-8504-da1cdeb08280@redhat.com>
Date:   Fri, 18 May 2018 17:24:48 +0800
From:   Jason Wang <jasowang@...hat.com>
To:     DaeRyong Jeong <threeearcat@...il.com>, mst@...hat.com
Cc:     kvm@...r.kernel.org, virtualization@...ts.linux-foundation.org,
        netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
        byoungyoung@...due.edu, kt0755@...il.com, bammanag@...due.edu
Subject: Re: KASAN: use-after-free Read in vhost_chr_write_iter



On 2018/05/17 21:45, DaeRyong Jeong wrote:
> We report the crash: KASAN: use-after-free Read in vhost_chr_write_iter
>
> This crash has been found in v4.17-rc1 using RaceFuzzer (a modified
> version of Syzkaller), which we describe more at the end of this
> report. Our analysis shows that the race occurs when invoking two
> syscalls concurrently, write$vnet and ioctl$VHOST_RESET_OWNER.
>
>
> Analysis:
> We think the concurrent execution of vhost_process_iotlb_msg() and
> vhost_dev_cleanup() causes the crash.
> Both functions can run concurrently (please see the call sequence below),
> and possibly, there is a race on dev->iotlb.
> If the context switch occurs right after vhost_dev_cleanup() frees
> dev->iotlb, vhost_process_iotlb_msg() still sees the non-NULL value and
> keeps executing without returning -EFAULT. Consequently, a use-after-free
> occurs.
>
>
> Thread interleaving:
> CPU0 (vhost_process_iotlb_msg)				CPU1 (vhost_dev_cleanup)
> (In the case of both VHOST_IOTLB_UPDATE and
> VHOST_IOTLB_INVALIDATE)
> =====							=====
> 							vhost_umem_clean(dev->iotlb);
> if (!dev->iotlb) {
> 	        ret = -EFAULT;
> 		        break;
> }
> 							dev->iotlb = NULL;
>
>
> Call Sequence:
> CPU0
> =====
> vhost_net_chr_write_iter
> 	vhost_chr_write_iter
> 		vhost_process_iotlb_msg
>
> CPU1
> =====
> vhost_net_ioctl
> 	vhost_net_reset_owner
> 		vhost_dev_reset_owner
> 			vhost_dev_cleanup

Thanks a lot for the analysis.

This could be addressed by simply protecting it with the dev mutex.
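
Roughly something like the below (just a sketch, not the actual patch; it
assumes taking dev->mutex for the whole IOTLB message is fine and does not
conflict with the per-vq locks taken by vhost_dev_lock_vqs(); the real
UPDATE/INVALIDATE handling is left out and only the locking is shown):

static int vhost_process_iotlb_msg(struct vhost_dev *dev,
				   struct vhost_iotlb_msg *msg)
{
	int ret = 0;

	/*
	 * Serialize against VHOST_RESET_OWNER -> vhost_dev_cleanup(),
	 * which already runs under dev->mutex, so dev->iotlb can no
	 * longer be freed and cleared between the NULL check here and
	 * the later use.
	 */
	mutex_lock(&dev->mutex);
	vhost_dev_lock_vqs(dev);

	switch (msg->type) {
	case VHOST_IOTLB_UPDATE:
	case VHOST_IOTLB_INVALIDATE:
		if (!dev->iotlb) {
			ret = -EFAULT;
			break;
		}
		/* existing update/invalidate handling stays as-is */
		break;
	default:
		ret = -EINVAL;
		break;
	}

	vhost_dev_unlock_vqs(dev);
	mutex_unlock(&dev->mutex);
	return ret;
}

Since vhost_net_reset_owner() holds dev->mutex when it ends up in
vhost_dev_cleanup(), this should close the window shown in your
interleaving.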

Will post a patch.
