Message-ID: <51cf8274-162f-384b-0ff7-47fbf15412f1@redhat.com>
Date:   Tue, 22 May 2018 11:50:29 +0800
From:   Jason Wang <jasowang@...hat.com>
To:     "Michael S. Tsirkin" <mst@...hat.com>
Cc:     DaeRyong Jeong <threeearcat@...il.com>, kvm@...r.kernel.org,
        virtualization@...ts.linux-foundation.org, netdev@...r.kernel.org,
        linux-kernel@...r.kernel.org, byoungyoung@...due.edu,
        kt0755@...il.com, bammanag@...due.edu
Subject: Re: KASAN: use-after-free Read in vhost_chr_write_iter



On 2018-05-21 22:42, Michael S. Tsirkin wrote:
> On Mon, May 21, 2018 at 10:38:10AM +0800, Jason Wang wrote:
>> On 2018-05-18 17:24, Jason Wang wrote:
>>> On 2018-05-17 21:45, DaeRyong Jeong wrote:
>>>> We report the crash: KASAN: use-after-free Read in vhost_chr_write_iter
>>>>
>>>> This crash has been found in v4.17-rc1 using RaceFuzzer (a modified
>>>> version of Syzkaller), which we describe more at the end of this
>>>> report. Our analysis shows that the race occurs when invoking two
>>>> syscalls concurrently, write$vnet and ioctl$VHOST_RESET_OWNER.
>>>>
>>>>
>>>> Analysis:
>>>> We think the concurrent execution of vhost_process_iotlb_msg() and
>>>> vhost_dev_cleanup() causes the crash.
>>>> Both functions can run concurrently (please see the call sequence below),
>>>> and there is a possible race on dev->iotlb.
>>>> If the context switch occurs right after vhost_dev_cleanup() frees
>>>> dev->iotlb, vhost_process_iotlb_msg() still sees the non-NULL value and
>>>> keeps executing instead of returning -EFAULT. Consequently, a
>>>> use-after-free occurs.
>>>>
>>>>
>>>> Thread interleaving:
>>>> CPU0 (vhost_process_iotlb_msg)                CPU1 (vhost_dev_cleanup)
>>>> (In the case of both VHOST_IOTLB_UPDATE and
>>>> VHOST_IOTLB_INVALIDATE)
>>>> =====                                        =====
>>>>                                              vhost_umem_clean(dev->iotlb);
>>>> if (!dev->iotlb) {
>>>>         ret = -EFAULT;
>>>>         break;
>>>> }
>>>>                                              dev->iotlb = NULL;
>>>>
>>>>
>>>> Call Sequence:
>>>> CPU0
>>>> =====
>>>> vhost_net_chr_write_iter
>>>>      vhost_chr_write_iter
>>>>          vhost_process_iotlb_msg
>>>>
>>>> CPU1
>>>> =====
>>>> vhost_net_ioctl
>>>>      vhost_net_reset_owner
>>>>          vhost_dev_reset_owner
>>>>              vhost_dev_cleanup
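A minimal C sketch of the racy pattern described above (illustrative only;
function and field names follow the report, simplified rather than copied
from the v4.17-rc1 source):

static int vhost_process_iotlb_msg(struct vhost_dev *dev,
				   struct vhost_iotlb_msg *msg)
{
	int ret = 0;

	switch (msg->type) {
	case VHOST_IOTLB_UPDATE:
	case VHOST_IOTLB_INVALIDATE:
		/* Nothing serializes this check against vhost_dev_cleanup(),
		 * so dev->iotlb may be freed right after the test passes. */
		if (!dev->iotlb) {
			ret = -EFAULT;
			break;
		}
		/* ... goes on to dereference dev->iotlb: use-after-free if
		 * CPU1 has already run vhost_umem_clean() on it ... */
		break;
	default:
		ret = -EINVAL;
		break;
	}
	return ret;
}

void vhost_dev_cleanup(struct vhost_dev *dev)
{
	/* ... */
	vhost_umem_clean(dev->iotlb);	/* frees the IOTLB umem */
	dev->iotlb = NULL;		/* CPU0 may already hold the stale pointer */
	/* ... */
}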
>>> Thanks a lot for the analysis.
>>>
>>> This could be addressed by simply protecting it with the dev mutex.
>>>
>>> Will post a patch.
>>>
>> Could you please help test the attached patch? I've done some smoke
>> testing.
>>
>> Thanks
>> From 88328386f3f652e684ee33dc4cf63dcaed871aea Mon Sep 17 00:00:00 2001
>> From: Jason Wang<jasowang@...hat.com>
>> Date: Fri, 18 May 2018 17:33:27 +0800
>> Subject: [PATCH] vhost: synchronize IOTLB message with dev cleanup
>>
>> DaeRyong Jeong reports a race between vhost_dev_cleanup() and
>> vhost_process_iotlb_msg():
>>
>> Thread interleaving:
>> CPU0 (vhost_process_iotlb_msg)			CPU1 (vhost_dev_cleanup)
>> (In the case of both VHOST_IOTLB_UPDATE and
>> VHOST_IOTLB_INVALIDATE)
>> =====						=====
>> 						vhost_umem_clean(dev->iotlb);
>> if (!dev->iotlb) {
>> 	        ret = -EFAULT;
>> 	        break;
>> }
>> 						dev->iotlb = NULL;
>>
>> The reason is that we don't synchronize between them; fix this by protecting
>> vhost_process_iotlb_msg() with the dev mutex.
>>
>> Reported-by: DaeRyong Jeong<threeearcat@...il.com>
>> Fixes: 6b1e6cc7855b0 ("vhost: new device IOTLB API")
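A minimal sketch of the fix the commit message describes, assuming the
attached patch (not reproduced here) simply takes dev->mutex around the whole
message processing, which serializes it against the cleanup path:

static int vhost_process_iotlb_msg(struct vhost_dev *dev,
				   struct vhost_iotlb_msg *msg)
{
	int ret = 0;

	mutex_lock(&dev->mutex);	/* serialize against vhost_dev_cleanup() */
	switch (msg->type) {
	case VHOST_IOTLB_UPDATE:
	case VHOST_IOTLB_INVALIDATE:
		if (!dev->iotlb) {
			ret = -EFAULT;
			break;
		}
		/* ... dev->iotlb stays valid while dev->mutex is held ... */
		break;
	default:
		ret = -EINVAL;
		break;
	}
	mutex_unlock(&dev->mutex);
	return ret;
}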
> Long term we might want to move the iotlb into the vqs
> so that messages can be processed in parallel.
> Not sure how to do it yet.
>

Then we probably need to extend the IOTLB msg to have a queue idx. But I
think it would probably only help if we split tx/rx into separate processes.
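
A purely hypothetical sketch of what "extend the IOTLB msg to have a queue
idx" could look like; the added field name and placement are assumptions for
illustration, not an actual uapi proposal:

/* Existing fields as in the uapi vhost header, plus a hypothetical field. */
struct vhost_iotlb_msg {
	__u64 iova;
	__u64 size;
	__u64 uaddr;
	__u8  perm;
	__u8  type;
	__u16 vq_idx;	/* hypothetical: target virtqueue for a per-vq IOTLB */
};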

Thanks

