Message-ID: <20180521173645-mutt-send-email-mst@kernel.org>
Date: Mon, 21 May 2018 17:42:16 +0300
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Jason Wang <jasowang@...hat.com>
Cc: DaeRyong Jeong <threeearcat@...il.com>, kvm@...r.kernel.org,
virtualization@...ts.linux-foundation.org, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, byoungyoung@...due.edu,
kt0755@...il.com, bammanag@...due.edu
Subject: Re: KASAN: use-after-free Read in vhost_chr_write_iter
On Mon, May 21, 2018 at 10:38:10AM +0800, Jason Wang wrote:
>
>
> On 2018/05/18 17:24, Jason Wang wrote:
> >
> >
> > On 2018/05/17 21:45, DaeRyong Jeong wrote:
> > > We report the crash: KASAN: use-after-free Read in vhost_chr_write_iter
> > >
> > > This crash has been found in v4.17-rc1 using RaceFuzzer (a modified
> > > version of Syzkaller), which we describe more at the end of this
> > > report. Our analysis shows that the race occurs when invoking two
> > > syscalls concurrently, write$vnet and ioctl$VHOST_RESET_OWNER.
> > >
> > >
> > > Analysis:
> > > We think the concurrent execution of vhost_process_iotlb_msg() and
> > > vhost_dev_cleanup() causes the crash.
> > > Both functions can run concurrently (please see the call sequence below),
> > > and there is a possible race on dev->iotlb.
> > > If the context switch occurs right after vhost_dev_cleanup() frees
> > > dev->iotlb, vhost_process_iotlb_msg() still sees the stale non-NULL
> > > pointer and keeps executing instead of returning -EFAULT. Consequently,
> > > a use-after-free occurs.
> > >
> > >
> > > Thread interleaving:
> > > CPU0 (vhost_process_iotlb_msg)          CPU1 (vhost_dev_cleanup)
> > > (In the case of both VHOST_IOTLB_UPDATE and
> > > VHOST_IOTLB_INVALIDATE)
> > > =====                                   =====
> > >                                         vhost_umem_clean(dev->iotlb);
> > > if (!dev->iotlb) {
> > >         ret = -EFAULT;
> > >         break;
> > > }
> > >                                         dev->iotlb = NULL;
> > >
> > >
> > > Call Sequence:
> > > CPU0
> > > =====
> > > vhost_net_chr_write_iter
> > > vhost_chr_write_iter
> > > vhost_process_iotlb_msg
> > >
> > > CPU1
> > > =====
> > > vhost_net_ioctl
> > > vhost_net_reset_owner
> > > vhost_dev_reset_owner
> > > vhost_dev_cleanup
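
To make the window concrete, here is a minimal user-space sketch of the same
pattern (this is not the vhost code; the file name, struct and function names
below are made up for illustration only):

/*
 * A "cleanup" thread frees a shared object and only afterwards clears the
 * pointer, while a "worker" thread checks the pointer and then uses it.
 * The gap between free() and the NULL store is the use-after-free window;
 * whether it actually hits depends on scheduling.
 * Build with: gcc -pthread -o race_sketch race_sketch.c
 */
#include <pthread.h>
#include <stdlib.h>

struct fake_iotlb { int entries; };

static struct fake_iotlb *shared_iotlb;

/* plays the role of vhost_process_iotlb_msg() */
static void *worker(void *arg)
{
	if (!shared_iotlb)		/* check may pass before the NULL store... */
		return NULL;
	shared_iotlb->entries++;	/* ...then touch already-freed memory */
	return NULL;
}

/* plays the role of vhost_dev_cleanup() */
static void *cleanup(void *arg)
{
	free(shared_iotlb);		/* object freed first */
	shared_iotlb = NULL;		/* pointer cleared only afterwards */
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	shared_iotlb = calloc(1, sizeof(*shared_iotlb));
	pthread_create(&a, NULL, worker, NULL);
	pthread_create(&b, NULL, cleanup, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

Holding a common lock around both the check/use and the free/clear (as the
patch below does with dev->mutex) closes this window.
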
> >
> > Thanks a lot for the analysis.
> >
> > This could be addressed by simply protecting it with the dev mutex.
> >
> > Will post a patch.
> >
>
> Could you please help to test the attached patch? I've done some smoke
> testing.
>
> Thanks
> From 88328386f3f652e684ee33dc4cf63dcaed871aea Mon Sep 17 00:00:00 2001
> From: Jason Wang <jasowang@...hat.com>
> Date: Fri, 18 May 2018 17:33:27 +0800
> Subject: [PATCH] vhost: synchronize IOTLB message with dev cleanup
>
> DaeRyong Jeong reports a race between vhost_dev_cleanup() and
> vhost_process_iotlb_msg():
>
> Thread interleaving:
> CPU0 (vhost_process_iotlb_msg)          CPU1 (vhost_dev_cleanup)
> (In the case of both VHOST_IOTLB_UPDATE and
> VHOST_IOTLB_INVALIDATE)
> =====                                   =====
>                                         vhost_umem_clean(dev->iotlb);
> if (!dev->iotlb) {
>         ret = -EFAULT;
>         break;
> }
>                                         dev->iotlb = NULL;
>
> The reason is that we don't synchronize between them; fix this by protecting
> vhost_process_iotlb_msg() with the dev mutex.
>
> Reported-by: DaeRyong Jeong <threeearcat@...il.com>
> Fixes: 6b1e6cc7855b0 ("vhost: new device IOTLB API")
Long term we might want to move the iotlb into the vqs
so that messages can be processed in parallel.
Not sure how to do it yet.
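
A very rough sketch of that idea, just to make it concrete (this is not what
the driver does today, and the layout and names below are purely hypothetical
stand-ins):

/* Stand-in declarations; the real types live in drivers/vhost/vhost.h. */
struct mutex;
struct vhost_umem;

struct vhost_virtqueue_sketch {
	struct mutex *mutex;		/* per-vq lock, as in the real code */
	struct vhost_umem *iotlb;	/* hypothetical: per-vq instead of per-dev */
};

IOTLB messages addressed to different vqs could then take only the vq lock
instead of serializing on dev->mutex, but the lookup and invalidation paths
would have to be reworked to match.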
> ---
> drivers/vhost/vhost.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index f3bd8e9..f0be5f3 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -981,6 +981,7 @@ static int vhost_process_iotlb_msg(struct vhost_dev *dev,
>  {
>  	int ret = 0;
>  
> +	mutex_lock(&dev->mutex);
>  	vhost_dev_lock_vqs(dev);
>  	switch (msg->type) {
>  	case VHOST_IOTLB_UPDATE:
> @@ -1016,6 +1017,8 @@ static int vhost_process_iotlb_msg(struct vhost_dev *dev,
>  	}
>  
>  	vhost_dev_unlock_vqs(dev);
> +	mutex_unlock(&dev->mutex);
> +
>  	return ret;
>  }
>  ssize_t vhost_chr_write_iter(struct vhost_dev *dev,
> --
> 2.7.4
>