Message-ID: <CACGkMEvd7ETC_ANyrOSAVz_i64xqpYYazmm=+39E51=DMRFXdw@mail.gmail.com>
Date: Tue, 22 Feb 2022 15:11:07 +0800
From: Jason Wang <jasowang@...hat.com>
To: Anirudh Rayabharam <mail@...rudhrb.com>
Cc: "Michael S. Tsirkin" <mst@...hat.com>,
syzbot+0abd373e2e50d704db87@...kaller.appspotmail.com,
kvm <kvm@...r.kernel.org>,
virtualization <virtualization@...ts.linux-foundation.org>,
netdev <netdev@...r.kernel.org>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] vhost: validate range size before adding to iotlb
On Tue, Feb 22, 2022 at 12:57 PM Anirudh Rayabharam <mail@...rudhrb.com> wrote:
>
> On Tue, Feb 22, 2022 at 10:50:20AM +0800, Jason Wang wrote:
> > On Tue, Feb 22, 2022 at 3:53 AM Anirudh Rayabharam <mail@...rudhrb.com> wrote:
> > >
> > > In vhost_iotlb_add_range_ctx(), validate that the range size is
> > > non-zero before adding the range to the iotlb.
> > >
> > > Range size can overflow to 0 when start is 0 and last is (2^64 - 1).
> > > One instance where it can happen is when userspace sends an IOTLB
> > > message with iova=size=uaddr=0 (vhost_process_iotlb_msg). So, an
> > > entry with size = 0, start = 0, last = (2^64 - 1) ends up in the
> > > iotlb. Next time a packet is sent, iotlb_access_ok() loops
> > > indefinitely due to that erroneous entry:
> > >
> > > Call Trace:
> > > <TASK>
> > > iotlb_access_ok+0x21b/0x3e0 drivers/vhost/vhost.c:1340
> > > vq_meta_prefetch+0xbc/0x280 drivers/vhost/vhost.c:1366
> > > vhost_transport_do_send_pkt+0xe0/0xfd0 drivers/vhost/vsock.c:104
> > > vhost_worker+0x23d/0x3d0 drivers/vhost/vhost.c:372
> > > kthread+0x2e9/0x3a0 kernel/kthread.c:377
> > > ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
> > > </TASK>
> > >
> > > Reported by syzbot at:
> > > https://syzkaller.appspot.com/bug?extid=0abd373e2e50d704db87
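
For anyone following along, the wrap-around described above is plain
unsigned 64-bit arithmetic; a tiny stand-alone demo (not part of the
patch):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        /* iova=size=0 from userspace becomes start=0, last=2^64-1 ... */
        uint64_t start = 0;
        uint64_t last = UINT64_MAX;

        /* ... and "last - start + 1" wraps around to 0 */
        uint64_t size = last - start + 1;

        printf("size = %llu\n", (unsigned long long)size); /* prints 0 */
        return 0;
}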
> > >
> > > Reported-by: syzbot+0abd373e2e50d704db87@...kaller.appspotmail.com
> > > Tested-by: syzbot+0abd373e2e50d704db87@...kaller.appspotmail.com
> > > Signed-off-by: Anirudh Rayabharam <mail@...rudhrb.com>
> > > ---
> > > drivers/vhost/iotlb.c | 6 ++++--
> > > 1 file changed, 4 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/drivers/vhost/iotlb.c b/drivers/vhost/iotlb.c
> > > index 670d56c879e5..b9de74bd2f9c 100644
> > > --- a/drivers/vhost/iotlb.c
> > > +++ b/drivers/vhost/iotlb.c
> > > @@ -53,8 +53,10 @@ int vhost_iotlb_add_range_ctx(struct vhost_iotlb *iotlb,
> > >                                void *opaque)
> > >  {
> > >          struct vhost_iotlb_map *map;
> > > +        u64 size = last - start + 1;
> > >
> > > -        if (last < start)
> > > +        // size can overflow to 0 when start is 0 and last is (2^64 - 1).
> > > +        if (last < start || size == 0)
> > >                  return -EFAULT;
> >
> > I'd move this check to vhost_chr_write_iter(); then devices that have
> > their own msg handler (e.g. vDPA) can benefit from it as well.
>
> Thanks for reviewing!
>
> I kept the check here thinking that all devices would benefit from it
> because they would need to call vhost_iotlb_add_range() to add an entry
> to the iotlb. Isn't that correct?

Correct for now, but not guaranteed in the future: a per-device iotlb
message handler doesn't have to use the vhost iotlb.

But I agree that we probably don't need to care about that too much now.
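
To make the vhost_chr_write_iter() idea concrete, here is a rough,
untested sketch of a check in the generic path, run before the message
reaches either vhost_process_iotlb_msg() or a device-specific
->msg_handler() (the helper name is made up):

/* Sketch only: reject obviously bad update ranges once, in the generic
 * message path, so the vhost iotlb and per-device handlers are both
 * covered. VHOST_IOTLB_UPDATE is the message type that creates entries.
 */
static bool vhost_iotlb_msg_valid(const struct vhost_iotlb_msg *msg)
{
        if (msg->type != VHOST_IOTLB_UPDATE)
                return true;

        /* a zero size makes "last = iova + size - 1" wrap to 2^64 - 1 */
        if (msg->size == 0)
                return false;

        /* iova + size - 1 must not wrap past the end of the address space */
        return msg->iova <= U64_MAX - (msg->size - 1);
}

Where exactly it is called from vhost_chr_write_iter() is a detail; the
point is that every handler only ever sees sane ranges.
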
> Do you see any other benefit in moving
> it to vhost_chr_write_iter()?
>
> One concern I have is that if we move it out, some future caller of
> vhost_iotlb_add_range() might forget to handle this case.

Yes.

Rethinking the whole fix, we're basically rejecting the [0, ULONG_MAX]
range, which seems a little bit odd. I wonder if it's better to just
remove map->size. Having a quick glance at its users, I don't see any
blocker for this.
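
If we go that way, the user I'd look at first is iotlb_access_ok(),
which currently steps through the range using map->size. A rough,
untested sketch of the same walk using only inclusive end points
(permission and type checks elided), so that even the [0, U64_MAX]
range needs no special casing:

        /* addr/last are the requested range (inclusive), as today */
        while (true) {
                map = vhost_iotlb_itree_first(umem, addr, last);
                if (!map || map->start > addr)
                        return false;   /* hole in the mapping */

                if (map->last >= last)
                        return true;    /* request fully covered */

                addr = map->last + 1;   /* cannot wrap: map->last < last */
        }

Callers that really need a byte count can still compute
"last - start + 1" locally, keeping in mind that it is 0 for the full
64-bit range.
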
Thanks
>
> Thanks!
>
> - Anirudh.
>
> >
> > Thanks
> >
> > >
> > >          if (iotlb->limit &&
> > > @@ -69,7 +71,7 @@ int vhost_iotlb_add_range_ctx(struct vhost_iotlb *iotlb,
> > >                  return -ENOMEM;
> > >
> > >          map->start = start;
> > > -        map->size = last - start + 1;
> > > +        map->size = size;
> > >          map->last = last;
> > >          map->addr = addr;
> > >          map->perm = perm;
> > > --
> > > 2.35.1
> > >
> >
>