Message-ID: <CAGxU2F7HK5KRggiY7xnKHeXFRXJmqcKbjf3JnXC3mbmn9xqRtw@mail.gmail.com>
Date:   Tue, 30 May 2023 18:00:47 +0200
From:   Stefano Garzarella <sgarzare@...hat.com>
To:     "Michael S. Tsirkin" <mst@...hat.com>,
        Mike Christie <michael.christie@...cle.com>
Cc:     syzbot <syzbot+d0d442c22fa8db45ff0e@...kaller.appspotmail.com>,
        jasowang@...hat.com, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
        syzkaller-bugs@...glegroups.com,
        virtualization@...ts.linux-foundation.org, stefanha@...hat.com
Subject: Re: [syzbot] [kvm?] [net?] [virt?] general protection fault in
 vhost_work_queue

On Tue, May 30, 2023 at 3:44 PM Stefano Garzarella <sgarzare@...hat.com> wrote:
>
> On Tue, May 30, 2023 at 1:24 PM Michael S. Tsirkin <mst@...hat.com> wrote:
> >
> > On Tue, May 30, 2023 at 12:30:06AM -0700, syzbot wrote:
> > > Hello,
> > >
> > > syzbot found the following issue on:
> > >
> > > HEAD commit:    933174ae28ba Merge tag 'spi-fix-v6.4-rc3' of git://git.ker..
> > > git tree:       upstream
> > > console output: https://syzkaller.appspot.com/x/log.txt?x=138d4ae5280000
> > > kernel config:  https://syzkaller.appspot.com/x/.config?x=f389ffdf4e9ba3f0
> > > dashboard link: https://syzkaller.appspot.com/bug?extid=d0d442c22fa8db45ff0e
> > > compiler:       gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
> > >
> > > Unfortunately, I don't have any reproducer for this issue yet.
> > >
> > > Downloadable assets:
> > > disk image: https://storage.googleapis.com/syzbot-assets/21a81b8c2660/disk-933174ae.raw.xz
> > > vmlinux: https://storage.googleapis.com/syzbot-assets/b4951d89e238/vmlinux-933174ae.xz
> > > kernel image: https://storage.googleapis.com/syzbot-assets/21eb405303cc/bzImage-933174ae.xz
> > >
> > > IMPORTANT: if you fix the issue, please add the following tag to the commit:
> > > Reported-by: syzbot+d0d442c22fa8db45ff0e@...kaller.appspotmail.com
> > >
> > > general protection fault, probably for non-canonical address 0xdffffc000000000e: 0000 [#1] PREEMPT SMP KASAN
> > > KASAN: null-ptr-deref in range [0x0000000000000070-0x0000000000000077]
> > > CPU: 0 PID: 29845 Comm: syz-executor.4 Not tainted 6.4.0-rc3-syzkaller-00032-g933174ae28ba #0
> > > Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/16/2023
> > > RIP: 0010:vhost_work_queue drivers/vhost/vhost.c:259 [inline]
> > > RIP: 0010:vhost_work_queue+0xfc/0x150 drivers/vhost/vhost.c:248
> > > Code: 00 00 fc ff df 48 89 da 48 c1 ea 03 80 3c 02 00 75 56 48 b8 00 00 00 00 00 fc ff df 48 8b 1b 48 8d 7b 70 48 89 fa 48 c1 ea 03 <80> 3c 02 00 75 42 48 8b 7b 70 e8 95 9e ae f9 5b 5d 41 5c 41 5d e9
> > > RSP: 0018:ffffc9000333faf8 EFLAGS: 00010202
> > > RAX: dffffc0000000000 RBX: 0000000000000000 RCX: ffffc9000d84d000
> > > RDX: 000000000000000e RSI: ffffffff841221d7 RDI: 0000000000000070
> > > RBP: ffff88804b6b95b0 R08: 0000000000000001 R09: 0000000000000000
> > > R10: 0000000000000001 R11: 0000000000000000 R12: ffff88804b6b00b0
> > > R13: 0000000000000000 R14: ffff88804b6b95e0 R15: ffff88804b6b95c8
> > > FS:  00007f3b445ec700(0000) GS:ffff8880b9800000(0000) knlGS:0000000000000000
> > > CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > > CR2: 0000001b2e423000 CR3: 000000005d734000 CR4: 00000000003506f0
> > > DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > > DR3: 000000000000003b DR6: 00000000ffff0ff0 DR7: 0000000000000400
> > > Call Trace:
> > >  <TASK>
> > >  vhost_transport_send_pkt+0x268/0x520 drivers/vhost/vsock.c:288
> > >  virtio_transport_send_pkt_info+0x54c/0x820 net/vmw_vsock/virtio_transport_common.c:250
> > >  virtio_transport_connect+0xb1/0xf0 net/vmw_vsock/virtio_transport_common.c:813
> > >  vsock_connect+0x37f/0xcd0 net/vmw_vsock/af_vsock.c:1414
> > >  __sys_connect_file+0x153/0x1a0 net/socket.c:2003
> > >  __sys_connect+0x165/0x1a0 net/socket.c:2020
> > >  __do_sys_connect net/socket.c:2030 [inline]
> > >  __se_sys_connect net/socket.c:2027 [inline]
> > >  __x64_sys_connect+0x73/0xb0 net/socket.c:2027
> > >  do_syscall_x64 arch/x86/entry/common.c:50 [inline]
> > >  do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
> > >  entry_SYSCALL_64_after_hwframe+0x63/0xcd
> > > RIP: 0033:0x7f3b4388c169
> > > Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 f1 19 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
> > > RSP: 002b:00007f3b445ec168 EFLAGS: 00000246 ORIG_RAX: 000000000000002a
> > > RAX: ffffffffffffffda RBX: 00007f3b439ac050 RCX: 00007f3b4388c169
> > > RDX: 0000000000000010 RSI: 0000000020000140 RDI: 0000000000000004
> > > RBP: 00007f3b438e7ca1 R08: 0000000000000000 R09: 0000000000000000
> > > R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
> > > R13: 00007f3b43acfb1f R14: 00007f3b445ec300 R15: 0000000000022000
> > >  </TASK>
> > > Modules linked in:
> > > ---[ end trace 0000000000000000 ]---
> > > RIP: 0010:vhost_work_queue drivers/vhost/vhost.c:259 [inline]
> > > RIP: 0010:vhost_work_queue+0xfc/0x150 drivers/vhost/vhost.c:248
> > > Code: 00 00 fc ff df 48 89 da 48 c1 ea 03 80 3c 02 00 75 56 48 b8 00 00 00 00 00 fc ff df 48 8b 1b 48 8d 7b 70 48 89 fa 48 c1 ea 03 <80> 3c 02 00 75 42 48 8b 7b 70 e8 95 9e ae f9 5b 5d 41 5c 41 5d e9
> > > RSP: 0018:ffffc9000333faf8 EFLAGS: 00010202
> > > RAX: dffffc0000000000 RBX: 0000000000000000 RCX: ffffc9000d84d000
> > > RDX: 000000000000000e RSI: ffffffff841221d7 RDI: 0000000000000070
> > > RBP: ffff88804b6b95b0 R08: 0000000000000001 R09: 0000000000000000
> > > R10: 0000000000000001 R11: 0000000000000000 R12: ffff88804b6b00b0
> > > R13: 0000000000000000 R14: ffff88804b6b95e0 R15: ffff88804b6b95c8
> > > FS:  00007f3b445ec700(0000) GS:ffff8880b9900000(0000) knlGS:0000000000000000
> > > CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > > CR2: 0000001b2e428000 CR3: 000000005d734000 CR4: 00000000003506e0
> > > DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > > DR3: 000000000000003b DR6: 00000000ffff0ff0 DR7: 0000000000000400
> > > ----------------
> > > Code disassembly (best guess), 5 bytes skipped:
> > >    0: 48 89 da                mov    %rbx,%rdx
> > >    3: 48 c1 ea 03             shr    $0x3,%rdx
> > >    7: 80 3c 02 00             cmpb   $0x0,(%rdx,%rax,1)
> > >    b: 75 56                   jne    0x63
> > >    d: 48 b8 00 00 00 00 00    movabs $0xdffffc0000000000,%rax
> > >   14: fc ff df
> > >   17: 48 8b 1b                mov    (%rbx),%rbx
> > >   1a: 48 8d 7b 70             lea    0x70(%rbx),%rdi
> > >   1e: 48 89 fa                mov    %rdi,%rdx
> > >   21: 48 c1 ea 03             shr    $0x3,%rdx
> > > * 25: 80 3c 02 00             cmpb   $0x0,(%rdx,%rax,1) <-- trapping instruction
> > >   29: 75 42                   jne    0x6d
> > >   2b: 48 8b 7b 70             mov    0x70(%rbx),%rdi
> > >   2f: e8 95 9e ae f9          callq  0xf9ae9ec9
> > >   34: 5b                      pop    %rbx
> > >   35: 5d                      pop    %rbp
> > >   36: 41 5c                   pop    %r12
> > >   38: 41 5d                   pop    %r13
> > >   3a: e9                      .byte 0xe9
> >
> >
> > Stefano, Stefan, take a look?
>
> I'll take a look.
>
> From a first glance, it looks like an issue when we call vhost_work_queue().
> @Mike, does that ring any bells since you recently looked at that code?

I think it is partially related to commit 6e890c5d5021 ("vhost: use
vhost_tasks for worker threads") and commit 1a5f8090c6de ("vhost: move
worker thread fields to new struct"). Maybe those commits just
highlighted an issue that already existed.

In this case I think there is a race between vhost_worker_create() and
vhost_transport_send_pkt(): vhost_transport_send_pkt() calls
vhost_work_queue() without holding the vhost device mutex, so it can run
while vhost_worker_create() has already set dev->worker but has not yet
set worker->vtsk.
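
Roughly, the window looks like this (a simplified sketch of the two paths,
not the exact upstream code):

  /* VHOST_SET_OWNER -> vhost_dev_set_owner() -> vhost_worker_create(), simplified */
  worker = kzalloc(sizeof(*worker), GFP_KERNEL_ACCOUNT);
  if (!worker)
          return -ENOMEM;

  dev->worker = worker;                     /* worker is published here ...  */
  vtsk = vhost_task_create(vhost_worker, worker, name);
  worker->vtsk = vtsk;                      /* ... but vtsk is only set here */

  /* vhost_transport_send_pkt() -> vhost_work_queue(), simplified,
   * running concurrently and without dev->mutex */
  if (!dev->worker)
          return;
  ...
  wake_up_process(dev->worker->vtsk->task); /* vtsk can still be NULL here */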

Before commit 1a5f8090c6de ("vhost: move worker thread fields to new
struct"), dev->worker was only set once everything was ready, but maybe
we were just relying on the stores not being reordered, so the problem
could still occur.
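
If we really wanted to rely on ordering alone, I think we would need an
explicit release/acquire pairing anyway, something like this (untested
sketch):

  /* vhost_worker_create(): publish the worker only when fully initialized */
  worker->vtsk = vtsk;
  smp_store_release(&dev->worker, worker);

  /* vhost_work_queue(): pair the publication with an acquire load */
  struct vhost_worker *worker = smp_load_acquire(&dev->worker);

  if (!worker)
          return;
  ...
  wake_up_process(worker->vtsk->task);

(This only covers how dev->worker is published, of course.)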

This happens because VHOST_VSOCK_SET_GUEST_CID can be called before
VHOST_SET_OWNER and then vhost_transport_send_pkt() finds the guest's
CID and tries to send it a packet.
But is it correct to handle VHOST_VSOCK_SET_GUEST_CID before
VHOST_SET_OWNER?

QEMU always calls VHOST_SET_OWNER before anything, but I don't know
about the other VMMs.
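
To make the problematic ordering concrete, a minimal (hypothetical,
untested) userspace sequence would look roughly like this:

  /* hypothetical, untested sketch of the ordering in question:
   * VHOST_VSOCK_SET_GUEST_CID before VHOST_SET_OWNER, with the connect()
   * racing against VHOST_SET_OWNER running in another thread */
  #include <fcntl.h>
  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <sys/socket.h>
  #include <linux/vhost.h>
  #include <linux/vm_sockets.h>

  int main(void)
  {
          int vhost_fd = open("/dev/vhost-vsock", O_RDWR);
          uint64_t cid = 4242;

          /* the guest CID is registered before VHOST_SET_OWNER ... */
          ioctl(vhost_fd, VHOST_VSOCK_SET_GUEST_CID, &cid);

          /* ... so a connect() to that CID can reach
           * vhost_transport_send_pkt() -> vhost_work_queue() while
           * vhost_worker_create() is still running. */
          struct sockaddr_vm addr = {
                  .svm_family = AF_VSOCK,
                  .svm_cid = (unsigned int)cid,
                  .svm_port = 1234,
          };
          int s = socket(AF_VSOCK, SOCK_STREAM, 0);
          connect(s, (struct sockaddr *)&addr, sizeof(addr));
          return 0;
  }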

So, could it be an acceptable solution to reject
VHOST_VSOCK_SET_GUEST_CID before VHOST_SET_OWNER?

I mean something like this:

diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index 6578db78f0ae..33fc0805d189 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -829,7 +829,12 @@ static long vhost_vsock_dev_ioctl(struct file *f, unsigned int ioctl,
        case VHOST_VSOCK_SET_GUEST_CID:
                if (copy_from_user(&guest_cid, argp, sizeof(guest_cid)))
                        return -EFAULT;
-               return vhost_vsock_set_cid(vsock, guest_cid);
+               mutex_lock(&vsock->dev.mutex);
+               r = vhost_dev_check_owner(&vsock->dev);
+               if (!r)
+                       r = vhost_vsock_set_cid(vsock, guest_cid);
+               mutex_unlock(&vsock->dev.mutex);
+               return r;
        case VHOST_VSOCK_SET_RUNNING:
                if (copy_from_user(&start, argp, sizeof(start)))
                        return -EFAULT;

In the documentation, we say:

  /* Set current process as the (exclusive) owner of this file descriptor.  This
   * must be called before any other vhost command.  Further calls to
   * VHOST_OWNER_SET fail until VHOST_OWNER_RESET is called. */

This should prevent the issue, but it could break a misbehaving userspace.

Other ideas that I have in mind are:
- hold vsock->dev.mutex while calling vhost_work_queue() (performance
  degradation?)
- use RCU to protect dev->worker (rough sketch below)
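
For the RCU idea, I mean roughly this (rough, untested sketch; dev->worker
would become an RCU-protected pointer):

  /* vhost_worker_create(): publish the worker only when fully initialized */
  worker->vtsk = vtsk;
  rcu_assign_pointer(dev->worker, worker);

  /* vhost_work_queue() */
  struct vhost_worker *worker;

  rcu_read_lock();
  worker = rcu_dereference(dev->worker);
  if (worker && !test_and_set_bit(VHOST_WORK_QUEUED, &work->flags)) {
          llist_add(&work->node, &worker->work_list);
          wake_up_process(worker->vtsk->task);
  }
  rcu_read_unlock();

  /* vhost_worker_free() */
  worker = rcu_dereference_protected(dev->worker,
                                     lockdep_is_held(&dev->mutex));
  rcu_assign_pointer(dev->worker, NULL);
  synchronize_rcu();      /* wait for in-flight vhost_work_queue() callers */
  ...
  kfree(worker);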

WDYT?

Thanks,
Stefano
