Date:   Fri, 19 Aug 2022 12:55:37 +0200
From:   Greg KH <gregkh@...uxfoundation.org>
To:     Abhishek Shah <abhishek.shah@...umbia.edu>
Cc:     arnd@...db.de, bryantan@...are.com, linux-kernel@...r.kernel.org,
        rjalisatgi@...are.com, vdasa@...are.com,
        Gabriel Ryan <gabe@...columbia.edu>, pv-drivers@...are.com
Subject: Re: data-race in vmci_ctx_dequeue_datagram / vmci_ctx_rcv_notifications_release

On Fri, Aug 19, 2022 at 06:33:00AM -0400, Abhishek Shah wrote:
> Hi all,
> 
> We found the following race involving the *context->notify* variable. We
> were unable to find any security implications of the race, but we would
> still like to report it. Please let us know what you think.
> 
> Thanks!
> 
> 
> *-----------------Report--------------*
> 
> *write* to 0xffffffff8832e400 of 1 bytes by task 6542 on cpu 0:
>  ctx_clear_notify drivers/misc/vmw_vmci/vmci_context.c:51 [inline]
>  ctx_clear_notify_call drivers/misc/vmw_vmci/vmci_context.c:62 [inline]
>  vmci_ctx_rcv_notifications_release+0x26a/0x280 drivers/misc/vmw_vmci/vmci_context.c:926
>  vmci_host_do_recv_notifications drivers/misc/vmw_vmci/vmci_host.c:900 [inline]
>  vmci_host_unlocked_ioctl+0x17cf/0x1800 drivers/misc/vmw_vmci/vmci_host.c:949
>  vfs_ioctl fs/ioctl.c:51 [inline]
>  __do_sys_ioctl fs/ioctl.c:870 [inline]
>  __se_sys_ioctl+0xe1/0x150 fs/ioctl.c:856
>  __x64_sys_ioctl+0x43/0x50 fs/ioctl.c:856
>  do_syscall_x64 arch/x86/entry/common.c:50 [inline]
>  do_syscall_64+0x3d/0x90 arch/x86/entry/common.c:80
>  entry_SYSCALL_64_after_hwframe+0x44/0xae
> 
> *write* to 0xffffffff8832e400 of 1 bytes by task 6541 on cpu 1:
>  ctx_clear_notify drivers/misc/vmw_vmci/vmci_context.c:51 [inline]
>  ctx_clear_notify_call drivers/misc/vmw_vmci/vmci_context.c:62 [inline]
>  vmci_ctx_dequeue_datagram+0x1fc/0x2c0 drivers/misc/vmw_vmci/vmci_context.c:519
>  vmci_host_do_receive_datagram drivers/misc/vmw_vmci/vmci_host.c:426 [inline]
>  vmci_host_unlocked_ioctl+0x91a/0x1800 drivers/misc/vmw_vmci/vmci_host.c:925
>  vfs_ioctl fs/ioctl.c:51 [inline]
>  __do_sys_ioctl fs/ioctl.c:870 [inline]
>  __se_sys_ioctl+0xe1/0x150 fs/ioctl.c:856
>  __x64_sys_ioctl+0x43/0x50 fs/ioctl.c:856
>  do_syscall_x64 arch/x86/entry/common.c:50 [inline]
>  do_syscall_64+0x3d/0x90 arch/x86/entry/common.c:80
>  entry_SYSCALL_64_after_hwframe+0x44/0xae
> 
> Reported by Kernel Concurrency Sanitizer on:
> CPU: 1 PID: 6541 Comm: syz-executor2-n Not tainted 5.18.0-rc5+ #107
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
> 
> Input CPU 0:
> r0 = openat$vmci(0xffffff9c, &(0x7f0000001440)='/dev/vmci\x00', 0x2, 0x0)
> ioctl$IOCTL_VMCI_VERSION2(r0, 0x7a7, &(0x7f0000000000)=0xb0000)
> ioctl$IOCTL_VMCI_INIT_CONTEXT(r0, 0x7a0, &(0x7f0000000040)={@...0x1})
> ioctl$IOCTL_VMCI_NOTIFICATIONS_RECEIVE(r0, 0x7a6, &(0x7f0000000080)={0x0, 0x0, 0x101, 0x5})
> 
> Input CPU 1:
> r0 = openat$vmci(0xffffff9c, &(0x7f0000001440)='/dev/vmci\x00', 0x2, 0x0)
> ioctl$IOCTL_VMCI_VERSION2(r0, 0x7a7, &(0x7f0000000000)=0xb0000)
> ioctl$IOCTL_VMCI_INIT_CONTEXT(r0, 0x7a0, &(0x7f0000000040)={@...0x1})
> ioctl$IOCTL_VMCI_DATAGRAM_RECEIVE(r0, 0x7ac, &(0x7f00000004c0)={0x0})
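
A rough C translation of the two syz programs above, for reference. The raw
ioctl command numbers (0x7a7, 0x7a0, 0x7a6, 0x7ac) are copied from the report;
the argument buffer layouts are assumptions, and the IOCTL_VMCI_INIT_CONTEXT
payload is truncated above, so a zeroed placeholder stands in for it.

#include <fcntl.h>
#include <pthread.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Open /dev/vmci and run the setup ioctls common to both programs. */
static int vmci_setup(void)
{
	int fd = open("/dev/vmci", O_RDWR);
	uint32_t version = 0xb0000;        /* value taken from the report */
	unsigned char init_blk[8] = { 0 }; /* size/contents assumed; truncated in the report */

	ioctl(fd, 0x7a7, &version);        /* IOCTL_VMCI_VERSION2 */
	ioctl(fd, 0x7a0, init_blk);        /* IOCTL_VMCI_INIT_CONTEXT */
	return fd;
}

/* "Input CPU 0": the vmci_host_do_recv_notifications() path. */
static void *recv_notifications(void *arg)
{
	int fd = vmci_setup();
	uint64_t recv_info[4] = { 0x0, 0x0, 0x101, 0x5 }; /* assumed 4-field layout */

	(void)arg;
	ioctl(fd, 0x7a6, recv_info);       /* IOCTL_VMCI_NOTIFICATIONS_RECEIVE */
	close(fd);
	return NULL;
}

/* "Input CPU 1": the vmci_host_do_receive_datagram() path. */
static void *recv_datagram(void *arg)
{
	int fd = vmci_setup();
	uint64_t dg_info[2] = { 0 };       /* assumed layout */

	(void)arg;
	ioctl(fd, 0x7ac, dg_info);         /* IOCTL_VMCI_DATAGRAM_RECEIVE */
	close(fd);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	/* Run the two ioctl paths concurrently, as in the report. */
	pthread_create(&t1, NULL, recv_notifications, NULL);
	pthread_create(&t2, NULL, recv_datagram, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}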


If multiple userspace programs open this, then yes, there will be
oddities, but that shouldn't be an issue, right?

Do you have a proposed patch for this to show what you think should be
done?
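
For concreteness, one shape such a patch could take, purely as an
illustration: if the only issue is the plain (unmarked) store that clears the
notify flag, that store could be annotated so concurrent clears are documented
as intentional. The body below is paraphrased from the ctx_clear_notify()
frame in the report, not copied from the driver source.

/*
 * Illustrative sketch only, not a proposed patch: mark the racy store
 * with WRITE_ONCE() (and any other readers/writers of the flag with
 * READ_ONCE()/WRITE_ONCE()) so KCSAN treats the concurrent clears as
 * intentional.
 */
static void ctx_clear_notify(struct vmci_ctx *context)
{
	if (context->notify)
		WRITE_ONCE(*context->notify, false);
}

Since both stack traces above hit ctx_clear_notify(), both racing writers are
only clearing the flag, so annotating (or simply documenting) the race as
benign may be all that is warranted.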

thanks,

greg k-h
