lists.openwall.net — Open Source and information security mailing list archives
Message-Id: <20171126.042221.596692527873080042.davem@davemloft.net>
Date:   Sun, 26 Nov 2017 04:22:21 +0900 (KST)
From:   David Miller <davem@...emloft.net>
To:     jhansen@...are.com
Cc:     netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
        virtualization@...ts.linux-foundation.org,
        gregkh@...uxfoundation.org, pv-drivers@...are.com
Subject: Re: [PATCH v2] VSOCK: Don't call vsock_stream_has_data in atomic
 context

From: Jorgen Hansen <jhansen@...are.com>
Date: Fri, 24 Nov 2017 06:25:28 -0800

> When using the host personality, VMCI will grab a mutex for any
> queue pair access. In the detach callback for the vmci vsock
> transport, we call vsock_stream_has_data while holding a spinlock,
> and vsock_stream_has_data will access a queue pair.
> 
> To avoid this, we can simply omit calling vsock_stream_has_data
> for host-side queue pairs, since the QPs are empty by default
> when the guest has detached.
> 
> This bug affects users of VMware Workstation using kernel version
> 4.4 and later.
> 
> Testing: Ran vsock tests between guest and host, and verified that
> with this change, the host isn't calling vsock_stream_has_data
> during detach. Ran mixedTest between guest and host using both
> guest and host as server.
> 
> v2: Rebased on top of recent change to sk_state values
> Reviewed-by: Adit Ranadive <aditr@...are.com>
> Reviewed-by: Aditya Sarwade <asarwade@...are.com>
> Reviewed-by: Stefan Hajnoczi <stefanha@...hat.com>
> Signed-off-by: Jorgen Hansen <jhansen@...are.com>

Applied, thank you.
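For readers unfamiliar with the locking issue being fixed: a spinlock puts the CPU in atomic context, where sleeping is forbidden, while a mutex may sleep when contended. Since host-side VMCI queue-pair access takes a mutex, calling vsock_stream_has_data() under the detach spinlock could sleep in atomic context. A rough sketch of the shape of the fix follows; the helper name and structure here are illustrative only, not the actual patch:

```c
/* Illustrative sketch only -- not the applied patch.
 * Caller holds the socket spinlock, i.e. we are in atomic context. */
static void vmci_transport_handle_detach(struct sock *sk)
{
	struct vsock_sock *vsk = vsock_sk(sk);

	/* On the host side, vsock_stream_has_data() would reach the
	 * queue pair through VMCI, which grabs a mutex and may sleep.
	 * Host-side QPs are empty by default once the guest detaches,
	 * so the check can be skipped there entirely.  (is_host_qp()
	 * is a hypothetical predicate for illustration.) */
	if (!is_host_qp(vsk) && vsock_stream_has_data(vsk) <= 0) {
		/* ... treat as no pending data and tear down ... */
	}
	/* ... remainder of detach handling ... */
}
```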
