Message-ID: <52365045-c771-412a-9232-70e80e26c34f@redhat.com>
Date: Tue, 4 Feb 2025 09:50:23 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: John Ousterhout <ouster@...stanford.edu>
Cc: Netdev <netdev@...r.kernel.org>, Eric Dumazet <edumazet@...gle.com>,
 Simon Horman <horms@...nel.org>, Jakub Kicinski <kuba@...nel.org>
Subject: Re: [PATCH net-next v6 08/12] net: homa: create homa_incoming.c

On 2/4/25 12:33 AM, John Ousterhout wrote:
> On Mon, Feb 3, 2025 at 1:12 AM Paolo Abeni <pabeni@...hat.com> wrote:
>> I don't see where/how the SO_HOMA_RCVBUF max value is somehow bounded?!?
>> It looks like the user-space could pick an arbitrary large value for it.
> 
> That's right; is there anything to be gained by limiting it? This is
> simply mmapped memory in the user address space. Aren't applications
> allowed to allocate as much memory as they like? If so, why shouldn't
> they be able to use that memory for incoming buffers if they choose?

If unprivileged applications could use an unlimited amount of kernel
memory, they could hurt the stability of the whole system, possibly
causing functional issues in core kernel code due to ENOMEM.

Thus we always try to bound/put limits on the amount of kernel memory
a user-space application can use.
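For illustration, the usual way to enforce such a bound is to clamp the user-requested size against an admin-tunable maximum at setsockopt time, the way the core socket code caps SO_RCVBUF at net.core.rmem_max. The sketch below is a simplified userspace model, not actual Homa or kernel code; the sysctl name and helper are hypothetical:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical admin-tunable cap, analogous to net.core.rmem_max.
 * In the kernel this would be a sysctl, not a compile-time value. */
static size_t sysctl_homa_rcvbuf_max = (size_t)1 << 20; /* 1 MiB */

/* Clamp a user-requested receive-buffer size to the admin limit,
 * mirroring how sk_setsockopt() caps SO_RCVBUF at sysctl_rmem_max. */
static size_t clamp_rcvbuf(size_t requested)
{
	if (requested > sysctl_homa_rcvbuf_max)
		return sysctl_homa_rcvbuf_max;
	return requested;
}
```

With this shape, an unprivileged caller can still request any value, but the kernel silently (or with an error, depending on policy) bounds what it actually commits.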

>> SO_RCVBUF and SO_SNDBUF are expected to apply to any kind of socket,
>> see man 7 sockets. Exceptions should be at least documented, but we need
>> some way to limit memory usage in both directions.
> 
> The expectations around these limits are based on an unstated (and
> probably unconscious) assumption of a TCP-like streaming protocol.

Actually TCP uses its own, separate limits; see net.ipv4.tcp_rmem and
net.ipv4.tcp_wmem:

https://elixir.bootlin.com/linux/v6.13.1/source/Documentation/networking/ip-sysctl.rst#L719

> RPCs are different. For example, there is no one value of rmem_default
> or rmem_max that will work for both TCP and Homa. On my system, these
> values are both around 200 KB, which seems fine for TCP, but that's
> not even enough for a single full-size RPC in Homa, and Homa apps need
> to have several active RPCs at a time. Thus it doesn't make sense to
> use SO_RCVBUF and SO_SNDBUF for both Homa and TCP; their needs are too
> different.

Specific, per-protocol limits are allowed, but they should be in place
and documented.

>> Fine tuning controls and sysctls could land later, but the basic
>> constraints should IMHO be there from the beginning.
> 
> OK. I think that SO_HOMA_RCVBUF takes care of RX buffer space. 

We need some way to allow the admin to bound the SO_HOMA_RCVBUF max value.

> For TX, what's the simplest scheme that you would be comfortable with? For
> example, if I cap the number of outstanding RPCs per socket, will that
> be enough for now?

Usually the bounds are expressed in bytes. How complex would it be to
add wmem accounting?
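As a rough illustration of what byte-based wmem accounting means, the sketch below models a per-socket counter charged when a packet is queued and uncharged when it is freed, along the lines of the kernel's sk_wmem_alloc/sk_sndbuf pair. This is a simplified userspace model with hypothetical names, not actual Homa or kernel code (the real kernel counters are atomic and the caller may sleep rather than fail):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified per-socket transmit-memory accounting. */
struct sock_acct {
	size_t wmem_alloc;   /* bytes currently queued for transmit */
	size_t sndbuf_limit; /* admin/user bound, e.g. SO_SNDBUF */
};

/* Charge len bytes when queuing a packet; refuse if that would
 * exceed the limit (caller would block or return -ENOBUFS). */
static bool wmem_charge(struct sock_acct *sk, size_t len)
{
	if (sk->wmem_alloc + len > sk->sndbuf_limit)
		return false;
	sk->wmem_alloc += len;
	return true;
}

/* Uncharge when the packet is freed after transmission. */
static void wmem_uncharge(struct sock_acct *sk, size_t len)
{
	assert(len <= sk->wmem_alloc);
	sk->wmem_alloc -= len;
}
```

Counting bytes rather than RPCs bounds actual memory use regardless of message sizes, which is why the per-RPC cap John proposes is a weaker guarantee.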

/P

