Message-ID: <20250107154647.4bcbae3c@kernel.org>
Date: Tue, 7 Jan 2025 15:46:47 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: Aaron Tomlin <atomlin@...mlin.com>
Cc: Florian Fainelli <florian.fainelli@...adcom.com>,
 ronak.doshi@...adcom.com, andrew+netdev@...n.ch, davem@...emloft.net,
 edumazet@...gle.com, pabeni@...hat.com,
 bcm-kernel-feedback-list@...adcom.com, netdev@...r.kernel.org,
 linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 0/1] vmxnet3: Adjust maximum Rx ring buffer size

On Tue, 7 Jan 2025 22:55:38 +0000 (GMT) Aaron Tomlin wrote:
> On Tue, 7 Jan 2025, Jakub Kicinski wrote:
> > True, although TBH I don't fully understand why this flag exists
> > in the first place. Is it just supposed to be catching programming
> > errors, or is it due to potential DoS implications of users triggering
> > large allocations?  
> 
> Jakub,
> 
> I suspect that introducing __GFP_NOWARN would mask the issue, no?
> I think the warning was useful. Otherwise it would be rather difficult to
> establish precisely why the Rx Data ring was disabled. In this particular
> case, if I understand correctly, the intended size of the Rx Data ring was
> simply too large due to the size of the maximum supported Rx Data buffer.
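
For the record, the masking Aaron describes would look roughly like this
(a sketch only; the names are hypothetical and this is not the driver's
actual allocation path):

	/* sketch only: hypothetical names, not the real vmxnet3 code */
	buf = kvmalloc(data_ring_size, GFP_KERNEL | __GFP_NOWARN);
	if (!buf) {
		/* no allocation-failure splat is printed here, so the only
		 * visible symptom is the data ring quietly being disabled */
		rq->data_ring_enabled = false;
		return 0;
	}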

This is a bit of a weird driver. But we should distinguish the default
ring size, which yes, should not be too large, from the max ring size,
which can be large; a user who sets a large size accepts the risk that
the allocations will fail and the device will not open.

This driver seems to read the default size from the hypervisor; is that
the value that is too large in your case? Maybe we should min() it with
something reasonable? The max allowed to be set via ethtool can remain
high, IMO.
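
Concretely, something along these lines is what I have in mind (a sketch
only; the field names and the cap value here are hypothetical, not the
driver's actual identifiers):

	/* clamp the hypervisor-provided default, leave the ethtool max alone */
	u32 def_cap = 2048;	/* hypothetical "reasonable" cap */

	adapter->rx_data_ring_size = min_t(u32, hypervisor_default, def_cap);

The advertised maximum stays as it is, so ethtool -G can still push the
ring back up; the user who does that owns the risk that the larger
allocation fails at open time.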
