Message-ID: <ZCsaY5ZmXlc9/5lT@calimero.vinschen.de>
Date:   Mon, 3 Apr 2023 20:26:43 +0200
From:   Corinna Vinschen <vinschen@...hat.com>
To:     Jakub Kicinski <kuba@...nel.org>
Cc:     Giuseppe Cavallaro <peppe.cavallaro@...com>,
        Alexandre Torgue <alexandre.torgue@...s.st.com>,
        Jose Abreu <joabreu@...opsys.com>, netdev@...r.kernel.org
Subject: Re: [PATCH net-next] net: stmmac: publish actual MTU restriction

On Apr  3 20:17, Corinna Vinschen wrote:
> On Apr  3 09:30, Jakub Kicinski wrote:
> > On Mon, 3 Apr 2023 11:12:12 +0200 Corinna Vinschen wrote:
> > > > Are any users depending on the advertised values being exactly right?  
> > > 
> > > The max MTU is advertised per interface:
> > > 
> > > # ip -d link show dev enp0s29f1
> > > 2: enp0s29f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
> > >     link/ether [...] promiscuity 0 minmtu 46 maxmtu 9000 [...]
> > > 
> > > So the idea is surely that the user can check it and then set the MTU
> > > accordingly.  If the interface claims a max MTU of 9000, the expectation
> > > is that setting the MTU to this value just works, right?
> > > 
> > > So isn't it better if the interface only claims what it actually supports,
> > > i.e.,
> > > 
> > >   # ip -d link show dev enp0s29f1
> > >   2: enp0s29f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
> > >       link/ether [...] promiscuity 0 minmtu 46 maxmtu 4096 [...]
> > > 
> > > ?
> > 
> > No doubt that it's better to be more precise.
> > 
> > The question is what about drivers which can't support full MTU with
> > certain features enabled. So far nobody has been updating the max MTU
> > dynamically, to my knowledge, so the max MTU value is the static max
> > under best conditions.
> 
> Yeah, I agree, but what this code does *is* to set the max MTU to the
> best possible value.
> 
> In all(*) drivers using the stmmac core, the max MTU is restricted by
> the size of a single TX queue per the code in stmmac_change_mtu().
> 
> You can change the number of queues within the limits of the HW
> dynamically, but the size of the queues is not configurable.

Wait.  The one thing which never changes after probe is the value
of tx_fifo_size.  It's the size of the buffer for all queues.  And if I
lower tx_queues_to_use, then the value of the queue size computed as

  tx_fifo_size / tx_queues_to_use

actually depends on the number of queues.
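
Spelled out as a compilable toy (the 16 KiB FIFO size is made up, and
the real stmmac_change_mtu() works on the driver's own structures, so
this only illustrates the arithmetic, not the actual code):

#include <stdio.h>

/* The shared TX FIFO is split evenly across the queues in use, and a
 * frame has to fit into a single queue's slice.
 */
static unsigned int per_queue_budget(unsigned int tx_fifo_size,
                                     unsigned int tx_queues_to_use)
{
        return tx_fifo_size / tx_queues_to_use;
}

int main(void)
{
        unsigned int tx_fifo_size = 16384;  /* hypothetical 16 KiB FIFO */
        unsigned int nqueues;

        for (nqueues = 1; nqueues <= 4; nqueues *= 2)
                printf("%u queue(s): %u bytes per queue\n",
                       nqueues, per_queue_budget(tx_fifo_size, nqueues));
        return 0;
}

With one queue the whole FIFO is available; with four, only a quarter
of it is.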

So I totally misinterpreted the code at this point.

Ouch.

I withdraw the patch.  Sorry for wasting your time.


Corinna
