Message-ID: <20230403093011.27545760@kernel.org>
Date:   Mon, 3 Apr 2023 09:30:11 -0700
From:   Jakub Kicinski <kuba@...nel.org>
To:     Corinna Vinschen <vinschen@...hat.com>
Cc:     Giuseppe Cavallaro <peppe.cavallaro@...com>,
        Alexandre Torgue <alexandre.torgue@...s.st.com>,
        Jose Abreu <joabreu@...opsys.com>, netdev@...r.kernel.org
Subject: Re: [PATCH net-next] net: stmmac: publish actual MTU restriction

On Mon, 3 Apr 2023 11:12:12 +0200 Corinna Vinschen wrote:
> > Are any users depending on the advertised values being exactly right?  
> 
> The max MTU is advertised per interface:
> 
> # ip -d link show dev enp0s29f1
> 2: enp0s29f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
>     link/ether [...] promiscuity 0 minmtu 46 maxmtu 9000 [...]
> 
> So the idea is surely that the user can check it and then set the MTU
> accordingly.  If the interface claims a max MTU of 9000, the expectation
> is that setting the MTU to this value just works, right?
> 
> So isn't it better if the interface only claims what it actually supports,
> i.e.,
> 
>   # ip -d link show dev enp0s29f1
>   2: enp0s29f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
>       link/ether [...] promiscuity 0 minmtu 46 maxmtu 4096 [...]
> 
> ?

No doubt that it's better to be more precise.

The question is what to do about drivers which can't support the full MTU
with certain features enabled. So far nobody has been updating the max MTU
dynamically, to my knowledge, so the max MTU value is the static maximum
under best conditions.

> > > +	/* stmmac_change_mtu restricts MTU to queue size.
> > > +	 * Set maxmtu accordingly, if it hasn't been set from DT.
> > > +	 */
> > > +	if (priv->plat->maxmtu == 0) {
> > > +		priv->plat->maxmtu = priv->plat->tx_fifo_size ?:
> > > +				     priv->dma_cap.tx_fifo_size;
> > > +		priv->plat->maxmtu /= priv->plat->tx_queues_to_use;  
> > 
> > tx_queues_to_use may change due to reconfiguration, no?
> > What will happen then?  
> 
> Nothing.  tx_fifo_size is tx_queues_to_use multiplied by the size of one
> queue.  All the above code does is compute the per-queue size, which is a
> fixed value limiting the size of the MTU.  It's the same check the
> stmmac_change_mtu() function performs to allow or deny the MTU change,
> basically:
> 
>   txfifosz = priv->plat->tx_fifo_size;
>   if (txfifosz == 0)
>     txfifosz = priv->dma_cap.tx_fifo_size;
>   txfifosz /= priv->plat->tx_queues_to_use;
>   if (txfifosz < new_mtu)
>     return -EINVAL;

I haven't looked at the code in detail but if we start with
tx_queues_to_use = 4 and lower it via ethtool -L, won't that
make the core prevent setting a higher MTU even though the driver
would have supported it previously?
