Message-ID: <2abb44f7a40b3e80a2f869205cf981b975cc3bd7.camel@intel.com>
Date:   Thu, 21 Nov 2019 23:09:06 -0800
From:   Jeff Kirsher <jeffrey.t.kirsher@...el.com>
To:     Jakub Kicinski <jakub.kicinski@...ronome.com>
Cc:     davem@...emloft.net, Henry Tieman <henry.w.tieman@...el.com>,
        netdev@...r.kernel.org, nhorman@...hat.com, sassmann@...hat.com,
        Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
        Tony Nguyen <anthony.l.nguyen@...el.com>,
        Andrew Bowers <andrewx.bowers@...el.com>
Subject: Re: [net-next 13/15] ice: Implement ethtool ops for channels

On Thu, 2019-11-21 at 14:42 -0800, Jakub Kicinski wrote:
> On Wed, 20 Nov 2019 23:46:10 -0800, Jeff Kirsher wrote:
> > +	curr_combined = ice_get_combined_cnt(vsi);
> > +
> > +	/* these checks are for cases where user didn't specify a particular
> > +	 * value on cmd line but we get non-zero value anyway via
> > +	 * get_channels(); look at ethtool.c in ethtool repository (the user
> > +	 * space part), particularly, do_schannels() routine
> > +	 */
> > +	if (ch->rx_count == vsi->num_rxq - curr_combined)
> > +		ch->rx_count = 0;
> > +	if (ch->tx_count == vsi->num_txq - curr_combined)
> > +		ch->tx_count = 0;
> > +	if (ch->combined_count == curr_combined)
> > +		ch->combined_count = 0;
> > +
> > +	if (!(ch->combined_count || (ch->rx_count && ch->tx_count))) {
> > +		netdev_err(dev, "Please specify at least 1 Rx and 1 Tx channel\n");
> > +		return -EINVAL;
> > +	}
> > +
> > +	new_rx = ch->combined_count + ch->rx_count;
> > +	new_tx = ch->combined_count + ch->tx_count;
> 
> The combined vs individual count logic looks correct to me, which is not
> common, so nice to see that!
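
For anyone reading along, a worked example with hypothetical numbers
(current state: 8 combined channels and no dedicated queues, so
num_rxq = num_txq = 8 and curr_combined = 8; the user then runs
"ethtool -L eth0 rx 4 tx 4"):

	driver receives:   rx_count = 4, tx_count = 4, combined_count = 8
	                   (combined_count is filled in by user space from
	                   get_channels() even though it was not on the
	                   command line)
	after the checks:  combined_count == curr_combined -> zeroed
	                   rx_count and tx_count kept (4 != num_rxq - curr_combined)
	result:            new_rx = 0 + 4 = 4, new_tx = 0 + 4 = 4
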
> 
> > +	if (new_rx > ice_get_max_rxq(pf)) {
> > +		netdev_err(dev, "Maximum allowed Rx channels is %d\n",
> > +			   ice_get_max_rxq(pf));
> > +		return -EINVAL;
> > +	}
> > +	if (new_tx > ice_get_max_txq(pf)) {
> > +		netdev_err(dev, "Maximum allowed Tx channels is %d\n",
> > +			   ice_get_max_txq(pf));
> > +		return -EINVAL;
> > +	}
> > +
> > +	ice_vsi_recfg_qs(vsi, new_rx, new_tx);
> > +
> > +	if (new_rx)
> > +		return ice_vsi_set_dflt_rss_lut(vsi, new_rx);
> 
> But I don't see you doing a netif_is_rxfh_configured() check, which is
> supposed to prevent reconfiguring the RSS indirection table if it was
> set up manually by the user.
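
For the record, the kind of check being asked for here would be roughly
the following (rough, untested sketch against the hunk above, not the
actual fix we would submit):

	/* Sketch only: leave the RSS LUT alone when the user has already
	 * configured the indirection table (e.g. via ethtool -X).
	 */
	if (new_rx && !netif_is_rxfh_configured(dev))
		return ice_vsi_set_dflt_rss_lut(vsi, new_rx);

	return 0;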

Talking with Henry, it appears Jake's kernel changes were done in parallel
with or after the work that was done in the driver.  The fix looks to be more
involved than just adding a simple check; is this a *required* change for
acceptance?  We are currently looking into making the fix, but due to the
upcoming holidays I am not sure we can make the desired change before the
merge window closes.  So I see the following options:

- accept patch as is, and expect a fix in the next patch submission
- drop this patch since the fix will not be available before the merge
window closes for net-next

While I prefer the first option, because we are also submitting the "new"
virtio_bus patches, which depend heavily on these changes being upstream, I
think we can drop this patch (if necessary) and still submit the other
patches with little thrash.

Which option is preferred or acceptable to you, Dave and Jakub?

> This is doubly sad, because I believe that part of the RSS infrastructure
> was added by Jake Keller from Intel, so it should have been caught by your
> internal review, not to mention tests.. :(
> 
> > +	return 0;
> > +}

