Message-ID: <20210110183133.GM1551@shell.armlinux.org.uk>
Date:   Sun, 10 Jan 2021 18:31:33 +0000
From:   Russell King - ARM Linux admin <linux@...linux.org.uk>
To:     Stefan Chulski <stefanc@...vell.com>
Cc:     "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "thomas.petazzoni@...tlin.com" <thomas.petazzoni@...tlin.com>,
        "davem@...emloft.net" <davem@...emloft.net>,
        Nadav Haklai <nadavh@...vell.com>,
        Yan Markman <ymarkman@...vell.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "kuba@...nel.org" <kuba@...nel.org>,
        "mw@...ihalf.com" <mw@...ihalf.com>,
        "andrew@...n.ch" <andrew@...n.ch>,
        "atenart@...nel.org" <atenart@...nel.org>
Subject: Re: [EXT] Re: [PATCH RFC net-next  11/19] net: mvpp2: add flow
 control RXQ and BM pool config callbacks

On Sun, Jan 10, 2021 at 06:24:30PM +0000, Stefan Chulski wrote:
> > >
> > > +/* Routine to calculate a single queue's shared address space */
> > > +static int mvpp22_calc_shared_addr_space(struct mvpp2_port *port)
> > > +{
> > > +	/* If the number of CPUs is greater than the number of threads,
> > > +	 * return the last address space
> > > +	 */
> > > +	if (num_active_cpus() >= MVPP2_MAX_THREADS)
> > > +		return MVPP2_MAX_THREADS - 1;
> > > +
> > > +	return num_active_cpus();
> > 
> > Firstly - this can be written as:
> > 
> > 	return min(num_active_cpus(), MVPP2_MAX_THREADS - 1);
> 
> OK.
> 
> > Secondly - what if the number of active CPUs change, for example due to
> > hotplug activity. What if we boot with maxcpus=1 and then bring the other
> > CPUs online after networking has been started? The number of active CPUs is
> > dynamically managed via the scheduler as CPUs are brought online or offline.
> > 
> > > +/* Routine to enable the flow control condition for RXQs */
> > > +void mvpp2_rxq_enable_fc(struct mvpp2_port *port)
> > ...
> > > +/* Routine to disable the flow control condition for RXQs */
> > > +void mvpp2_rxq_disable_fc(struct mvpp2_port *port)
> > 
> > Nothing seems to call these in this patch, so on its own, it's not obvious how
> > these are being called, and therefore what remedy to suggest for
> > num_active_cpus().
> 
> I don't think the current driver supports CPU hotplug; anyway, I can
> remove num_active_cpus() and just use the shared RX IRQ ID.

Sorry, but that is not really a decision the driver can make. It is
part of a kernel that _does_ support CPU hotplug, and the online
CPUs can be changed today.

It is likely that every distro out there builds the kernel with
CPU hotplug enabled.

If changing the online CPUs causes the driver to misbehave, that
is a(nother) bug with the driver.
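(The conventional remedy, for what it's worth, is to track CPU up/down events via the hotplug state machine rather than sampling num_active_cpus() once at probe time. A sketch using cpuhp_setup_state(); the callback names and bodies here are hypothetical, not something this patch contains:)

```c
#include <linux/cpuhotplug.h>

/* Hypothetical callbacks: re-evaluate the shared address space /
 * queue assignment whenever a CPU comes online or goes offline.
 */
static int mvpp2_cpu_online(unsigned int cpu)
{
	/* recompute shared address space for the new CPU count */
	return 0;
}

static int mvpp2_cpu_offline(unsigned int cpu)
{
	/* undo any per-CPU state for @cpu */
	return 0;
}

static int mvpp2_register_hotplug(void)
{
	/* Dynamic state: callbacks run as each CPU goes up or down. */
	return cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "net/mvpp2:online",
				 mvpp2_cpu_online, mvpp2_cpu_offline);
}
```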

-- 
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 40Mbps down 10Mbps up. Decent connectivity at last!
