Message-ID: <CO6PR18MB387313CF1DB7B16D015043CCB0AC9@CO6PR18MB3873.namprd18.prod.outlook.com>
Date:   Sun, 10 Jan 2021 18:24:30 +0000
From:   Stefan Chulski <stefanc@...vell.com>
To:     Russell King - ARM Linux admin <linux@...linux.org.uk>
CC:     "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "thomas.petazzoni@...tlin.com" <thomas.petazzoni@...tlin.com>,
        "davem@...emloft.net" <davem@...emloft.net>,
        Nadav Haklai <nadavh@...vell.com>,
        Yan Markman <ymarkman@...vell.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "kuba@...nel.org" <kuba@...nel.org>,
        "mw@...ihalf.com" <mw@...ihalf.com>,
        "andrew@...n.ch" <andrew@...n.ch>,
        "atenart@...nel.org" <atenart@...nel.org>
Subject: RE: [EXT] Re: [PATCH RFC net-next  11/19] net: mvpp2: add flow
 control RXQ and BM pool config callbacks

> >
> > +/* Routine to calculate a single queue's shared address space */
> > +static int mvpp22_calc_shared_addr_space(struct mvpp2_port *port)
> > +{
> > +	/* If the number of CPUs is greater than the number of threads,
> > +	 * return the last address space.
> > +	 */
> > +	if (num_active_cpus() >= MVPP2_MAX_THREADS)
> > +		return MVPP2_MAX_THREADS - 1;
> > +
> > +	return num_active_cpus();
> 
> Firstly - this can be written as:
> 
> 	return min(num_active_cpus(), MVPP2_MAX_THREADS - 1);

OK.
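
For the record, a minimal sketch of the rewrite I have in mind (untested;
using min_t() rather than min(), since num_active_cpus() returns unsigned int
while MVPP2_MAX_THREADS - 1 is a signed constant and the kernel's min() is
type-checked):

#include <linux/cpumask.h>
#include <linux/kernel.h>

/* Return the address space shared by all CPUs beyond the per-thread ones.
 * Clamps num_active_cpus() to the last available thread index.
 */
static int mvpp22_calc_shared_addr_space(struct mvpp2_port *port)
{
	return min_t(unsigned int, num_active_cpus(),
		     MVPP2_MAX_THREADS - 1);
}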

> Secondly - what if the number of active CPUs change, for example due to
> hotplug activity. What if we boot with maxcpus=1 and then bring the other
> CPUs online after networking has been started? The number of active CPUs is
> dynamically managed via the scheduler as CPUs are brought online or offline.
> 
> > +/* Routine to enable flow control for RXQs condition */
> > +void mvpp2_rxq_enable_fc(struct mvpp2_port *port)
> ...
> > +/* Routine to disable flow control for RXQs condition */
> > +void mvpp2_rxq_disable_fc(struct mvpp2_port *port)
> 
> Nothing seems to call these in this patch, so on its own, it's not obvious how
> these are being called, and therefore what remedy to suggest for
> num_active_cpus().

I don't think the current driver supports CPU hotplug; in any case, I can remove
num_active_cpus() and just use the shared RX IRQ ID.
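
If we did want to keep the per-CPU calculation instead, I assume it would need
a hotplug callback along these lines, so the value is re-evaluated as CPUs come
and go rather than sampled once (hypothetical sketch only, not part of this
series; the mvpp2_cpu_* callback names are invented for illustration):

#include <linux/cpuhotplug.h>

/* Hypothetical: recompute the shared address space / FC thresholds
 * whenever a CPU comes online or goes offline.
 */
static int mvpp2_cpu_online(unsigned int cpu)
{
	/* re-run the flow-control RXQ configuration here */
	return 0;
}

static int mvpp2_cpu_offline(unsigned int cpu)
{
	/* ditto for the offline path */
	return 0;
}

static int mvpp2_register_hotplug(void)
{
	/* Returns the dynamically allocated state on success,
	 * a negative errno on failure.
	 */
	return cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "net/mvpp2:online",
				 mvpp2_cpu_online, mvpp2_cpu_offline);
}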

Thanks.
