Date:   Tue, 8 Aug 2017 06:49:39 +0000
From:   "Ofer Levi(SW)" <oferle@...lanox.com>
To:     Peter Zijlstra <peterz@...radead.org>
CC:     "rusty@...tcorp.com.au" <rusty@...tcorp.com.au>,
        "vatsa@...ibm.com" <vatsa@...ibm.com>,
        "mingo@...hat.com" <mingo@...hat.com>,
        "Vineet.Gupta1@...opsys.com" <Vineet.Gupta1@...opsys.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: hotplug support for arch/arc/plat-eznps platform


> On Mon, Aug 07, 2017 at 01:41:38PM +0000, Ofer Levi(SW) wrote:
> > > You've failed to explain why you think hotplug should be a
> > > performance critical path.
> > 1. Hotplug bring-up of 4K cpus takes 40 minutes. Way too much for any user.
> > 2. plat-eznps is a network processor, where bring-up time is sensitive.
> 
> But who is doing the actual hotplug? Why would you ever unplug or plug a CPU
> in a time-critical situation?

The idea behind implementing hotplug for this arch is to shorten the time to traffic processing.
This way, instead of waiting ~5 min for all cpus to boot, the application running on cpu 0 will
loop booting the other cpus, assigning the traffic-processing application to each as it comes up.
Outgoing traffic will build up until all cpus are up and running at the full traffic rate.
This method allows traffic processing to start after ~20 sec instead of after 5 min, as sketched below.
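
Roughly, the cpu-0 loop would look like the following minimal userspace sketch.
The sysfs "online" file is the standard Linux hotplug interface; NR_CPUS and the
worker-assignment step are placeholders for our platform specifics:

	#include <stdio.h>

	#define NR_CPUS 4096	/* placeholder: platform cpu count */

	int main(void)
	{
		char path[64];
		int cpu;

		for (cpu = 1; cpu < NR_CPUS; cpu++) {	/* cpu 0 is already up */
			FILE *f;

			snprintf(path, sizeof(path),
				 "/sys/devices/system/cpu/cpu%d/online", cpu);
			f = fopen(path, "w");
			if (!f) {
				perror(path);
				continue;
			}
			fputs("1", f);	/* kicks off hotplug -> sched_cpu_activate() */
			fclose(f);
			/* placeholder: pin/launch the traffic-processing
			 * application on 'cpu' (e.g. via sched_setaffinity()) */
		}
		return 0;
	}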

> 
> > > I'm also not seeing how it would be different from boot; you'd be
> > > looking at a similar cost for SMP bringup.
> > Bring-up time of 4K cpus during kernel boot takes 4.5 minutes.
> > The function in question runs only once smp init has completed.
> > If I understand correctly, whatever this function is doing is
> > performed after all cpus were brought up during kernel boot.
> 
> Doesn't make sense. If you look at smp_init(), boot brings up the CPUs one at
> a time.
> 
> So how can boot be different from hot-plugging them?
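
For reference, smp_init() does indeed loop cpu_up() over the present cpus, so
cold boot and runtime hotplug share the same bring-up path. A rough paraphrase
of kernel/smp.c (~v4.13; names are real, details elided):

	void __init smp_init(void)
	{
		unsigned int cpu;
		/* ... */
		for_each_present_cpu(cpu) {
			if (num_online_cpus() >= setup_max_cpus)
				break;
			if (!cpu_online(cpu))
				cpu_up(cpu);	/* same state machine as runtime hotplug */
		}
		/* ... */
	}

The difference is not in the bring-up itself, but in what runs afterwards.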

Please have a look at the following code in kernel/sched/core.c, sched_cpu_activate():

	if (sched_smp_initialized) {
		sched_domains_numa_masks_set(cpu);
		cpuset_cpu_active();
	}
The cpuset_cpu_active() call eventually leads to the function in question, partition_sched_domains().
When cold-booting cpus, the sched_smp_initialized flag is still false, so partition_sched_domains() does not run.
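
A rough paraphrase of the gating (kernel/sched/core.c, ~v4.13; names are real,
bodies heavily elided):

	void __init sched_init_smp(void)
	{
		/* ... builds the sched domains once, for all boot cpus ... */
		sched_smp_initialized = true;	/* set only after boot bring-up */
	}

	/*
	 * Runtime hotplug, with the flag now true, rebuilds the domains on
	 * every single cpu_up():
	 *
	 *   sched_cpu_activate()
	 *     -> cpuset_cpu_active()
	 *       -> cpuset_update_active_cpus()
	 *         -> partition_sched_domains()   (directly when !CONFIG_CPUSETS,
	 *            via rebuild_sched_domains() otherwise)
	 */

So onlining 4K cpus one by one pays that domain rebuild 4K times, while boot
pays it only once, at the end.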

This leads me back to my questions.

Thanks.
