Date:	Wed, 29 May 2013 13:06:20 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Andrew Jones <drjones@...hat.com>
Cc:	tglx@...utronix.de, mingo@...hat.com, hpa@...or.com,
	x86@...nel.org, fenghua.yu@...el.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched/x86: construct all sibling maps if smt

On Wed, May 29, 2013 at 12:26:01PM +0200, Andrew Jones wrote:
> On Mon, May 27, 2013 at 07:09:00PM +0200, Andrew Jones wrote:
> > Commit 316ad248307fb ("sched/x86: Rewrite set_cpu_sibling_map()") broke
> > the construction of sibling maps, which also broke the booted_cores
> > accounting.
> > 
> > Before the rewrite, if smt was present, then each map was updated for
> > each smt sibling. After the rewrite, only cpu_sibling_mask gets updated,
> > as the llc and core maps depend on 'has_mc = x86_max_cores > 1' instead.
> > This leads to problems with topologies like the following:
> > 
> > (qemu -smp sockets=2,cores=1,threads=2)
> > 
> > processor	: 0
> > physical id	: 0
> > siblings	: 1    <= should be 2
> > core id		: 0
> > cpu cores	: 1
> > 
> > processor	: 1
> > physical id	: 0
> > siblings	: 1    <= should be 2
> > core id		: 0
> > cpu cores	: 0    <= should be 1
> > 
> > processor	: 2
> > physical id	: 1
> > siblings	: 1    <= should be 2
> > core id		: 0
> > cpu cores	: 1
> > 
> > processor	: 3
> > physical id	: 1
> > siblings	: 1    <= should be 2
> > core id		: 0
> > cpu cores	: 0    <= should be 1
> > 
> > This patch restores the former construction by defining has_mc as
> > (has_smt || x86_max_cores > 1). This should be fine as there were no
> > (has_smt && !has_mc) conditions in the context.
> > 
> > Signed-off-by: Andrew Jones <drjones@...hat.com>
> > ---
> >  arch/x86/kernel/smpboot.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
> > index 9c73b51817e47..886a3234eaff3 100644
> > --- a/arch/x86/kernel/smpboot.c
> > +++ b/arch/x86/kernel/smpboot.c
> > @@ -372,15 +372,15 @@ static bool __cpuinit match_mc(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
> >  
> >  void __cpuinit set_cpu_sibling_map(int cpu)
> >  {
> > -	bool has_mc = boot_cpu_data.x86_max_cores > 1;
> >  	bool has_smt = smp_num_siblings > 1;
> > +	bool has_mc = has_smt || boot_cpu_data.x86_max_cores > 1;
> >  	struct cpuinfo_x86 *c = &cpu_data(cpu);
> >  	struct cpuinfo_x86 *o;
> >  	int i;
> >  
> >  	cpumask_set_cpu(cpu, cpu_sibling_setup_mask);
> >  
> > -	if (!has_smt && !has_mc) {
> > +	if (!has_mc) {
> >  		cpumask_set_cpu(cpu, cpu_sibling_mask(cpu));
> >  		cpumask_set_cpu(cpu, cpu_llc_shared_mask(cpu));
> >  		cpumask_set_cpu(cpu, cpu_core_mask(cpu));
> > -- 
> > 1.8.1.4
> >
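The equivalence the commit message relies on (dropping the has_smt term
from the guard is safe once has_mc absorbs it) is easy to verify by
enumerating the predicate. A minimal userspace sketch, purely
illustrative: the variable names mirror smpboot.c, but the program itself
is not part of the patch.

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
	/* Enumerate the four (smt, multi-core) combinations. */
	for (int smt = 0; smt <= 1; smt++) {
		for (int mc = 0; mc <= 1; mc++) {
			bool has_smt    = smt;           /* smp_num_siblings > 1 */
			bool has_mc_old = mc;            /* x86_max_cores > 1 */
			bool has_mc_new = has_smt || mc; /* patched definition */

			/*
			 * By De Morgan, !has_mc_new == !has_smt && !has_mc_old,
			 * so the patched "if (!has_mc)" guard fires in exactly
			 * the same cases as the old "if (!has_smt && !has_mc)".
			 */
			printf("smt=%d mc=%d: old guard=%d new guard=%d\n",
			       smt, mc, !has_smt && !has_mc_old, !has_mc_new);
		}
	}
	return 0;
}

Both guard columns print identically for all four combinations, which is
exactly the "no (has_smt && !has_mc) conditions" claim above.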
> 
> Any acks? This patch fixes a regression. Also, in case anybody is
> wondering, this is not the same regression as the one already fixed by
> 
> ceb1cbac8eda6 sched/x86: Calculate booted cores after construction of sibling_mask
> 
> (Hmm, I probably should have renamed has_mc to has_mp, as the redefinition
> expands its scope. I'm not sure if that deserves a v2 though.)

Right, took me a while to bend my brain around that code again -- I
obviously don't have the best track record: this is the second bug found
in that code since I rewrote it (with the intent of making it 'easier'
to read, ha!).

Yes, I think your patch is correct, and your suggestion of doing
s/has_mc/has_mp/ seems a sensible one too.
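
For reference, the s/has_mc/has_mp/ rename discussed above would amount
to a hunk along these lines (a sketch of what a v2 might look like, not
a tested patch):

-	bool has_mc = has_smt || boot_cpu_data.x86_max_cores > 1;
+	bool has_mp = has_smt || boot_cpu_data.x86_max_cores > 1;
 ...
-	if (!has_mc) {
+	if (!has_mp) {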

Thanks!