Message-ID: <53C7B247.2070309@arm.com>
Date: Thu, 17 Jul 2014 13:23:51 +0200
From: Dietmar Eggemann <dietmar.eggemann@....com>
To: Peter Zijlstra <peterz@...radead.org>
CC: Bruno Wolff III <bruno@...ff.to>, Josh Boyer <jwboyer@...hat.com>,
	"mingo@...hat.com" <mingo@...hat.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: Scheduler regression from caffcdd8d27ba78730d5540396ce72ad022aff2c

On 17/07/14 11:04, Peter Zijlstra wrote:
> On Thu, Jul 17, 2014 at 10:57:55AM +0200, Dietmar Eggemann wrote:
>> There is also the possibility that the memory for sched_group sg is not
>> (completely) zeroed out:
>>
>>   sg = kzalloc_node(sizeof(struct sched_group) + cpumask_size(),
>>                     GFP_KERNEL, cpu_to_node(j));
>>
>>   struct sched_group {
>>           ...
>>            * NOTE: this field is variable length. (Allocated dynamically
>>            * by attaching extra space to the end of the structure,
>>            * depending on how many CPUs the kernel has booted up with)
>>            */
>>           unsigned long cpumask[0];
>
> well kZalloc should Zero the entire allocated size, and the specified
> size very much includes the cpumask size as per:
>
>   sizeof(struct sched_group) + cpumask_size()

Yes, I think so too.

> But yeah, I'm also a bit puzzled why this goes bang. Makes me worry we
> scribble it somewhere or so.

But then, this must be happening in build_sched_domains() between
__visit_domain_allocation_hell()->__sdt_alloc() and build_sched_groups().

Couldn't catch this phenomenon by adding a fake SMT level (just a copy of
the real MC level) to my ARM TC2 (dual cluster, dual/triple core, no
hyper-threading) to provoke sd degenerate. It does not show the issue and
the MC level gets degenerated nicely. (Might not be the real example,
since SMT and MC are using the same cpu mask here.)

@@ -281,6 +281,7 @@ static inline const int cpu_corepower_flags(void)
 }

 static struct sched_domain_topology_level arm_topology[] = {
+	{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(SMT) },
 #ifdef CONFIG_SCHED_MC
 	{ cpu_corepower_mask, cpu_corepower_flags, SD_INIT_NAME(GMC) },
 	{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },

Maybe by enabling sched_debug on the command line (earlyprintk=keep
sched_debug), Bruno could spot topology setup issues on his XEON machine
which could lead to this problem, unless the sg cpumask gets zeroed out in
build_sched_groups() a second time?

As an example, a dmesg snippet from an Intel(R) Core(TM) i7-2600 CPU @
3.40GHz booted with 'earlyprintk=keep sched_debug':

...
[ 0.119737] CPU0 attaching sched-domain:
[ 0.119740]  domain 0: span 0-1 level SIBLING
[ 0.119742]   groups: 0 (cpu_power = 588) 1 (cpu_power = 588)
[ 0.119745]  domain 1: span 0-3 level MC
[ 0.119747]   groups: 0-1 (cpu_power = 1176) 2-3 (cpu_power = 1176)
...
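
For reference, a minimal userspace sketch of the allocation pattern under
discussion (calloc() standing in for kzalloc_node(); NR_CPUS, struct
sg_model and its fields are made-up placeholders, not the kernel
definitions). It only illustrates the point Peter makes above: a single
zeroing allocation sized as header plus cpumask also zeroes the trailing
variable-length array, because the requested size includes it.

#include <stdio.h>
#include <stdlib.h>

#define NR_CPUS      8                    /* assumed CPU count, not the kernel's */
#define CPUMASK_SIZE ((NR_CPUS + 7) / 8)  /* stand-in for cpumask_size(), in bytes */

/* Userspace stand-in for struct sched_group: only the trailing array matters. */
struct sg_model {
	unsigned int group_weight;        /* placeholder for the real fields */
	/* variable length: extra space is attached to the end of the structure */
	unsigned char cpumask[];
};

int main(void)
{
	/*
	 * One zeroing allocation sized header + cpumask, mirroring
	 * kzalloc_node(sizeof(struct sched_group) + cpumask_size(), ...):
	 * the size covers the trailing array, so it comes back zeroed too.
	 */
	struct sg_model *sg = calloc(1, sizeof(*sg) + CPUMASK_SIZE);
	if (!sg)
		return 1;

	int dirty = 0;
	for (size_t i = 0; i < CPUMASK_SIZE; i++)
		if (sg->cpumask[i] != 0)
			dirty = 1;

	printf("trailing cpumask %s after allocation\n",
	       dirty ? "NOT zeroed" : "zeroed");
	free(sg);
	return 0;
}

If the same holds for the real allocation in __sdt_alloc(), a non-zero
sg->cpumask seen later would point at a write between
__visit_domain_allocation_hell()->__sdt_alloc() and build_sched_groups()
rather than at the allocation itself, which is what the discussion above
suggests.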