Message-ID: <20071017022303.GA27457@linux-os.sc.intel.com>
Date: Tue, 16 Oct 2007 19:23:03 -0700
From: "Siddha, Suresh B" <suresh.b.siddha@...el.com>
To: Ken Chen <kenchen@...gle.com>
Cc: Ingo Molnar <mingo@...e.hu>, Nick Piggin <nickpiggin@...oo.com.au>,
	"Siddha, Suresh B" <suresh.b.siddha@...el.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [patch] sched: fix improper load balance across sched domain

On Tue, Oct 16, 2007 at 12:07:06PM -0700, Ken Chen wrote:
> We recently discovered a nasty performance bug in the kernel CPU load
> balancer, where we were hit by a 50% performance regression.
>
> When tasks are assigned via cpu affinity to a subset of CPUs that spans
> sched domains (either a ccNUMA node or the new multi-core domain), the
> kernel fails to perform proper load balancing at these domains, because
> several pieces of logic in find_busiest_group() misidentify the busiest
> sched group within a given domain. This leads to inadequate load
> balancing and causes the 50% performance hit.
>
> To give you a concrete example, on a dual-core, 2-socket numa system,
> there are 4 logical cpus, organized as:

oops, this issue can easily happen when cores are not sharing caches. I
think this is what is happening on your setup, right?

thanks,
suresh
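For readers following along, here is a minimal sketch (not from this thread) of the affinity setup Ken describes: sched_setaffinity(2) pins a task to one core in each of the two sockets, so any balancing between the allowed CPUs must cross the NUMA sched domain. The CPU numbering is an assumption about how such a dual-core, 2-socket box enumerates its 4 logical cpus.

	/*
	 * Sketch: restrict a task to a CPU subset that spans sched
	 * domains. CPUs 0 and 2 are assumed to sit in different
	 * sockets on the example machine.
	 */
	#define _GNU_SOURCE
	#include <sched.h>
	#include <stdio.h>
	#include <stdlib.h>

	int main(void)
	{
		cpu_set_t mask;

		CPU_ZERO(&mask);
		CPU_SET(0, &mask);	/* a core in socket 0 */
		CPU_SET(2, &mask);	/* a core in socket 1 (assumed numbering) */

		/* Pin the calling task to that cross-domain subset. */
		if (sched_setaffinity(0, sizeof(mask), &mask) == -1) {
			perror("sched_setaffinity");
			exit(EXIT_FAILURE);
		}

		/*
		 * Busy-loop; with several such tasks running, a buggy
		 * find_busiest_group() could leave them stacked on one
		 * CPU instead of spreading them across the two sockets.
		 */
		for (;;)
			;
	}

Running two of these on an otherwise idle box and watching per-CPU load is enough to see whether the balancer spreads them across the allowed CPUs.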