Date:	Mon, 11 Sep 2006 10:37:34 +0200
From:	Nick Piggin <npiggin@...e.de>
To:	Christoph Lameter <clameter@....com>
Cc:	"Siddha, Suresh B" <suresh.b.siddha@...el.com>, akpm@...l.org,
	linux-kernel@...r.kernel.org, mingo@...e.hu
Subject: Re: [PATCH] Fix longstanding load balancing bug in the scheduler.

On Sat, Sep 09, 2006 at 12:56:16PM -0700, Christoph Lameter wrote:
> On Fri, 8 Sep 2006, Christoph Lameter wrote:
> 
> > > one or more, it is unnecessary for the common case.
> > 
> > The common case is an arch with far fewer cpus. The maximum on i386,
> > e.g., is 255, meaning 8 bytes. That fits in the cacheline that is already
> > used for the stack frame of the calling function.
> 
> Ughh. Wrong. 255 cpus require 32 bytes. Systems rarely have that
> many. If you configure a kernel with fewer than 32 cpus then this will be
> one word on the stack.
> 
> Also note that the patch restricts the search to online cpus. The 
> scheduler will check offline cpus without this patch. That may actually 
> result in speed improvements since the cachelines from offline cpus are
> no longer brought in during the search for the busiest group / cpu.
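
To make the stack-size arithmetic above concrete: a cpumask needs one bit per
possible CPU, rounded up to whole unsigned longs, so NR_CPUS <= 32 fits in a
single 32-bit word while NR_CPUS = 255 needs 32 bytes. A minimal user-space
sketch (not kernel code; the macro names below are made up for illustration):

#include <stdio.h>

/* One bit per possible CPU, rounded up to whole unsigned longs,
 * mirroring how the kernel sizes a cpumask from NR_CPUS. */
#define BITS_PER_LONG_EX   (8 * sizeof(unsigned long))
#define CPUMASK_LONGS(n)   (((n) + BITS_PER_LONG_EX - 1) / BITS_PER_LONG_EX)
#define CPUMASK_BYTES(n)   (CPUMASK_LONGS(n) * sizeof(unsigned long))

int main(void)
{
	/* On a 32-bit build (4-byte long): 32 CPUs -> 4 bytes (one word),
	 * 255 CPUs -> 32 bytes, too big to hide in the caller's stack
	 * frame cacheline. */
	printf("NR_CPUS=32  -> %zu bytes\n", CPUMASK_BYTES(32));
	printf("NR_CPUS=255 -> %zu bytes\n", CPUMASK_BYTES(255));
	return 0;
}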

This should not be the case. The sched-domain structure should always
reflect only the online CPUs (see our hotplug cpu handler), and if you
find otherwise then that would be a bug.
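
For reference, a rough sketch of the kind of restriction being discussed,
using the 2.6-era cpumask helpers. This is only a paraphrase, not the actual
find_busiest_group() code, and whether the extra masking is needed at all is
exactly the point in question if sd->span already contains only online CPUs:

	cpumask_t tmp;
	int i;

	/* Visit only CPUs that are both in the domain's span and online;
	 * without the restriction every CPU in the span is scanned, which
	 * touches per-cpu runqueue cachelines even for offline CPUs. */
	cpus_and(tmp, sd->span, cpu_online_map);

	for_each_cpu_mask(i, tmp) {
		/* ... look at cpu_rq(i) and accumulate load for the
		 *     busiest group / cpu search ... */
	}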
