Message-ID: <20070611144401.GA9102@linux.vnet.ibm.com>
Date:	Mon, 11 Jun 2007 07:44:02 -0700
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	linux-kernel@...r.kernel.org, linux-rt-users@...r.kernel.org,
	Thomas Gleixner <tglx@...utronix.de>,
	Dinakar Guniguntala <dino@...ibm.com>
Subject: Re: v2.6.21.4-rt11

On Mon, Jun 11, 2007 at 09:36:34AM +0200, Ingo Molnar wrote:
> 
> * Paul E. McKenney <paulmck@...ux.vnet.ibm.com> wrote:
> 
> > 2.6.21.4-rt12 boots on 4-CPU Opteron and passes several hours of 
> > rcutorture.  However, if I simply do "modprobe rcutorture", the kernel 
> > threads do not spread across the CPUs as I would expect them to, even 
> > given CFS.  Instead, the readers all stack up on a single CPU, and I 
> > have to use the "taskset" command to spread them out manually.  Is 
> > there some config parameter I am missing out on?
> 
> hm, what affinity do they start out with? Could they all be pinned to 
> CPU#0 by default?

They start off with affinity masks of 0xf on a 4-CPU system.  I would
expect them to load-balance across the four CPUs, but they all stay
on the same CPU until long after I lose patience (many minutes).

Since there are eight readers, I use the following commands:

	taskset -p 3 pid1
	taskset -p 3 pid2
	taskset -p 6 pid3
	taskset -p 6 pid4
	taskset -p c pid5
	taskset -p c pid6
	taskset -p 9 pid7
	taskset -p 9 pid8

where the "pidn" are all replaced by the pids of the torture readers.
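
(A sketch of the same thing as a loop, assuming the reader kthreads
are visible to pgrep as "rcu_torture_rea" -- adjust the name to
whatever ps actually shows on your system:)

	# Apply the masks above to the torture readers in pid order.
	set -- 3 3 6 6 c c 9 9
	for pid in $(pgrep rcu_torture_rea); do
		[ $# -gt 0 ] || break
		taskset -p "$1" "$pid"
		shift
	done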

Before I do this, the processes are all sharing a single CPU.  After I
do this, they are spread reasonably nicely over the CPUs.  I do need to
allow some migration in order to fully test the realtime RCU variants
in the various preemption scenarios.
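
(Decoding the masks, on the usual taskset convention that bit n selects
CPU n: each pair of readers gets a two-CPU mask, and the masks overlap
around the ring 0-1, 1-2, 2-3, 3-0, so every reader still has one other
CPU it can migrate to.)

	0x3 -> CPUs 0,1
	0x6 -> CPUs 1,2
	0xc -> CPUs 2,3
	0x9 -> CPUs 0,3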

							Thanx, Paul
