Message-ID: <20070613012746.GH8544@linux.vnet.ibm.com>
Date:	Tue, 12 Jun 2007 18:27:46 -0700
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	linux-kernel@...r.kernel.org, linux-rt-users@...r.kernel.org,
	Thomas Gleixner <tglx@...utronix.de>,
	Dinakar Guniguntala <dino@...ibm.com>
Subject: Re: v2.6.21.4-rt11

On Tue, Jun 12, 2007 at 11:37:58PM +0200, Ingo Molnar wrote:
> 
> * Paul E. McKenney <paulmck@...ux.vnet.ibm.com> wrote:
> 
> > Not a biggie for me, since I can easily do the taskset commands to 
> > force the processes to spread out, but I am worried that casual users 
> > of rcutorture won't know to do this -- thus not really torturing RCU. 
> > It would not be hard to modify rcutorture to affinity the tasks so as 
> > to spread them, but this seems a bit ugly.
> 
> does it get any better if you renice them from +19 to 0? (and then back 
> to +19?)

Interesting!

That did spread them evenly across two CPUs, but not across all four.

I took a look at CFS, which seems to operate on millisecond timescales.
Since the rcu_torture_reader() code enters the scheduler on each
iteration, it never gives CFS millisecond-scale bursts of CPU
consumption, which may keep CFS from doing reasonable load balancing.

So I inserted the following code at the beginning of rcu_torture_reader():

	set_user_nice(current, 19);	/* start from the readers' usual +19 */
	set_user_nice(current, 0);	/* renice to 0 so the balancer will migrate us */
	for (idx = 0; idx < 1000; idx++) {
		udelay(10);		/* burn ~10 ms of CPU in one visible burst */
	}
	set_user_nice(current, 19);	/* and back to +19 for the actual run */

This worked much better:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND           
18600 root      39  19     0    0    0 R   50  0.0   0:09.57 rcu_torture_rea    
18599 root      39  19     0    0    0 R   50  0.0   0:09.56 rcu_torture_rea    
18598 root      39  19     0    0    0 R   49  0.0   0:10.33 rcu_torture_rea    
18602 root      39  19     0    0    0 R   49  0.0   0:10.34 rcu_torture_rea    
18596 root      39  19     0    0    0 R   47  0.0   0:09.48 rcu_torture_rea    
18601 root      39  19     0    0    0 R   46  0.0   0:09.56 rcu_torture_rea    
18595 root      39  19     0    0    0 R   45  0.0   0:09.23 rcu_torture_rea    
18597 root      39  19     0    0    0 R   44  0.0   0:10.92 rcu_torture_rea    
18590 root      39  19     0    0    0 R   10  0.0   0:02.23 rcu_torture_wri    
18591 root      39  19     0    0    0 D    2  0.0   0:00.34 rcu_torture_fak    
18592 root      39  19     0    0    0 D    2  0.0   0:00.35 rcu_torture_fak    
18593 root      39  19     0    0    0 D    2  0.0   0:00.35 rcu_torture_fak    
18594 root      39  19     0    0    0 D    2  0.0   0:00.33 rcu_torture_fak    
18603 root      15  -5     0    0    0 S    1  0.0   0:00.06 rcu_torture_sta    

(The first eight tasks are readers, while the last six are the update
and statistics threads, which don't consume much CPU, so the above is
pretty close to optimal.)

I stopped and restarted rcutorture several times, and it spread nicely
each time, at least aside from the time that makewhatis decided to fire
up just as I started rcutorture.

But this is admittedly a -very- crude hack.

One approach would be to make them all spin until a few milliseconds
after the last one was created.  I would also like to spread the readers
separately from the other tasks, which could be done with a two-stage
approach: spread the writer and fakewriter tasks first, then spread
the readers.  This seems a bit nicer, and I will play with it.
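
For concreteness, here is a rough, untested sketch of what I have in
mind; the spread_deadline global and the two helper names are made up
for illustration, and rcutorture.c already pulls in the headers this
needs.  rcu_torture_init() would set spread_deadline to a few
milliseconds after spawning the last task, the writer and fakewriter
tasks would call the first helper before entering their main loops,
and the readers would call the second:

	/* Hypothetical sketch only -- rcu_torture_init() would set
	 * spread_deadline to jiffies + a few milliseconds once the
	 * last torture task has been created. */
	static unsigned long spread_deadline;
	static atomic_t updaters_spread = ATOMIC_INIT(0);

	/* Stage 1: writer and fakewriter tasks spread themselves first. */
	static void rcu_torture_spread_updater(void)
	{
		set_user_nice(current, 0);
		while (time_before(jiffies, spread_deadline))
			udelay(10);	/* visible CPU burst for the load balancer */
		set_user_nice(current, 19);
		atomic_inc(&updaters_spread);
	}

	/* Stage 2: readers wait until the one writer and the nfakewriters
	 * fakewriter tasks have spread, then spread themselves the same way. */
	static void rcu_torture_spread_reader(void)
	{
		int i;

		while (atomic_read(&updaters_spread) < nfakewriters + 1)
			schedule_timeout_uninterruptible(1);
		set_user_nice(current, 0);
		for (i = 0; i < 1000; i++)
			udelay(10);	/* ~10 ms burst, as in the hack above */
		set_user_nice(current, 19);
	}

The deadline spin covers the "spin until a few milliseconds after the
last task is created" part, and gating the readers on updaters_spread
keeps them from competing with the updaters while those are being
balanced.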

In the meantime, thoughts on more-maintainable ways of making this work?

						Thanx, Paul