Date:	Thu, 18 Feb 2010 09:20:58 +1100
From:	Michael Neuling <mikey@...ling.org>
To:	Peter Zijlstra <peterz@...radead.org>
cc:	Joel Schopp <jschopp@...tin.ibm.com>, Ingo Molnar <mingo@...e.hu>,
	linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org,
	ego@...ibm.com
Subject: Re: [PATCHv4 2/2] powerpc: implement arch_scale_smt_power for Power7

> Suppose for a moment we have 2 threads (hot-unplugged thread 1 and 3, we
> can construct an equivalent but more complex example for 4 threads), and
> we have 4 tasks, 3 SCHED_OTHER of equal nice level and 1 SCHED_FIFO, the
> SCHED_FIFO task will consume exactly 50% walltime of whatever cpu it
> ends up on.
> 
> In that situation, provided that each cpu's cpu_power is of equal
> measure, scale_rt_power() ensures that we run 2 SCHED_OTHER tasks on the
> cpu that doesn't run the RT task, and 1 SCHED_OTHER task next to the RT
> task, so that each task consumes 50%, which is all fair and proper.
> 
> However, if you do the above, thread 0 will have +75% = 1.75 and thread
> 2 will have -75% = 0.25.  If the RT task then lands on thread 0, we'll
> have 0.875 vs 0.25; if it lands on thread 2, 1.75 vs 0.125.  In either
> case thread 0 will receive too many (if not all) SCHED_OTHER tasks.
> 
> That is, unless these threads 2 and 3 really are _that_ weak, at which
> point one wonders why IBM bothered with the silicon ;-)

Peter,

Threads 2 & 3 aren't weaker than 0 & 1, but...

The core has dynamic SMT mode switching which is controlled by the
hypervisor (IBM's PHYP).  There are 3 SMT modes:
	SMT1 uses thread  0
	SMT2 uses threads 0 & 1
	SMT4 uses threads 0, 1, 2 & 3
When in any particular SMT mode, all active threads have the same
performance as each other (i.e. at any given moment, they all perform
identically).
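
Restating that list as code, purely for illustration (the masks just
encode the thread lists above):

	/* SMT mode -> mask of hardware threads in use (illustrative) */
	static const unsigned long smt_mode_threads[] = {
		[1] = 0x1,	/* SMT1: thread 0 only      */
		[2] = 0x3,	/* SMT2: threads 0 & 1      */
		[4] = 0xf,	/* SMT4: threads 0, 1, 2, 3 */
	};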

The SMT mode switching works such that when Linux has threads 2 & 3 idle
and 0 & 1 active, it will cede (H_CEDE hypercall) threads 2 and 3 in the
idle loop and the hypervisor will automatically switch that core to SMT2
(independently of other cores).  The opposite is not true: if threads
0 & 1 are idle and 2 & 3 are active, we stay in SMT4 mode.

Similarly, if thread 0 is active and threads 1, 2 & 3 are idle, we'll go
into SMT1 mode.
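
To make the cede path concrete, here is a much-simplified sketch of the
idle loop described above (not the actual pseries idle code; only the
H_CEDE hypercall is real, the loop structure is assumed):

	/* Idle loop on a pseries guest cpu, heavily simplified. */
	while (!need_resched()) {
		/*
		 * Give this hardware thread back to the hypervisor.
		 * Once threads 2 & 3 of a core are both ceded, PHYP
		 * switches that core to SMT2 by itself.
		 */
		plpar_hcall_norets(H_CEDE);
	}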

If we can get the core into a lower SMT mode (SMT1 is best), the threads
will perform better (since they share fewer core resources).  Hence when
we have idle threads, we want them to be the higher-numbered ones.
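
One way to express that preference to the load balancer would be to
bias cpu_power towards the low-numbered threads, along the lines of the
sketch below.  This is only an illustration of the idea, not the actual
patch; the 25%-per-thread step is an arbitrary example value:

	/*
	 * Sketch: advertise more capacity on low-numbered threads so
	 * the load balancer fills them first, leaving the high threads
	 * idle and letting PHYP drop the core to SMT2/SMT1.
	 */
	unsigned long arch_scale_smt_power(struct sched_domain *sd, int cpu)
	{
		int thread = cpu_thread_in_core(cpu);

		/* threads 0..3 -> 1.25x, 1.0x, 0.75x, 0.5x of nominal */
		return SCHED_LOAD_SCALE + (1 - thread) * (SCHED_LOAD_SCALE / 4);
	}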

So to answer your question, threads 2 and 3 aren't weaker than the other
threads when in SMT4 mode.  It's that if we idle threads 2 & 3, threads
0 & 1 will speed up since we'll move to SMT2 mode.

I'm pretty vague on Linux scheduler details, so I'm a bit at sea as to
how to solve this.  Can you suggest any mechanisms we currently have in
the kernel to reflect these properties, or do you think we need to
develop something new?  If so, any pointers as to where we should look?

Thanks,
Mikey
