Message-ID: <20140624211545.GA4603@linux.vnet.ibm.com>
Date:	Tue, 24 Jun 2014 14:15:45 -0700
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Dave Hansen <dave.hansen@...el.com>
Cc:	linux-kernel@...r.kernel.org, mingo@...nel.org,
	laijs@...fujitsu.com, dipankar@...ibm.com,
	akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
	josh@...htriplett.org, tglx@...utronix.de, peterz@...radead.org,
	rostedt@...dmis.org, dhowells@...hat.com, edumazet@...gle.com,
	dvhart@...ux.intel.com, fweisbec@...il.com, oleg@...hat.com,
	ak@...ux.intel.com, cl@...two.org, umgwanakikbuti@...il.com
Subject: Re: [PATCH tip/core/rcu] Reduce overhead of cond_resched() checks
 for RCU

On Tue, Jun 24, 2014 at 01:43:16PM -0700, Dave Hansen wrote:
> On 06/23/2014 05:39 PM, Paul E. McKenney wrote:
> > On Mon, Jun 23, 2014 at 05:20:30PM -0700, Dave Hansen wrote:
> >> On 06/23/2014 05:15 PM, Paul E. McKenney wrote:
> >>> Just out of curiosity, how many CPUs does your system have?  80?
> >>> If 160, looks like something bad is happening at 80.
> >>
> >> 80 cores, 160 threads.  >80 processes/threads is where we start using
> >> the second thread on the cores.  The tasks are also pinned to
> >> hyperthread pairs, so they disturb each other, and the scheduler moves
> >> them between threads on occasion, which causes extra noise.
> > 
> > OK, that could explain the near-flattening of throughput at around 80
> > processes.  Is 3.16.0-rc1-pf2 the kernel with the two RCU patches
> > applied?  If so, is the new sysfs parameter at its default value?
> 
> Here's 3.16-rc1 with e552592e applied and jiffies_till_sched_qs=12 vs. 3.15:
> 
> > https://www.sr71.net/~dave/intel/bb.html?2=3.16.0-rc1-paultry2-jtsq12&1=3.15
> 
> 3.16-rc1 is actually in the lead up until the end, when we're filling up
> the hyperthreads.  The same pattern holds when comparing
> 3.16-rc1+e552592e to 3.16-rc1 with ac1bea8 reverted:
> 
> > https://www.sr71.net/~dave/intel/bb.html?2=3.16.0-rc1-paultry2-jtsq12&1=3.16.0-rc1-wrevert
> 
> So, the current situation is generally _better_ than 3.15, except during
> the noisy ranges of the test where hyperthreading and the scheduler are
> coming into play.
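
As an aside on the test setup described above (illustrative only, not
part of the original exchange): pinning tasks to hyperthread pairs is
typically done with sched_setaffinity().  The CPU numbers in the sketch
below are assumptions; the real sibling pairs on a given box come from
/sys/devices/system/cpu/cpuN/topology/thread_siblings_list.

/*
 * Sketch only: pin the calling task to one hyperthread pair.
 * CPUs 0 and 80 are assumed to be sibling threads here; check
 * thread_siblings_list for the real numbering on your system.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(0, &set);	/* first sibling (assumed numbering) */
	CPU_SET(80, &set);	/* second sibling (assumed numbering) */

	if (sched_setaffinity(0, sizeof(set), &set) != 0) {
		perror("sched_setaffinity");
		return EXIT_FAILURE;
	}
	return EXIT_SUCCESS;
}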

Good to know that my intuition is not yet completely broken.  ;-)

>                     I made the mistake of doing all my spot-checks at
> the 160-thread number, which honestly wasn't the best point to be
> looking at.

That would do it!  ;-)

> At this point, I'm satisfied with how e552592e is dealing with the
> original regression.  Thanks for all the prompt attention on this one, Paul.

Glad it worked out.  I have sent a pull request to Ingo to hopefully
get this into 3.16.
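
As an aside for anyone rerunning the comparison above (illustrative,
not from the original exchange): with e552592e applied, the knob Dave
set is an rcutree module parameter, so it can be given at boot as
rcutree.jiffies_till_sched_qs=12 (assuming the usual rcutree. prefix).
A minimal sketch for reading the current value back, assuming the
parameter is exported under /sys/module/rcutree/parameters/, might
look like this:

/*
 * Sketch only: print the current rcutree.jiffies_till_sched_qs value.
 * Assumes the usual /sys/module/rcutree/parameters/ location for the
 * built-in rcutree parameters.
 */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	const char *path =
		"/sys/module/rcutree/parameters/jiffies_till_sched_qs";
	FILE *f = fopen(path, "r");
	unsigned long val;

	if (!f) {
		perror(path);
		return EXIT_FAILURE;
	}
	if (fscanf(f, "%lu", &val) != 1) {
		fprintf(stderr, "unexpected format in %s\n", path);
		fclose(f);
		return EXIT_FAILURE;
	}
	fclose(f);
	printf("jiffies_till_sched_qs = %lu\n", val);
	return EXIT_SUCCESS;
}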

							Thanx, Paul
