Message-ID: <53A8B884.6000600@intel.com>
Date:	Mon, 23 Jun 2014 16:30:12 -0700
From:	Dave Hansen <dave.hansen@...el.com>
To:	paulmck@...ux.vnet.ibm.com
CC:	linux-kernel@...r.kernel.org, mingo@...nel.org,
	laijs@...fujitsu.com, dipankar@...ibm.com,
	akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
	josh@...htriplett.org, tglx@...utronix.de, peterz@...radead.org,
	rostedt@...dmis.org, dhowells@...hat.com, edumazet@...gle.com,
	dvhart@...ux.intel.com, fweisbec@...il.com, oleg@...hat.com,
	ak@...ux.intel.com, cl@...two.org, umgwanakikbuti@...il.com
Subject: Re: [PATCH tip/core/rcu] Reduce overhead of cond_resched() checks
 for RCU

On 06/23/2014 11:09 AM, Paul E. McKenney wrote:
> So let's see...  The open1 benchmark sits in a loop doing open()
> and close(), and probably spends most of its time in the kernel.
> It doesn't do much context switching.  I am guessing that you don't
> have CONFIG_NO_HZ_FULL=y, or the boot/sysfs parameter would not have
> much effect because then the first quiescent-state-forcing attempt would
> likely finish the grace period.
> 
> So, given that short grace periods help other workloads (I have the
> scars to prove it), and given that the patch fixes some real problems,
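
(Aside for anyone not running will-it-scale: open1 is essentially a
tight open()/close() loop per process or thread, with throughput being
roughly the number of iterations completed.  A minimal sketch of that
kind of loop -- not the actual will-it-scale source, and with an
arbitrary scratch-file path -- would look something like:

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		unsigned long i;

		/* hammer the open()/close() path on a scratch file */
		for (i = 0; i < 10000000; i++) {
			int fd = open("/tmp/open1-scratch",
				      O_RDWR | O_CREAT, 0600);

			if (fd < 0) {
				perror("open");
				return 1;
			}
			close(fd);
		}
		return 0;
	}

The "processes" numbers referenced below come from running one such
loop in each of N processes and summing the iteration counts.)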

I'm not disputing that short grace periods _can_ help some workloads, or
arguing that one setting is inherently better than the other.  The patch
in question changes existing behavior by shortening grace periods, and
that change removes some of the benefit that my system gets out of RCU.
I suspect this affects a lot more systems than mine, but my system's
large core count makes it easier to see.

Perhaps I'm misunderstanding the original patch's intent, but it seemed
to me to be working around an overactive debug message.  While that
message is often _useful_, it was firing spuriously in the case the
patch addresses.

> and given that the large number for rcutree.jiffies_till_sched_qs got
> us within 3%, shouldn't we consider this issue closed?

With the default value for the tunable, the regression is still solidly
over 10%.  I think we can have a reasonable argument about calling this
closed once the default delta is down to the low single digits.

One more thing I just realized: this isn't a scalability problem, at
least with rcutree.jiffies_till_sched_qs=12.  There's a pretty
consistent delta in throughput throughout the entire range of threads
from 1->160.  See the "processes" column in the data files:

plain 3.15:
	https://www.sr71.net/~dave/intel/willitscale/systems/bigbox/3.15/open1.csv
e552592e0383bc:
	https://www.sr71.net/~dave/intel/willitscale/systems/bigbox/3.16.0-rc1-pf2/open1.csv

or visually:

	https://www.sr71.net/~dave/intel/array-join.html?1=willitscale/systems/bigbox/3.15&2=willitscale/systems/bigbox/3.16.0-rc1-pf2&hide=linear,threads_idle,processes_idle
