Message-Id: <20170714040553.GR2393@linux.vnet.ibm.com>
Date:   Thu, 13 Jul 2017 21:05:53 -0700
From:   "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:     "Li, Aubrey" <aubrey.li@...ux.intel.com>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Andi Kleen <ak@...ux.intel.com>,
        Frederic Weisbecker <fweisbec@...il.com>,
        Christoph Lameter <cl@...ux.com>,
        Aubrey Li <aubrey.li@...el.com>, tglx@...utronix.de,
        len.brown@...el.com, rjw@...ysocki.net, tim.c.chen@...ux.intel.com,
        arjan@...ux.intel.com, yang.zhang.wz@...il.com, x86@...nel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH v1 00/11] Create fast idle path for short idle periods

On Fri, Jul 14, 2017 at 11:47:32AM +0800, Li, Aubrey wrote:
> On 2017/7/13 23:20, Paul E. McKenney wrote:
> > On Thu, Jul 13, 2017 at 04:53:11PM +0200, Peter Zijlstra wrote:
> >> On Thu, Jul 13, 2017 at 10:48:55PM +0800, Li, Aubrey wrote:
> >>
> >>> - In total, from arch_cpu_idle_enter entry to arch_cpu_idle_exit return
> >>>   costs 9122ns - 15318ns.
> >>>   Within this period (arch idle), rcu_idle_enter costs 1985ns - 2262ns
> >>>   and rcu_idle_exit costs 1813ns - 3507ns.
> >>>
> >>> Besides RCU,
> >>
> >> So Paul wants more details on where RCU hurts so we can try to fix it.
> > 
> > More specifically: rcu_needs_cpu(), rcu_prepare_for_idle(),
> > rcu_cleanup_after_idle(), rcu_eqs_enter(), rcu_eqs_enter_common(),
> > rcu_dynticks_eqs_enter(), do_nocb_deferred_wakeup(),
> > rcu_dynticks_task_enter(), rcu_eqs_exit(), rcu_eqs_exit_common(),
> > rcu_dynticks_task_exit(), rcu_dynticks_eqs_exit().
> > 
> > The first three (rcu_needs_cpu(), rcu_prepare_for_idle(), and
> > rcu_cleanup_after_idle()) should not be significant unless you have
> > CONFIG_RCU_FAST_NO_HZ=y.  If you do, it would be interesting to learn
> > how often invoke_rcu_core() is invoked from rcu_prepare_for_idle()
> > and rcu_cleanup_after_idle(), as this can raise softirq.  Also
> > rcu_accelerate_cbs() and rcu_try_advance_all_cbs().
> > 
> > Knowing which of these is causing the most trouble might help me
> > reduce the overhead in the current idle path.
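
As a minimal sketch of how per-function cost could be collected (the wrapper
and counter names below are hypothetical, not from any posted patch), one can
bracket a suspect call with local_clock() and keep per-CPU totals:

#include <linux/percpu.h>
#include <linux/rcupdate.h>
#include <linux/sched/clock.h>

/* Hypothetical per-CPU accumulators; the names are illustrative only. */
DEFINE_PER_CPU(u64, idle_rcu_ns);	/* total ns spent in the wrapped call */
DEFINE_PER_CPU(u64, idle_rcu_cnt);	/* number of invocations */

static __always_inline void timed_rcu_idle_enter(void)
{
	u64 t0 = local_clock();	/* cheap and usable on the idle path */

	rcu_idle_enter();

	__this_cpu_add(idle_rcu_ns, local_clock() - t0);
	__this_cpu_inc(idle_rcu_cnt);
}

The same counter pattern would answer the invoke_rcu_core() frequency
question; alternatively, with CONFIG_FUNCTION_PROFILER=y the ftrace function
profiler reports per-function hit counts and times without patching anything.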
> > 
> I don't have details for these functions, but I can measure them if you want.
> Do you have a preferred workload for the measurement?

I do not have a specific workload in mind.  Could you please choose
one with very frequent transitions to and from idle?
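
For illustration only (not a workload anyone in this thread has settled on),
something as small as a per-CPU tight sleep loop drives very frequent idle
transitions, since the CPU enters idle on every sleep:

#include <time.h>

int main(void)
{
	/* Sleep ~10us per iteration; each sleep sends the CPU idle. */
	struct timespec ts = { .tv_sec = 0, .tv_nsec = 10 * 1000 };

	for (;;)
		nanosleep(&ts, NULL);
}

Pinned one instance per CPU (e.g. with taskset), this exercises the whole
machine; the effective period is bounded below by timer resolution, but the
idle entry/exit rate remains very high.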

> > Also, how big is this system?  If you can say, about what is the cost
> > of a cache miss to some other CPU's cache?
> > 
> The system has two NUMA nodes; nproc returns 104. Local memory access is
> ~100ns and remote memory access is ~200ns, as reported by mgen. Does this
> address your question?

Very much so, thank you!  This will allow me to correctly interpret
time spent in the above functions.
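
As an editorial aside (this is not mgen itself): latencies like the ~100ns
local / ~200ns remote figures above are typically estimated with a dependent
pointer chase, so each load must wait for the previous one and the hardware
prefetcher cannot hide the misses. A minimal sketch, with buffer placement
left to something like "numactl --membind=<node>":

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NPTRS  (64UL * 1024 * 1024 / sizeof(void *))	/* 64MB buffer */
#define NLOADS (100UL * 1000 * 1000)

int main(void)
{
	void **buf = malloc(NPTRS * sizeof(void *));
	size_t *order = malloc(NPTRS * sizeof(size_t));
	struct timespec t0, t1;
	size_t i;

	/* Shuffle the indices, then link them into one big cycle so
	 * every load depends on the result of the previous one. */
	for (i = 0; i < NPTRS; i++)
		order[i] = i;
	for (i = NPTRS - 1; i > 0; i--) {
		size_t j = rand() % (i + 1), tmp = order[i];
		order[i] = order[j];
		order[j] = tmp;
	}
	for (i = 0; i < NPTRS; i++)
		buf[order[i]] = &buf[order[(i + 1) % NPTRS]];

	void **p = &buf[order[0]];
	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (unsigned long n = 0; n < NLOADS; n++)
		p = (void **)*p;
	clock_gettime(CLOCK_MONOTONIC, &t1);

	double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
	/* Print p so the chase is not optimized away. */
	printf("%.1f ns/load (end %p)\n", ns / NLOADS, (void *)p);
	return 0;
}

Dividing elapsed time by the number of loads gives nanoseconds per miss,
directly comparable to the mgen numbers above.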

							Thanx, Paul
