Message-ID: <20190807001616.GA169551@google.com>
Date: Tue, 6 Aug 2019 20:16:16 -0400
From: Joel Fernandes <joel@...lfernandes.org>
To: "Paul E. McKenney" <paulmck@...ux.ibm.com>
Cc: rcu@...r.kernel.org, linux-kernel@...r.kernel.org,
mingo@...nel.org, jiangshanlai@...il.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, tglx@...utronix.de, peterz@...radead.org,
rostedt@...dmis.org, dhowells@...hat.com, edumazet@...gle.com,
fweisbec@...il.com, oleg@...hat.com
Subject: Re: [PATCH RFC tip/core/rcu 02/14] rcu/nocb: Add bypass callback
queueing
On Tue, Aug 06, 2019 at 08:03:13PM -0400, Joel Fernandes wrote:
> On Fri, Aug 02, 2019 at 08:14:49AM -0700, Paul E. McKenney wrote:
> > Use of the rcu_data structure's segmented ->cblist for no-CBs CPUs
> > takes advantage of unrelated grace periods, thus reducing the memory
> > footprint in the face of floods of call_rcu() invocations. However,
> > the ->cblist field is a more-complex rcu_segcblist structure which must
> > be protected via locking. Even though there are only three entities
> > which can acquire this lock (the CPU invoking call_rcu(), the no-CBs
> > grace-period kthread, and the no-CBs callbacks kthread), the contention
> > on this lock is excessive under heavy stress.
> >
> > This commit therefore greatly reduces contention by provisioning
> > an rcu_cblist structure field named ->nocb_bypass within the
> > rcu_data structure. Each no-CBs CPU is permitted only a limited
> > number of enqueues onto the ->cblist per jiffy, controlled by a new
> > nocb_nobypass_lim_per_jiffy kernel boot parameter that defaults to
> > about 16 enqueues per millisecond (16 * 1000 / HZ). When that limit is
> > exceeded, the CPU instead enqueues onto the new ->nocb_bypass.
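> > The per-jiffy limit can be modeled outside the kernel. The following
> > is a minimal user-space sketch of the enqueue decision only (assuming
> > HZ=1000, so the default limit is 16 per jiffy); the struct and
> > function names are simplified stand-ins for the patch's fields, not
> > the actual kernel code:
> >
> > ```c
> > #include <assert.h>
> > #include <stdbool.h>
> >
> > #define NOBYPASS_LIM_PER_JIFFY 16   /* ~16 * 1000 / HZ with HZ=1000 */
> >
> > /* Simplified stand-ins for rcu_data's ->nocb_nobypass_last and
> >  * ->nocb_nobypass_count. */
> > struct cpu_state {
> > 	unsigned long nobypass_last; /* jiffy of last direct enqueue */
> > 	int nobypass_count;          /* direct enqueues in that jiffy */
> > };
> >
> > /* Return true if this call_rcu() may enqueue directly onto ->cblist,
> >  * false if it must be diverted to the ->nocb_bypass list. */
> > static bool enqueue_direct(struct cpu_state *st, unsigned long now_jiffies)
> > {
> > 	if (st->nobypass_last != now_jiffies) {
> > 		st->nobypass_last = now_jiffies; /* new jiffy: reset budget */
> > 		st->nobypass_count = 0;
> > 	}
> > 	if (st->nobypass_count < NOBYPASS_LIM_PER_JIFFY) {
> > 		st->nobypass_count++;
> > 		return true;  /* under the limit: lock and use ->cblist */
> > 	}
> > 	return false;         /* over the limit: use the bypass list */
> > }
> >
> > int main(void)
> > {
> > 	struct cpu_state st = { 0, 0 };
> > 	int direct = 0;
> >
> > 	for (int i = 0; i < 100; i++)  /* flood within a single jiffy */
> > 		if (enqueue_direct(&st, 1))
> > 			direct++;
> > 	assert(direct == 16);          /* only 16 hit ->cblist directly */
> > 	assert(enqueue_direct(&st, 2)); /* next jiffy: budget resets */
> > 	return 0;
> > }
> > ```
> >
> > The point of the scheme is that the rate-limit check keeps the
> > contended ->cblist lock off the hot path during call_rcu() floods,
> > while the bypass list is flushed into ->cblist later.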
>
> Looks quite interesting. I am guessing the non-no-CBs (regular) enqueues
> don't need the same technique because both the enqueues and the callback
> execution happen on the same CPU..
>
> Still looking through the patch, but I understood the basic idea. Some nits below:
>
> [snip]
> > diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
> > index 2c3e9068671c..e4df86db8137 100644
> > --- a/kernel/rcu/tree.h
> > +++ b/kernel/rcu/tree.h
> > @@ -200,18 +200,26 @@ struct rcu_data {
> > atomic_t nocb_lock_contended; /* Contention experienced. */
> > int nocb_defer_wakeup; /* Defer wakeup of nocb_kthread. */
> > struct timer_list nocb_timer; /* Enforce finite deferral. */
> > + unsigned long nocb_gp_adv_time; /* Last call_rcu() CB adv (jiffies). */
> > +
> > + /* The following fields are used by call_rcu, hence own cacheline. */
> > + raw_spinlock_t nocb_bypass_lock ____cacheline_internodealigned_in_smp;
> > + struct rcu_cblist nocb_bypass; /* Lock-contention-bypass CB list. */
> > + unsigned long nocb_bypass_first; /* Time (jiffies) of first enqueue. */
> > + unsigned long nocb_nobypass_last; /* Last ->cblist enqueue (jiffies). */
> > + int nocb_nobypass_count; /* # ->cblist enqueues at ^^^ time. */
>
> Can these and the fields below be ifdef'd out under !CONFIG_RCU_NOCB_CPU,
> so as to keep the struct smaller for the benefit of systems that don't use NOCB?
>
>
> >
> > /* The following fields are used by GP kthread, hence own cacheline. */
> > raw_spinlock_t nocb_gp_lock ____cacheline_internodealigned_in_smp;
> > - bool nocb_gp_sleep;
> > - /* Is the nocb GP thread asleep? */
> > + struct timer_list nocb_bypass_timer; /* Force nocb_bypass flush. */
> > + bool nocb_gp_sleep; /* Is the nocb GP thread asleep? */
>
> And these too, I think.
Please ignore this comment; I missed that these fields were already ifdef'd
out, since the #ifdef did not appear in the diff context.
thanks!