Message-ID: <20220506182425.GC1790663@paulmck-ThinkPad-P17-Gen-1>
Date:   Fri, 6 May 2022 11:24:25 -0700
From:   "Paul E. McKenney" <paulmck@...nel.org>
To:     Uladzislau Rezki <urezki@...il.com>
Cc:     LKML <linux-kernel@...r.kernel.org>, RCU <rcu@...r.kernel.org>,
        Frederic Weisbecker <frederic@...nel.org>,
        Neeraj Upadhyay <neeraj.iitr10@...il.com>,
        Joel Fernandes <joel@...lfernandes.org>,
        Oleksiy Avramchenko <oleksiy.avramchenko@...y.com>,
        bigeasy@...utronix.de
Subject: Re: [PATCH] rcu/nocb: Add an option to ON/OFF an offloading from RT
 context

On Fri, May 06, 2022 at 06:22:26PM +0200, Uladzislau Rezki wrote:
> > On Thu, May 05, 2022 at 12:16:41PM +0200, Uladzislau Rezki (Sony) wrote:
> > > Introduce an RCU_NOCB_CPU_CB_BOOST kernel option so a user can
> > > decide whether offloading has to be done in a high-prio context or
> > > not. Please note that this option depends on the RCU_NOCB_CPU and
> > > RCU_BOOST parameters and is off by default.
> > > 
> > > This patch separates the boosting of preempted RCU readers, and of
> > > the kthreads directly responsible for driving expedited grace periods
> > > forward, from the choice of doing the offloading in a SCHED_FIFO or
> > > SCHED_OTHER context.
> > > 
> > > The main reason for such a split is that, for example on Android,
> > > there are some workloads which require expedited grace periods to
> > > complete quickly, whereas offloading in an RT context can lead to
> > > starvation and hogging a CPU for a long time, which is not acceptable
> > > in a latency-sensitive environment. For instance:
> > > 
> > > <snip>
> > > <...>-60 [006] d..1 2979.028717: rcu_batch_start: rcu_preempt CBs=34619 bl=270
> > > <snip>
> > > 
> > > invoking 34619 callbacks will take time, thus making other CFS
> > > tasks waiting in the run-queue starve due to such behaviour.
> > > 
> > > Signed-off-by: Uladzislau Rezki (Sony) <urezki@...il.com>
> > 
> > All good points!
> > 
> > Some questions and comments below.
> > 
> > Adding Sebastian on CC for his perspective.
> > 
> > 						Thanx, Paul
> > 
> > > ---
> > >  kernel/rcu/Kconfig     | 14 ++++++++++++++
> > >  kernel/rcu/tree.c      |  5 ++++-
> > >  kernel/rcu/tree_nocb.h |  3 ++-
> > >  3 files changed, 20 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/kernel/rcu/Kconfig b/kernel/rcu/Kconfig
> > > index 27aab870ae4c..074630b94902 100644
> > > --- a/kernel/rcu/Kconfig
> > > +++ b/kernel/rcu/Kconfig
> > > @@ -275,6 +275,20 @@ config RCU_NOCB_CPU_DEFAULT_ALL
> > >  	  Say Y here if you want offload all CPUs by default on boot.
> > >  	  Say N here if you are unsure.
> > >  
> > > +config RCU_NOCB_CPU_CB_BOOST
> > > +	bool "Perform offloading from real-time kthread"
> > > +	depends on RCU_NOCB_CPU && RCU_BOOST
> > > +	default n
> > 
> > I understand that you need this to default to "n" on your systems.
> > However, other groups already using callback offloading should not see
> > a sudden change.  I don't see an Android-specific defconfig file, but
> > perhaps something in drivers/android/Kconfig?
> > 
> > One easy way to make this work would be to invert the sense of this
> > Kconfig option ("RCU_NOCB_CB_NO_BOOST"?), continue having it default to
> > "n", but then select it somewhere in drivers/android/Kconfig.  But I
> > would not be surprised if there is a better way.
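For concreteness, a rough sketch of that inversion (the option name, prompt
text, and the drivers/android/Kconfig hook are all illustrative, not a
worked-out patch):

<snip>
config RCU_NOCB_CB_NO_BOOST
	bool "Keep RCU callback-offloading kthreads at SCHED_OTHER"
	depends on RCU_NOCB_CPU && RCU_BOOST
	default n

# and somewhere in drivers/android/Kconfig:
config ANDROID
	...
	select RCU_NOCB_CB_NO_BOOST if RCU_NOCB_CPU && RCU_BOOST
<snip>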
> > 
> It was done deliberately, i.e. off by default, because the user has to
> think before enabling it for their workloads. It is not a big issue for
> the kthreads which drive a grace period forward, because I find their
> runtime pretty short, whereas an offloading callback kthread can be stuck
> for a long time depending on the workload.
> 
> Also, I put it that way because initially those kthreads stayed at
> SCHED_NORMAL even though RCU_BOOST was set in the kernel config.
> 
> <snip>
> commit c8b16a65267e35ecc5621dbc81cbe7e5b0992fce
> Author: Alison Chaiken <achaiken@...ora.tech>
> Date:   Tue Jan 11 15:32:52 2022 -0800
> 
>     rcu: Elevate priority of offloaded callback threads
>     
>     When CONFIG_PREEMPT_RT=y, the rcutree.kthread_prio command-line
>     parameter signals initialization code to boost the priority of rcuc
>     callbacks to the designated value.  With the additional
>     CONFIG_RCU_NOCB_CPU=y configuration and an additional rcu_nocbs
>     command-line parameter, the callbacks on the listed cores are
>     offloaded to new rcuop kthreads that are not pinned to the cores whose
>     post-grace-period work is performed.  While the rcuop kthreads perform
>     the same function as the rcuc kthreads they offload, the kthread_prio
>     parameter only boosts the priority of the rcuc kthreads.  Fix this
>     inconsistency by elevating rcuop kthreads to the same priority as the rcuc
>     kthreads.
>     
>     Signed-off-by: Alison Chaiken <achaiken@...ora.tech>
>     Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
> <snip>
> 
> I doubt that it is needed for CONFIG_PREEMPT_RT=y. The reason, as I
> mentioned above, is that it is a source of extra latency. That is why I
> have made it inactive by default.
> 
> Any thoughts?

My first thought is that Alison does real RT work.  Let's please therefore
avoid assuming that she doesn't know what she is doing.  ;-)

One thing that she knows is that RT workloads usually run the most
latency-sensitive parts of their application at far higher priority
than they do the rcuo[ps] kthreads.  This means that they do not have
the same issues with these kthreads that you see.
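As an illustration (the priority values and application name here are made
up): with RCU_BOOST the rcuo[ps] kthreads default to rcutree.kthread_prio=1,
that is SCHED_FIFO priority 1, while the latency-critical parts of the
application run far above them, for example:

<snip>
# rcuop kthreads at FIFO 1 by default; run the critical app well above them
chrt -f 80 ./latency_critical_app
<snip>

With that setup the rcuo[ps] kthreads cannot preempt or starve those
application threads, no matter how many callbacks they have queued.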

> > > +	help
> > > +	  Use this option to offload callbacks from a SCHED_FIFO context
> > > +	  to make their processing faster. A side effect of this approach
> > > +	  is extra latency, especially for SCHED_OTHER tasks, which cannot
> > > +	  preempt an offloading kthread. That latency depends on the
> > > +	  number of callbacks to be invoked.
> > > +
> > > +	  Say Y here if you want to set RT priority for offloading kthreads.
> > > +	  Say N here if you are unsure.
> > > +
> > >  config TASKS_TRACE_RCU_READ_MB
> > >  	bool "Tasks Trace RCU readers use memory barriers in user and idle"
> > >  	depends on RCU_EXPERT && TASKS_TRACE_RCU
> > > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > > index 9dc4c4e82db6..d769a15bc0e3 100644
> > > --- a/kernel/rcu/tree.c
> > > +++ b/kernel/rcu/tree.c
> > > @@ -154,7 +154,10 @@ static void sync_sched_exp_online_cleanup(int cpu);
> > >  static void check_cb_ovld_locked(struct rcu_data *rdp, struct rcu_node *rnp);
> > >  static bool rcu_rdp_is_offloaded(struct rcu_data *rdp);
> > >  
> > > -/* rcuc/rcub/rcuop kthread realtime priority */
> > > +/*
> > > + * rcuc/rcub/rcuop kthread realtime priority. The former
> > > + * depends on if CONFIG_RCU_NOCB_CPU_CB_BOOST is set.
> > 
> > Aren't the rcuo[ps] kthreads controlled by the RCU_NOCB_CPU_CB_BOOST
> > Kconfig option?  (As opposed to the "former", which is "rcuc".)
> > 
> CONFIG_RCU_NOCB_CPU_CB_BOOST controls only the last one, that is the
> rcuo CB kthread with the "rcuo%c/%d" name. Sorry, it is not the "former",
> it is the last in the rcuc/rcub/rcuop sequence. It was a typo :)

I do know that feeling!  Absolutely not a problem, please just fix it
in the next version.

> > > + */
> > >  static int kthread_prio = IS_ENABLED(CONFIG_RCU_BOOST) ? 1 : 0;
> > >  module_param(kthread_prio, int, 0444);
> > >  
> > > diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> > > index 60cc92cc6655..a2823be9b1d0 100644
> > > --- a/kernel/rcu/tree_nocb.h
> > > +++ b/kernel/rcu/tree_nocb.h
> > > @@ -1315,8 +1315,9 @@ static void rcu_spawn_cpu_nocb_kthread(int cpu)
> > >  	if (WARN_ONCE(IS_ERR(t), "%s: Could not start rcuo CB kthread, OOM is now expected behavior\n", __func__))
> > >  		goto end;
> > >  
> > > -	if (kthread_prio)
> > > +	if (IS_ENABLED(CONFIG_RCU_NOCB_CPU_CB_BOOST))
> > 
> > Don't we need both non-zero kthread_prio and the proper setting of the
> > new Kconfig option before we run it at SCHED_FIFO?
> > 
> > Yes, we could rely on sched_setscheduler_nocheck() erroring out in
> > that case, but that sounds like an accident waiting to happen.
> > 
> As far as I can see that is odd, because "kthread_prio" is verified:
> there is a sanity check that ensures the value is correct for the
> SCHED_FIFO case and does some adjustment if not. There is
> sanitize_kthread_prio() that does all the tricks.

Agreed, and like I said, we could rely on sched_setscheduler_nocheck()
erroring out in that case.  But people do sometimes turn error cases
into some other functionality.  Keeping the check of kthread_prio makes
it clear to people reading the code what our intent is and also avoids
strange breakage should someone find a use for SCHED_FIFO priority zero.

So please put the check of kthread_prio back in for the next version.
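Something along these lines (just a sketch of the intent, assuming the sp
that rcu_spawn_cpu_nocb_kthread() already sets up from kthread_prio):

<snip>
	if (IS_ENABLED(CONFIG_RCU_NOCB_CPU_CB_BOOST) && kthread_prio)
		sched_setscheduler_nocheck(t, SCHED_FIFO, &sp);
<snip>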

> Looking at the kthread_prio variable: if it is set, all the code that
> takes it into account switches to the SCHED_FIFO class. Maybe rename it
> to something like kthread_rt_prio? It might be a bad idea though, because
> of existing dependencies in distros and so on :)

Where were you when the kthread_prio patch was first submitted?  ;-)

But agreed, last I checked there were some tens of billions of Linux
kernel instances running out there.  If such a change affected only
0.1% of that total, we could be ruining tens of millions of systems'
days with such a name change.  There would thus need to be a very good
reason to change the name, and we don't have one.

							Thanx, Paul
