lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <a3966480-5c2d-42be-96c2-3ac88ecc5963@paulmck-laptop>
Date:   Tue, 14 Mar 2023 06:49:17 -0700
From:   "Paul E. McKenney" <paulmck@...nel.org>
To:     Uladzislau Rezki <urezki@...il.com>
Cc:     Joel Fernandes <joel@...lfernandes.org>,
        Frederic Weisbecker <frederic@...nel.org>,
        linux-kernel@...r.kernel.org, Qiuxu Zhuo <qiuxu.zhuo@...el.com>,
        Lai Jiangshan <jiangshanlai@...il.com>,
        linux-doc@...r.kernel.org, rcu@...r.kernel.org
Subject: Re: [PATCH v3] rcu: Add a minimum time for marking boot as completed

On Tue, Mar 14, 2023 at 12:16:51PM +0100, Uladzislau Rezki wrote:
> On Mon, Mar 13, 2023 at 11:56:34AM -0700, Paul E. McKenney wrote:
> > On Mon, Mar 13, 2023 at 07:12:07PM +0100, Uladzislau Rezki wrote:
> > > On Mon, Mar 13, 2023 at 11:49:58AM -0400, Joel Fernandes wrote:
> > > > On Mon, Mar 13, 2023 at 11:32 AM Uladzislau Rezki <urezki@...il.com> wrote:
> > > > > On Mon, Mar 13, 2023 at 06:58:30AM -0700, Joel Fernandes wrote:
> > > > > > > On Mar 13, 2023, at 2:51 AM, Uladzislau Rezki <urezki@...il.com> wrote:
> > > > > > >
> > > > > > > On Fri, Mar 10, 2023 at 10:24:34PM -0800, Paul E. McKenney wrote:
> > > > > > >>> On Fri, Mar 10, 2023 at 09:55:02AM +0100, Uladzislau Rezki wrote:
> > > > > > >>> On Thu, Mar 09, 2023 at 10:10:56PM +0000, Joel Fernandes wrote:
> > > > > > >>>> On Thu, Mar 09, 2023 at 01:57:42PM +0100, Uladzislau Rezki wrote:
> > > > > > >>>> [..]
> > > > > > >>>>>>>>>> See this commit:
> > > > > > >>>>>>>>>>
> > > > > > >>>>>>>>>> 3705b88db0d7cc ("rcu: Add a module parameter to force use of
> > > > > > >>>>>>>>>> expedited RCU primitives")
> > > > > > >>>>>>>>>>
> > > > > > >>>>>>>>>> Antti provided this commit precisely in order to allow Android
> > > > > > >>>>>>>>>> devices to expedite the boot process and to shut off the
> > > > > > >>>>>>>>>> expediting at a time of Android userspace's choosing.  So Android
> > > > > > >>>>>>>>>> has been making this work for about ten years, which strikes me
> > > > > > >>>>>>>>>> as an adequate proof of concept.  ;-)
> > > > > > >>>>>>>>>
> > > > > > >>>>>>>>> Thanks for the pointer. That's true. Looking at Android sources, I
> > > > > > >>>>>>>>> find that Android Mediatek devices at least are setting
> > > > > > >>>>>>>>> rcu_expedited to 1 at late stage of their userspace boot (which is
> > > > > > >>>>>>>>> weird, it should be set to 1 as early as possible), and
> > > > > > >>>>>>>>> interestingly I cannot find them resetting it back to 0!.  Maybe
> > > > > > >>>>>>>>> they set rcu_normal to 1? But I cannot find that either. Vlad? :P
> > > > > > >>>>>>>>
> > > > > > >>>>>>>> Interesting.  Though this is consistent with Antti's commit log,
> > > > > > >>>>>>>> where he talks about expediting grace periods but not unexpediting
> > > > > > >>>>>>>> them.
> > > > > > >>>>>>>>
> > > > > > >>>>>>> Do you think we need to unexpedite it? :))))
> > > > > > >>>>>>
> > > > > > >>>>>> Android runs on smallish systems, so quite possibly not!
> > > > > > >>>>>>
> > > > > > >>>>> We keep it enabled and never unexpedite it. The reason is a performance.  I
> > > > > > >>>>> have done some app-launch time analysis with enabling and disabling of it.
> > > > > > >>>>>
> > > > > > >>>>> An expedited case is much better when it comes to app launch time. It
> > > > > > >>>>> requires ~25% less time to run an app comparing with unexpedited variant.
> > > > > > >>>>> So we have a big gain here.
> > > > > > >>>>
> > > > > > >>>> Wow, that's huge. I wonder if you can dig deeper and find out why that is so
> > > > > > >>>> as the callbacks may need to be synchronize_rcu_expedited() then, as it could
> > > > > > >>>> be slowing down other usecases! I find it hard to believe, real-time
> > > > > > >>>> workloads will run better without those callbacks being always-expedited if
> > > > > > >>>> it actually gives back 25% in performance!
> > > > > > >>>>
> > > > > > >>> I can dig further, but on a high level i think there are some spots
> > > > > > >>> which show better performance if expedited is set. I mean synchronize_rcu()
> > > > > > >>> becomes as "less blocking a context" from a time point of view.
> > > > > > >>>
> > > > > > >>> The problem of a regular synchronize_rcu() is - it can trigger a big latency
> > > > > > >>> delays for a caller. For example for nocb case we do not know where in a list
> > > > > > >>> our callback is located and when it is invoked to unblock a caller.
> > > > > > >>
> > > > > > >> True, expedited RCU grace periods do not have this callback-invocation
> > > > > > >> delay that normal RCU does.
> > > > > > >>
> > > > > > >>> I have already mentioned somewhere. Probably it makes sense to directly wake-up
> > > > > > >>> callers from the GP kthread instead and not via nocb-kthread that invokes our callbacks
> > > > > > >>> one by one.
> > > > > > >>
> > > > > > >> Makes sense, but it is necessary to be careful.  Wakeups are not fast,
> > > > > > >> so making the RCU grace-period kthread do them all sequentially is not
> > > > > > >> a strategy to win.  For example, note that the next expedited grace
> > > > > > >> period can start before the previous expedited grace period has finished
> > > > > > >> its wakeups.
> > > > > > >>
> > > > > > > I hove done a small and quick prototype:
> > > > > > >
> > > > > > > <snip>
> > > > > > > diff --git a/include/linux/rcupdate_wait.h b/include/linux/rcupdate_wait.h
> > > > > > > index 699b938358bf..e1a4cca9a208 100644
> > > > > > > --- a/include/linux/rcupdate_wait.h
> > > > > > > +++ b/include/linux/rcupdate_wait.h
> > > > > > > @@ -9,6 +9,8 @@
> > > > > > > #include <linux/rcupdate.h>
> > > > > > > #include <linux/completion.h>
> > > > > > >
> > > > > > > +extern struct llist_head gp_wait_llist;
> > > > > > > +
> > > > > > > /*
> > > > > > >  * Structure allowing asynchronous waiting on RCU.
> > > > > > >  */
> > > > > > > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > > > > > > index ee27a03d7576..50b81ca54104 100644
> > > > > > > --- a/kernel/rcu/tree.c
> > > > > > > +++ b/kernel/rcu/tree.c
> > > > > > > @@ -113,6 +113,9 @@ int rcu_num_lvls __read_mostly = RCU_NUM_LVLS;
> > > > > > > int num_rcu_lvl[] = NUM_RCU_LVL_INIT;
> > > > > > > int rcu_num_nodes __read_mostly = NUM_RCU_NODES; /* Total # rcu_nodes in use. */
> > > > > > >
> > > > > > > +/* Waiters for a GP kthread. */
> > > > > > > +LLIST_HEAD(gp_wait_llist);
> > 
> > This being a single global will of course fail due to memory contention
> > on large systems.  So a patch that is ready for mainline must either
> > have per-rcu_node-structure lists or similar.
> > 
> I agree. This is a prototype and the aim is a proof of concept :)
> On bigger systems gp can starve if it wake-ups a lot of users.
> 
> At lease i see that a camera-app improves in terms of launch time.
> It is around 12% percent.

Understood and agreed, lack of scalablity is OK for a prototype
for testing purposes.

> > > > > > > /*
> > > > > > >  * The rcu_scheduler_active variable is initialized to the value
> > > > > > >  * RCU_SCHEDULER_INACTIVE and transitions RCU_SCHEDULER_INIT just before the
> > > > > > > @@ -1776,6 +1779,14 @@ static noinline void rcu_gp_cleanup(void)
> > > > > > >                on_each_cpu(rcu_strict_gp_boundary, NULL, 0);
> > > > > > > }
> > > > > > >
> > > > > > > +static void rcu_notify_gp_end(struct llist_node *llist)
> > 
> > And calling this directly from rcu_gp_kthread() is a no-go for large
> > systems because the large number of wakeups will CPU-bound that kthread.
> > Also, it would be better to invoke this from rcu_gp_cleanup().
> > 
> > One option would be to do the wakeups from a workqueue handler.
> > 
> > You might also want to have an array of lists indexed by the bottom few
> > bits of the RCU grace-period sequence number.  This would reduce the
> > number of spurious wakeups.
> > 
> > > > > > > +{
> > > > > > > +       struct llist_node *rcu, *next;
> > > > > > > +
> > > > > > > +       llist_for_each_safe(rcu, next, llist)
> > > > > > > +               complete(&((struct rcu_synchronize *) rcu)->completion);
> > 
> > If you don't eliminate spurious wakeups, it is necessary to do something
> > like checking poll_state_synchronize_rcu() reject those wakeups.
> > 
> OK.
> 
> I will come up with some data and figures soon.

Sounds good!

							Thanx, Paul

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ