Message-ID: <ZA8WxjUL0eUPtVy8@pc636>
Date:   Mon, 13 Mar 2023 13:27:50 +0100
From:   Uladzislau Rezki <urezki@...il.com>
To:     "Paul E. McKenney" <paulmck@...nel.org>
Cc:     "Paul E. McKenney" <paulmck@...nel.org>,
        Joel Fernandes <joel@...lfernandes.org>,
        Frederic Weisbecker <frederic@...nel.org>,
        linux-kernel@...r.kernel.org, Qiuxu Zhuo <qiuxu.zhuo@...el.com>,
        Lai Jiangshan <jiangshanlai@...il.com>,
        linux-doc@...r.kernel.org, rcu@...r.kernel.org
Subject: Re: [PATCH v3] rcu: Add a minimum time for marking boot as completed

On Mon, Mar 13, 2023 at 10:51:39AM +0100, Uladzislau Rezki wrote:
> On Fri, Mar 10, 2023 at 10:24:34PM -0800, Paul E. McKenney wrote:
> > On Fri, Mar 10, 2023 at 09:55:02AM +0100, Uladzislau Rezki wrote:
> > > On Thu, Mar 09, 2023 at 10:10:56PM +0000, Joel Fernandes wrote:
> > > > On Thu, Mar 09, 2023 at 01:57:42PM +0100, Uladzislau Rezki wrote:
> > > > [..]
> > > > > > > > > > See this commit:
> > > > > > > > > > 
> > > > > > > > > > 3705b88db0d7cc ("rcu: Add a module parameter to force use of
> > > > > > > > > > expedited RCU primitives")
> > > > > > > > > > 
> > > > > > > > > > Antti provided this commit precisely in order to allow Android
> > > > > > > > > > devices to expedite the boot process and to shut off the
> > > > > > > > > > expediting at a time of Android userspace's choosing.  So Android
> > > > > > > > > > has been making this work for about ten years, which strikes me
> > > > > > > > > > as an adequate proof of concept.  ;-)
> > > > > > > > > 
> > > > > > > > > Thanks for the pointer. That's true. Looking at Android sources, I
> > > > > > > > > find that Android Mediatek devices at least are setting
> > > > > > > > > rcu_expedited to 1 at a late stage of their userspace boot (which is
> > > > > > > > > weird, it should be set to 1 as early as possible), and
> > > > > > > > > interestingly I cannot find them resetting it back to 0!  Maybe
> > > > > > > > > they set rcu_normal to 1? But I cannot find that either. Vlad? :P
> > > > > > > > 
> > > > > > > > Interesting.  Though this is consistent with Antti's commit log,
> > > > > > > > where he talks about expediting grace periods but not unexpediting
> > > > > > > > them.
> > > > > > > > 
> > > > > > > Do you think we need to unexpedite it? :))))
> > > > > > 
> > > > > > Android runs on smallish systems, so quite possibly not!
> > > > > > 
> > > > > We keep it enabled and never unexpedite it. The reason is performance.  I
> > > > > have done some app-launch-time analysis with it enabled and disabled.
> > > > > 
> > > > > The expedited case is much better when it comes to app launch time: an app
> > > > > takes ~25% less time to launch compared with the unexpedited variant.
> > > > > So we have a big gain here.
> > > > 
> > > > Wow, that's huge. I wonder if you can dig deeper and find out why that is so,
> > > > as those calls may need to become synchronize_rcu_expedited() then, since they
> > > > could be slowing down other use cases! I find it hard to believe that real-time
> > > > workloads will run better without those calls being always-expedited if
> > > > it actually gives back 25% in performance!
> > > > 
> > > I can dig further, but at a high level I think there are some spots
> > > which show better performance if expedited is set. I mean synchronize_rcu()
> > > blocks the calling context for a shorter time.
> > > 
> > > The problem with a regular synchronize_rcu() is that it can cause big latency
> > > delays for a caller. For example, in the nocb case we do not know where in the
> > > list our callback is located, nor when it will be invoked to unblock the caller.
> > 
> > True, expedited RCU grace periods do not have this callback-invocation
> > delay that normal RCU does.
> > 
> > > I have already mentioned this somewhere: it probably makes sense to wake up
> > > callers directly from the GP kthread instead of via the nocb kthread that invokes
> > > our callbacks one by one.
> > 
> > Makes sense, but it is necessary to be careful.  Wakeups are not fast,
> > so making the RCU grace-period kthread do them all sequentially is not
> > a strategy to win.  For example, note that the next expedited grace
> > period can start before the previous expedited grace period has finished
> > its wakeups.
> > 
> I have done a small and quick prototype:
> 
> <snip>
> diff --git a/include/linux/rcupdate_wait.h b/include/linux/rcupdate_wait.h
> index 699b938358bf..e1a4cca9a208 100644
> --- a/include/linux/rcupdate_wait.h
> +++ b/include/linux/rcupdate_wait.h
> @@ -9,6 +9,8 @@
>  #include <linux/rcupdate.h>
>  #include <linux/completion.h>
> 
> +extern struct llist_head gp_wait_llist;
> +
>  /*
>   * Structure allowing asynchronous waiting on RCU.
>   */
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index ee27a03d7576..50b81ca54104 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -113,6 +113,9 @@ int rcu_num_lvls __read_mostly = RCU_NUM_LVLS;
>  int num_rcu_lvl[] = NUM_RCU_LVL_INIT;
>  int rcu_num_nodes __read_mostly = NUM_RCU_NODES; /* Total # rcu_nodes in use. */
> 
> +/* Waiters for a GP kthread. */
> +LLIST_HEAD(gp_wait_llist);
> +
>  /*
>   * The rcu_scheduler_active variable is initialized to the value
>   * RCU_SCHEDULER_INACTIVE and transitions RCU_SCHEDULER_INIT just before the
> @@ -1776,6 +1779,14 @@ static noinline void rcu_gp_cleanup(void)
>                 on_each_cpu(rcu_strict_gp_boundary, NULL, 0);
>  }
> 
> +static void rcu_notify_gp_end(struct llist_node *llist)
> +{
> +       struct llist_node *rcu, *next;
> +
> +       llist_for_each_safe(rcu, next, llist)
> +               complete(&((struct rcu_synchronize *) rcu)->completion);
> +}
> +
>  /*
>   * Body of kthread that handles grace periods.
>   */
> @@ -1811,6 +1822,9 @@ static int __noreturn rcu_gp_kthread(void *unused)
>                 WRITE_ONCE(rcu_state.gp_state, RCU_GP_CLEANUP);
>                 rcu_gp_cleanup();
>                 WRITE_ONCE(rcu_state.gp_state, RCU_GP_CLEANED);
> +
> +               /* Wake up all users. */
> +               rcu_notify_gp_end(llist_del_all(&gp_wait_llist));
>         }
>  }
> 
> diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
> index 19bf6fa3ee6a..1de7c328a3e5 100644
> --- a/kernel/rcu/update.c
> +++ b/kernel/rcu/update.c
> @@ -426,7 +426,10 @@ void __wait_rcu_gp(bool checktiny, int n, call_rcu_func_t *crcu_array,
>                 if (j == i) {
>                         init_rcu_head_on_stack(&rs_array[i].head);
>                         init_completion(&rs_array[i].completion);
> -                       (crcu_array[i])(&rs_array[i].head, wakeme_after_rcu);
> +
> +                       /* Kick a grace period if needed. */
> +                       (void) start_poll_synchronize_rcu();
> +                       llist_add((struct llist_node *) &rs_array[i].head, &gp_wait_llist);
>                 }
>         }
> <snip>
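
For reference, the per-caller wakeup path that this prototype bypasses looks
roughly like this in the stock kernel (simplified from
include/linux/rcupdate_wait.h and kernel/rcu/update.c). In the nocb case
wakeme_after_rcu() only runs once the offloaded callback kthread reaches this
entry in its list, which is the latency source discussed above:

<snip>
struct rcu_synchronize {
        struct rcu_head head;
        struct completion completion;
};

/*
 * Ordinary RCU callback queued by __wait_rcu_gp(); the caller sleeps on
 * ->completion until the callback machinery eventually invokes this.
 */
void wakeme_after_rcu(struct rcu_head *head)
{
        struct rcu_synchronize *rcu;

        rcu = container_of(head, struct rcu_synchronize, head);
        complete(&rcu->completion);
}
<snip>

The prototype instead parks each waiter on gp_wait_llist and completes all of
them in one pass from the GP kthread. The (struct llist_node *) cast of
&rs_array[i].head presumably relies on struct rcu_head beginning with a ->next
pointer that the llist code can reuse while the head is not queued as a callback.
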
> 
> and did some experiments to measure and compare performance. The test case is:
> 
> thread_X:
>   synchronize_rcu();
>   kfree(ptr);
> 
> Below are the results from 10 parallel workers, each running the mentioned test
> scenario 1000 times:
> 
> # default(NOCB)
> [   29.322944] Summary: kvfree_rcu_1_arg_vmalloc_test loops: 1000 avg: 17286604 usec
> [   29.325759] All test took worker0=63964052068 cycles
> [   29.327255] Summary: kvfree_rcu_1_arg_vmalloc_test loops: 1000 avg: 23414575 usec
> [   29.329974] All test took worker1=86638822563 cycles
> [   29.331460] Summary: kvfree_rcu_1_arg_vmalloc_test loops: 1000 avg: 23357988 usec
> [   29.334205] All test took worker2=86429439193 cycles
> [   29.350808] Summary: kvfree_rcu_1_arg_vmalloc_test loops: 1000 avg: 17174001 usec
> [   29.353553] All test took worker3=63547397954 cycles
> [   29.355039] Summary: kvfree_rcu_1_arg_vmalloc_test loops: 1000 avg: 17141904 usec
> [   29.357770] All test took worker4=63428630877 cycles
> [   29.374831] Summary: kvfree_rcu_1_arg_vmalloc_test loops: 1000 avg: 23397952 usec
> [   29.377577] All test took worker5=86577316353 cycles
> [   29.398809] Summary: kvfree_rcu_1_arg_vmalloc_test loops: 1000 avg: 17142038 usec
> [   29.401549] All test took worker6=63429124938 cycles
> [   29.414828] Summary: kvfree_rcu_1_arg_vmalloc_test loops: 1000 avg: 17158248 usec
> [   29.417574] All test took worker7=63489107118 cycles
> [   29.438811] Summary: kvfree_rcu_1_arg_vmalloc_test loops: 1000 avg: 18102109 usec
> [   29.441550] All test took worker8=66981588881 cycles
> [   29.462826] Summary: kvfree_rcu_1_arg_vmalloc_test loops: 1000 avg: 23446042 usec
> [   29.465561] All test took worker9=86755258455 cycles
> 
> # patch(NOCB)
> [   14.720986] Summary: kvfree_rcu_1_arg_vmalloc_test loops: 1000 avg: 8837883 usec
> [   14.723753] All test took worker0=32702015768 cycles
> [   14.740386] Summary: kvfree_rcu_1_arg_vmalloc_test loops: 1000 avg: 8837750 usec
> [   14.743076] All test took worker1=32701525814 cycles
> [   14.760350] Summary: kvfree_rcu_1_arg_vmalloc_test loops: 1000 avg: 8837734 usec
> [   14.763036] All test took worker2=32701466281 cycles
> [   14.780369] Summary: kvfree_rcu_1_arg_vmalloc_test loops: 1000 avg: 8837707 usec
> [   14.783057] All test took worker3=32701364901 cycles
> [   14.800352] Summary: kvfree_rcu_1_arg_vmalloc_test loops: 1000 avg: 8837730 usec
> [   14.803041] All test took worker4=32701449927 cycles
> [   14.820355] Summary: kvfree_rcu_1_arg_vmalloc_test loops: 1000 avg: 8837724 usec
> [   14.823048] All test took worker5=32701428134 cycles
> [   14.840359] Summary: kvfree_rcu_1_arg_vmalloc_test loops: 1000 avg: 8837705 usec
> [   14.843052] All test took worker6=32701356465 cycles
> [   14.860322] Summary: kvfree_rcu_1_arg_vmalloc_test loops: 1000 avg: 8837742 usec
> [   14.863005] All test took worker7=32701494475 cycles
> [   14.880363] Summary: kvfree_rcu_1_arg_vmalloc_test loops: 1000 avg: 8837750 usec
> [   14.883081] All test took worker8=32701525074 cycles
> [   14.900362] Summary: kvfree_rcu_1_arg_vmalloc_test loops: 1000 avg: 8837918 usec
> [   14.903065] All test took worker9=32702145379 cycles
> 
> --
> Uladzislau Rezki
A quick app launch test. This is a camera app on our device:

urezki@...36:~/data/yoshino_bin/scripts$ ./test-cam.sh
629
572
652
622
642
650
613
654
607
urezki@...36:~/data/yoshino_bin/scripts$ adb shell
XQ-DQ54:/ $ su
XQ-DQ54:/ # echo 1 > /sy
sys/          system/       system_dlkm/  system_ext/
XQ-DQ54:/ # echo 1 > /sys/kernel/rc
rcu_expedited       rcu_improve_normal  rcu_normal
XQ-DQ54:/ # echo 1 > /sys/kernel/rcu_improve_normal
XQ-DQ54:/ # exit
XQ-DQ54:/ $ exit
urezki@...36:~/data/yoshino_bin/scripts$ ./test-cam.sh
533
549
563
537
540
563
531
549
548
urezki@...36:~/data/yoshino_bin/scripts$

The numbers are app launch times in milliseconds: roughly 627 ms on average before
versus 546 ms after enabling rcu_improve_normal, i.e. about a 13% improvement.

--
Uladzislau Rezki
