lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <ZZWvbkhFA_Mbw_wH@pc636>
Date: Wed, 3 Jan 2024 20:03:10 +0100
From: Uladzislau Rezki <urezki@...il.com>
To: "Paul E. McKenney" <paulmck@...nel.org>
Cc: "Paul E. McKenney" <paulmck@...nel.org>, RCU <rcu@...r.kernel.org>,
	Neeraj upadhyay <Neeraj.Upadhyay@....com>,
	Boqun Feng <boqun.feng@...il.com>, Hillf Danton <hdanton@...a.com>,
	Joel Fernandes <joel@...lfernandes.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Oleksiy Avramchenko <oleksiy.avramchenko@...y.com>,
	Frederic Weisbecker <frederic@...nel.org>
Subject: Re: [PATCH v3 4/7] rcu: Improve handling of synchronize_rcu() users

On Wed, Jan 03, 2024 at 08:02:00PM +0100, Uladzislau Rezki wrote:
> On Wed, Jan 03, 2024 at 09:56:42AM -0800, Paul E. McKenney wrote:
> > On Wed, Jan 03, 2024 at 06:35:20PM +0100, Uladzislau Rezki wrote:
> > > On Wed, Jan 03, 2024 at 06:47:30AM -0800, Paul E. McKenney wrote:
> > > > On Wed, Jan 03, 2024 at 02:16:00PM +0100, Uladzislau Rezki wrote:
> > > > > On Tue, Jan 02, 2024 at 11:25:13AM -0800, Paul E. McKenney wrote:
> > > > > > On Tue, Jan 02, 2024 at 01:52:26PM +0100, Uladzislau Rezki wrote:
> > > > > > > Hello, Paul!
> > > > > > > 
> > > > > > > Sorry for late answer, it is because of holidays :)
> > > > > > > 
> > > > > > > > > > > The problem is that, we are limited in number of "wait-heads" which we
> > > > > > > > > > > add as a marker node for this/current grace period. If there are more clients
> > > > > > > > > > > and there is no a wait-head available it means that a system, the deferred
> > > > > > > > > > > kworker, is slow in processing callbacks, thus all wait-nodes are in use.
> > > > > > > > > > > 
> > > > > > > > > > > That is why we need an extra grace period. Basically to repeat our try one
> > > > > > > > > > > more time, i.e. it might be that a current grace period is not able to handle
> > > > > > > > > > > users due to the fact that a system is doing really slow, but this is rather
> > > > > > > > > > > a corner case and is not a problem.
> > > > > > > > > > 
> > > > > > > > > > But in that case, the real issue is not the need for an extra grace
> > > > > > > > > > period, but rather the need for the wakeup processing to happen, correct?
> > > > > > > > > > Or am I missing something subtle here?
> > > > > > > > > > 
> > > > > > > > > Basically, yes. If we had a spare dummy-node we could process the users
> > > > > > > > > by the current GP(no need in extra). Why we may not have it - it is because
> > > > > > > > > like you pointed:
> > > > > > > > > 
> > > > > > > > > - wake-up issue, i.e. wake-up time + when we are on_cpu;
> > > > > > > > > - slow list process. For example priority. The kworker is not
> > > > > > > > >   given enough CPU time to do the progress, thus "dummy-nodes"
> > > > > > > > >   are not released in time for reuse.
> > > > > > > > > 
> > > > > > > > > Therefore, en extra GP is requested if there is a high flow of
> > > > > > > > > synchronize_rcu() users and kworker is not able to do a progress
> > > > > > > > > in time.
> > > > > > > > > 
> > > > > > > > > For example 60K+ parallel synchronize_rcu() users will trigger it.
> > > > > > > > 
> > > > > > > > OK, but what bad thing would happen if that was moved to precede the
> > > > > > > > rcu_seq_start(&rcu_state.gp_seq)?  That way, the requested grace period
> > > > > > > > would be the same as the one that is just now starting.
> > > > > > > > 
> > > > > > > > Something like this?
> > > > > > > > 
> > > > > > > > 	start_new_poll = rcu_sr_normal_gp_init();
> > > > > > > > 
> > > > > > > > 	/* Record GP times before starting GP, hence rcu_seq_start(). */
> > > > > > > > 	rcu_seq_start(&rcu_state.gp_seq);
> > > > > > > > 	ASSERT_EXCLUSIVE_WRITER(rcu_state.gp_seq);
> > > > > > > >
> > > > > > > I had a concern about the case when rcu_sr_normal_gp_init() handles what
> > > > > > > we currently have, in terms of requests. Right after that there is/are
> > > > > > > extra sync requests which invoke the start_poll_synchronize_rcu() but
> > > > > > > since a GP has been requested before it will not request an extra one. So
> > > > > > > "last" incoming users might not be processed.
> > > > > > > 
> > > > > > > That is why i have placed the rcu_sr_normal_gp_init() after a gp_seq is
> > > > > > > updated.
> > > > > > > 
> > > > > > > I can miss something, so please comment. Apart of that we can move it
> > > > > > > as you proposed.
> > > > > > 
> > > > > > Couldn't that possibility be handled by a check in rcu_gp_cleanup()?
> > > > > > 
> > > > > It is controlled by the caller anyway, i.e. if a new GP is needed.
> > > > > 
> > > > > I am not 100% sure it is as straightforward as it could look like to
> > > > > handle it in the rcu_sr_normal_gp_cleaup() function. At least i see
> > > > > that we need to access to the first element of llist and find out if
> > > > > it is a wait-dummy-head or not. If not we know there are extra incoming
> > > > > calls.
> > > > > 
> > > > > So that way requires extra calling of start_poll_synchronize_rcu().
> > > > 
> > > > If this is invoked early enough in rcu_gp_cleanup(), all that needs to
> > > > happen is to set the need_gp flag.  Plus you can count the number of
> > > > requests, and snapshot that number at rcu_gp_init() time and check to
> > > > see if it changed at rcu_gp_cleanup() time.  Later on, this could be
> > > > used to reduce the number of wakeups, correct?
> > > > 
> > > You mean instead of waking-up a gp-kthread just continue processing of
> > > new users if they are exist? If so, i think, we can implement it as separate
> > > patches.
> > 
> > Agreed, this is an optimization, and thus should be a separate patch.
> > 
> > > > > I can add a comment about your concern and we can find the best approach
> > > > > later, if it is OK with you!
> > > > 
> > > > I agree that this should be added via a later patch, though I have not
> > > > yet given up on the possibility that this patch might be simple enough
> > > > to be later in this same series.
> > > > 
> > > Maybe there is a small misunderstanding. Please note, the rcu_sr_normal_gp_init() 
> > > function does not request any new gp, i.e. our approach does not do any extra GP
> > > requests. It happens only if there are no any dummy-wait-head available as we
> > > discussed it earlier.
> > 
> > The start_poll_synchronize_rcu() added by your patch 4/7 will request
> > an additional grace period because it is invoked after rcu_seq_start()
> > is called, correct?  Or am I missing something subtle here?
> > 
> <snip>
> +       // New poll request after rnp unlock
> +       if (start_new_poll)
> +               (void) start_poll_synchronize_rcu();
> +
> <snip>
> 
> The "start_new_poll" is set to "true" only when _this_ GP is not able
> to handle anything and there are outstanding users. It happens when the
> rcu_sr_normal_gp_init() function was not able to insert a dummy separator
> to the llist, because there were no left dummy-nodes(fixed number of them)
> due to the fact that all of them are "in-use". The reason why there are no
> dummy-nodes is because of slow progress because it is done by dedicated
> kworker.
> 
> I can trigger it, i mean when we need an addition GP, start_new_pool is 1,
> only when i run 20 000 processes concurrently in a tight loop:
> 
> <snip>
> while (1)
>   synchronize_rcu();
> <snip>
> 
> in that scenario we start to ask for an addition GP because we are not up
> to speed, i.e. a system is slow in processing callbacks and we need some
> time until wait-node/nodes is/are released for reuse.
> 
> We need a next GP to move it forward, i.e. to repeat a try of attaching
> a dummy-node.
> 
Probably i should add a comment about it :)

--
Uladzislau Rezki

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ