Message-ID: <ZZQHCrGNwjooI4kU@pc636>
Date: Tue, 2 Jan 2024 13:52:26 +0100
From: Uladzislau Rezki <urezki@...il.com>
To: "Paul E. McKenney" <paulmck@...nel.org>
Cc: Uladzislau Rezki <urezki@...il.com>, RCU <rcu@...r.kernel.org>,
	Neeraj Upadhyay <Neeraj.Upadhyay@....com>,
	Boqun Feng <boqun.feng@...il.com>, Hillf Danton <hdanton@...a.com>,
	Joel Fernandes <joel@...lfernandes.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Oleksiy Avramchenko <oleksiy.avramchenko@...y.com>,
	Frederic Weisbecker <frederic@...nel.org>
Subject: Re: [PATCH v3 4/7] rcu: Improve handling of synchronize_rcu() users

Hello, Paul!

Sorry for the late answer; it is because of the holidays :)

> > > > The problem is that we are limited in the number of "wait-heads" that we
> > > > add as a marker node for the current grace period. If there are more clients
> > > > and there is no wait-head available, it means that the system, i.e. the
> > > > deferred kworker, is slow in processing callbacks, thus all wait-nodes are
> > > > in use.
> > > > 
> > > > That is why we need an extra grace period: basically to repeat our attempt
> > > > one more time. It might be that the current grace period is not able to
> > > > handle users because the system is running really slowly, but this is
> > > > rather a corner case and is not a problem.
> > > 
> > > But in that case, the real issue is not the need for an extra grace
> > > period, but rather the need for the wakeup processing to happen, correct?
> > > Or am I missing something subtle here?
> > > 
> > Basically, yes. If we had a spare dummy-node we could process the users
> > within the current GP (no need for an extra one). Why we may not have one
> > is because, as you pointed out:
> > 
> > - wake-up latency, i.e. wake-up time + the time until we are on_cpu;
> > - slow list processing, for example due to priority: the kworker is not
> >   given enough CPU time to make progress, thus "dummy-nodes" are not
> >   released in time for reuse.
> > 
> > Therefore, an extra GP is requested if there is a high flow of
> > synchronize_rcu() users and the kworker is not able to make progress
> > in time.
> > 
> > For example, 60K+ parallel synchronize_rcu() users will trigger it.
> 
> OK, but what bad thing would happen if that was moved to precede the
> rcu_seq_start(&rcu_state.gp_seq)?  That way, the requested grace period
> would be the same as the one that is just now starting.
> 
> Something like this?
> 
> 	start_new_poll = rcu_sr_normal_gp_init();
> 
> 	/* Record GP times before starting GP, hence rcu_seq_start(). */
> 	rcu_seq_start(&rcu_state.gp_seq);
> 	ASSERT_EXCLUSIVE_WRITER(rcu_state.gp_seq);
>
I had a concern about the case where rcu_sr_normal_gp_init() handles the
requests we currently have. Right after that, one or more extra sync
requests arrive and invoke start_poll_synchronize_rcu(), but since a GP
has already been requested, it will not request an extra one. So the
"last" incoming users might not be processed.

That is why I placed the rcu_sr_normal_gp_init() call after gp_seq is
updated.
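
That is, the ordering in this series is (simplified):

	rcu_seq_start(&rcu_state.gp_seq);
	ASSERT_EXCLUSIVE_WRITER(rcu_state.gp_seq);
	start_new_poll = rcu_sr_normal_gp_init();
	/* A user arriving after rcu_sr_normal_gp_init() now finds gp_seq
	 * already started, so its start_poll_synchronize_rcu() requests a
	 * follow-up GP that will pick up its wait-node. */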

I might be missing something, so please comment. Apart from that, we can
move it as you proposed.

--
Uladzislau Rezki
