Message-ID: <45a15103-0302-4e7d-b522-e17e8b8ac927@paulmck-laptop>
Date: Wed, 3 Jan 2024 06:47:30 -0800
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Uladzislau Rezki <urezki@...il.com>
Cc: RCU <rcu@...r.kernel.org>, Neeraj upadhyay <Neeraj.Upadhyay@....com>,
Boqun Feng <boqun.feng@...il.com>, Hillf Danton <hdanton@...a.com>,
Joel Fernandes <joel@...lfernandes.org>,
LKML <linux-kernel@...r.kernel.org>,
Oleksiy Avramchenko <oleksiy.avramchenko@...y.com>,
Frederic Weisbecker <frederic@...nel.org>
Subject: Re: [PATCH v3 4/7] rcu: Improve handling of synchronize_rcu() users
On Wed, Jan 03, 2024 at 02:16:00PM +0100, Uladzislau Rezki wrote:
> On Tue, Jan 02, 2024 at 11:25:13AM -0800, Paul E. McKenney wrote:
> > On Tue, Jan 02, 2024 at 01:52:26PM +0100, Uladzislau Rezki wrote:
> > > Hello, Paul!
> > >
> > > Sorry for the late answer, it is because of the holidays :)
> > >
> > > > > > > The problem is that we are limited in the number of "wait-heads"
> > > > > > > which we add as marker nodes for the current grace period. If there
> > > > > > > are more clients and no wait-head is available, it means that the
> > > > > > > system, i.e. the deferred kworker, is slow in processing callbacks,
> > > > > > > thus all wait-nodes are in use.
> > > > > > >
> > > > > > > That is why we need an extra grace period, basically to repeat our
> > > > > > > attempt one more time, i.e. it might be that the current grace period
> > > > > > > is not able to handle users because the system is running really
> > > > > > > slowly, but this is rather a corner case and not a problem.
> > > > > >
> > > > > > But in that case, the real issue is not the need for an extra grace
> > > > > > period, but rather the need for the wakeup processing to happen, correct?
> > > > > > Or am I missing something subtle here?
> > > > > >
> > > > > Basically, yes. If we had a spare dummy-node we could process the users
> > > > > within the current GP (no need for an extra one). The reasons we may not
> > > > > have one are, as you pointed out:
> > > > >
> > > > > - the wake-up issue, i.e. wake-up latency plus the time until we are
> > > > >   on_cpu;
> > > > > - slow list processing, for example due to priority: the kworker is not
> > > > >   given enough CPU time to make progress, thus "dummy-nodes" are not
> > > > >   released in time for reuse.
> > > > >
> > > > > Therefore, an extra GP is requested if there is a high flow of
> > > > > synchronize_rcu() users and the kworker is not able to make progress
> > > > > in time.
> > > > >
> > > > > For example, 60K+ parallel synchronize_rcu() users will trigger it.
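
To make the wait-head mechanism concrete, here is a minimal sketch of the
pool being described. All names below (sr_wait_node, sr_wait_heads[],
sr_wait_head_get(), SR_WAIT_HEADS_MAX) are illustrative guesses rather
than the patch's own identifiers:

	#include <linux/atomic.h>
	#include <linux/llist.h>

	#define SR_WAIT_HEADS_MAX 5

	struct sr_wait_node {
		atomic_t inuse;
		struct llist_node node;
	};

	static struct sr_wait_node sr_wait_heads[SR_WAIT_HEADS_MAX];

	/* Grab a free wait-head to mark the current GP, or NULL if none. */
	static struct llist_node *sr_wait_head_get(void)
	{
		int i;

		for (i = 0; i < SR_WAIT_HEADS_MAX; i++) {
			if (!atomic_cmpxchg_acquire(&sr_wait_heads[i].inuse, 0, 1))
				return &sr_wait_heads[i].node;
		}

		/*
		 * All wait-heads are still on the llist because the deferred
		 * kworker has not yet processed earlier batches, so the
		 * caller must request one more GP and retry later.
		 */
		return NULL;
	}

A NULL return here is exactly the "all wait-nodes are in use" case above,
and is what drives the extra-GP request.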
> > > >
> > > > OK, but what bad thing would happen if that was moved to precede the
> > > > rcu_seq_start(&rcu_state.gp_seq)? That way, the requested grace period
> > > > would be the same as the one that is just now starting.
> > > >
> > > > Something like this?
> > > >
> > > > start_new_poll = rcu_sr_normal_gp_init();
> > > >
> > > > /* Record GP times before starting GP, hence rcu_seq_start(). */
> > > > rcu_seq_start(&rcu_state.gp_seq);
> > > > ASSERT_EXCLUSIVE_WRITER(rcu_state.gp_seq);
> > > >
> > > I had a concern about the case where rcu_sr_normal_gp_init() handles
> > > the requests we currently have. Right after that, extra sync requests
> > > arrive and invoke start_poll_synchronize_rcu(), but since a GP has
> > > already been requested, it does not request an extra one. So the
> > > "last" incoming users might not be processed.
> > >
> > > That is why I have placed rcu_sr_normal_gp_init() after gp_seq is
> > > updated.
> > >
> > > I might be missing something, so please comment. Apart from that, we
> > > can move it as you proposed.
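
Just to make sure we are talking about the same window, here is how I
read the concern under the proposed ordering. This is only a sketch of
the interleaving; apart from rcu_sr_normal_gp_init(), rcu_seq_start(),
and start_poll_synchronize_rcu(), nothing below is meant to quote the
patch:

	start_new_poll = rcu_sr_normal_gp_init(); /* wait-head W inserted */

	/*
	 * A synchronize_rcu() arriving right here llist_add()s its node
	 * above W and calls start_poll_synchronize_rcu(), which sees a
	 * GP already requested and thus does not start another one.
	 * That node belongs to the next batch, but no next GP is on
	 * the way, so those last users are not processed until some
	 * later GP happens along.
	 */

	rcu_seq_start(&rcu_state.gp_seq);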
> >
> > Couldn't that possibility be handled by a check in rcu_gp_cleanup()?
> >
> It is controlled by the caller anyway, i.e. whether a new GP is needed.
>
> I am not 100% sure it is as straightforward as it might look to handle
> it in the rcu_sr_normal_gp_cleanup() function. At least I see that we
> need to access the first element of the llist and find out whether it
> is a wait-dummy-head or not. If not, we know there are extra incoming
> calls.
>
> So that way requires an extra call to start_poll_synchronize_rcu().

If this is invoked early enough in rcu_gp_cleanup(), all that needs to
happen is to set the need_gp flag. Plus you can count the number of
requests, snapshot that number at rcu_gp_init() time, and check whether
it changed at rcu_gp_cleanup() time. Later on, this could be used to
reduce the number of wakeups, correct?
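
For instance, something like this hedged sketch, in which
sr_request_count and sr_gp_snap are made-up names rather than anything
in the patch:

	#include <linux/atomic.h>

	/* Hypothetical counting scheme; illustrative names only. */
	static atomic_long_t sr_request_count;	/* one per synchronize_rcu() */
	static long sr_gp_snap;			/* snapshot taken at GP start */

	/* At rcu_gp_init() time. */
	static void rcu_sr_snap_requests(void)
	{
		sr_gp_snap = atomic_long_read(&sr_request_count);
	}

	/* Early in rcu_gp_cleanup(): did more requests arrive mid-GP? */
	static bool rcu_sr_need_gp(void)
	{
		return atomic_long_read(&sr_request_count) != sr_gp_snap;
	}

If rcu_sr_need_gp() returned true, rcu_gp_cleanup() would just set the
need_gp flag rather than do a full start_poll_synchronize_rcu(), which
is where the wakeup savings would come from.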
> I can add a comment about your concern and we can find the best approach
> later, if it is OK with you!
I agree that this should be added via a later patch, though I have not
yet given up on the possibility that this patch might be simple enough
to be later in this same series.
Thanx, Paul