Message-Id: <20180709123457.GM3593@linux.vnet.ibm.com>
Date: Mon, 9 Jul 2018 05:34:57 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: David Woodhouse <dwmw2@...radead.org>, mhillenb@...zon.de,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [RFC] Make need_resched() return true when rcu_urgent_qs
requested

On Mon, Jul 09, 2018 at 01:06:57PM +0200, Peter Zijlstra wrote:
> On Mon, Jul 09, 2018 at 11:56:41AM +0100, David Woodhouse wrote:
>
> > > But either proposal is exactly the same in this respect. The whole
> > > rcu_urgent_qs thing won't be set any earlier either.
> >
> > Er.... Marius, our latencies in expand_fdtable() definitely went from
> > ~10s to well below one second when we just added the rcu_all_qs() into
> > the loop, didn't they? And that does nothing if !rcu_urgent_qs.
>
> Argh, I never found that because of the obfuscation:
>
> ruqp = per_cpu_ptr(&rcu_dynticks.rcu_urgent_qs, rdp->cpu);
> ...
> smp_store_release(ruqp, true);
>
> Using git grep "rcu_urgent_qs.*true", I found only
> rcu_request_urgent_qs_task() and sync_sched_exp_handler().

Yeah, I got tired of typing that long string too many times, so I made
a short-named pointer...
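
For reference, the unabbreviated store that such a grep would have had
to match looks something like this (a sketch of the pattern, not a
quote of the tree):

	smp_store_release(per_cpu_ptr(&rcu_dynticks.rcu_urgent_qs,
				      rdp->cpu), true);

The two-step version with the short-named ruqp pointer keeps
"rcu_urgent_qs" and "true" on different lines, which is what defeated
the grep.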

> But how come KVM even triggers that case? rcu_implicit_dynticks_qs()
> is for NOHZ and offline CPUs.

Mostly, yes. But it also takes measures when CPUs take too long to
check in.
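
The flow is roughly as follows (a simplified sketch of what
rcu_implicit_dynticks_qs() does, not the code itself, and
halfway_to_rcu_stall is a made-up name for the actual threshold,
which is derived from the stall-warning timeout):

	/* The grace period has run for a while without this CPU
	 * reporting a quiescent state: set the urgency flag that
	 * rcu_all_qs() and friends check. */
	if (time_after(jiffies, rsp->gp_start + jiffies_till_sched_qs))
		smp_store_release(ruqp, true);

	/* Things have dragged on much longer still: get pushy. */
	if (time_after(jiffies, rsp->gp_start + halfway_to_rcu_stall))
		resched_cpu(rdp->cpu);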

The reason that David's latencies went from 100ms to one second is
that I made this code less aggressive about invoking resched_cpu().
The reason I did that was to allow cond_resched_rcu_qs() to be used
less heavily without causing performance regressions. Plain
cond_resched() on !PREEMPT is intended to handle the faster checks,
but KVM defeats this by checking need_resched() before invoking
cond_resched().
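
The problematic shape is roughly the following (a sketch only;
vcpu_should_run() and enter_guest() are made-up stand-ins for the
actual KVM loop):

	while (vcpu_should_run(vcpu)) {
		enter_guest(vcpu);	/* run the guest for a while */
		if (need_resched())	/* usually false here... */
			cond_resched();	/* ...so the quiescent state
					 * that cond_resched() reports
					 * to RCU on !PREEMPT almost
					 * never gets reported */
	}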

For PREEMPT, either the scheduling-clock interrupt sees that there is
no RCU read-side critical section in progress, or we are running in
idle or in nohz_full userspace execution.

Of course, if there really is a huge RCU read-side critical section
that takes 15 seconds to execute, there is nothing RCU can do about
it. But as you say later, even a one-second critical section is huge
and needs to be broken up somehow. That breaking up should introduce
(at the very least) a cond_resched() for !PREEMPT, or an
rcu_read_unlock() and thus rcu_read_unlock_special() for PREEMPT.
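
That is, something like the following at a convenient point in the
loop (a sketch; where exactly it goes is of course up to the workload):

	/* !PREEMPT: reports a quiescent state and maybe reschedules. */
	cond_resched();

or, for PREEMPT, momentarily exiting the critical section:

	/* Any deferred quiescent-state reporting happens via
	 * rcu_read_unlock_special() inside rcu_read_unlock(). */
	rcu_read_unlock();
	rcu_read_lock();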

							Thanx, Paul