Message-Id: <20170419132213.GA17362@linux.vnet.ibm.com>
Date: Wed, 19 Apr 2017 06:22:13 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Christian Borntraeger <borntraeger@...ibm.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org, mingo@...nel.org,
jiangshanlai@...il.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, tglx@...utronix.de, rostedt@...dmis.org,
dhowells@...hat.com, edumazet@...gle.com, dvhart@...ux.intel.com,
fweisbec@...il.com, oleg@...hat.com, bobby.prani@...il.com,
marc.zyngier@....com
Subject: Re: [PATCH v2 tip/core/rcu 0/13] Miscellaneous fixes for 4.12
On Wed, Apr 19, 2017 at 01:48:08PM +0200, Christian Borntraeger wrote:
> On 04/19/2017 01:28 PM, Peter Zijlstra wrote:
> >
> > So the thing Maz complained about is because KVM assumes
> > synchronize_srcu() is 'free' when there is no srcu_read_lock() activity.
> > This series 'breaks' that.
>
> Why is such a behaviour change not mentioned in the cover letter?
> I could not find anything in the patch descriptions that would
> indicate a slowdown. How much slower did it get?
It was an 8x slowdown in boot time of a guest OS running UEFI, from
five seconds to forty seconds. The fix restored the original boot time.
Why didn't I report the slowdown in my cover letter? Because I didn't
realize that I had created such a stupid bug! ;-)
Why didn't my testing reveal the bug? Because in my rcutorture testing,
the buggy code runs about as fast as the original, and the fixed new code
runs about an order of magnitude faster. This is because rcutorture's
performance statistics are mostly sensitive to throughput, while Marc's
boot-time run is mostly sensitive to latency.
> But indeed, there are several places at KVM startup which have been
> reworked to srcu since normal rcu was too slow for several usecases.
> (Mostly registering devices and related data structures at startup,
> basically the qemu/kvm coldplug interaction)
And here is the patch that restored Marc's boot speed. It simply replaces
the original (buggy) fixed delay with no delay in the expedited case,
keeping the same fixed delay in the non-expedited case.
Thanx, Paul
------------------------------------------------------------------------
commit 66ae176ab33dd3afa0b944d149fe8240e65743f9
Author: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Date: Tue Apr 18 10:28:31 2017 -0700
srcu: Expedite srcu_schedule_cbs_snp() callback invocation
Although Tree SRCU does reduce delays when there is at least one
synchronize_srcu_expedited() invocation pending, srcu_schedule_cbs_snp()
still waits for SRCU_INTERVAL before invoking callbacks. Since
synchronize_srcu_expedited() now posts a callback and waits for
that callback to do a wakeup, this destroys the expedited nature of
synchronize_srcu_expedited(). This destruction became apparent to
Marc Zyngier in the guise of a guest-OS bootup slowdown from five
seconds to no fewer than forty seconds.
This commit therefore invokes callbacks immediately at the end of the
grace period when there is at least one synchronize_srcu_expedited()
invocation pending. This brought Marc's guest-OS bootup times back
into the realm of reason.
Reported-by: Marc Zyngier <marc.zyngier@....com>
Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Tested-by: Marc Zyngier <marc.zyngier@....com>
diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index f9c684d79faa..e11b89a363f7 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -442,7 +442,8 @@ static void srcu_schedule_cbs_snp(struct srcu_struct *sp, struct srcu_node *snp)
int cpu;
for (cpu = snp->grplo; cpu <= snp->grphi; cpu++)
- srcu_schedule_cbs_sdp(per_cpu_ptr(sp->sda, cpu), SRCU_INTERVAL);
+ srcu_schedule_cbs_sdp(per_cpu_ptr(sp->sda, cpu),
+ atomic_read(&sp->srcu_exp_cnt) ? 0 : SRCU_INTERVAL);
}
/*