Message-ID: <20130911151452.5810c793@gandalf.local.home>
Date: Wed, 11 Sep 2013 15:14:52 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
Cc: "H. Peter Anvin" <hpa@...ux.intel.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...e.hu>,
Jason Baron <jbaron@...mai.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
boris.ostrovsky@...cle.com, david.vrabel@...rix.com
Subject: Re: Regression :-) Re: [GIT PULL RESEND] x86/jumplabel changes
for v3.12-rc1
On Wed, 11 Sep 2013 14:56:54 -0400
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com> wrote:
> > I'm looking to NAK your patch because it is obvious that the jump label
> > code isn't doing what you expect it to be doing. And it wasn't until my
>
> Actually it is OK. They need to be enabled before the SMP code kicks in.
>
> > checks were in place for you to notice.
>
> Any suggestion on how to resolve the crash?
>
> The PV spinlock code is OK (I think, I need to think hard about this) until
> the spinlocks start being used by multiple CPUs. At that point the
> jump_labels have to be in place - otherwise you will end up with a spinlock
> going in the slowpath (patched over) and a kicker not using the slowpath
> and never kicking the waiter. Which ends with a hung system.
Note, a simple early_initcall() could do the trick. SMP isn't set up
until much further in the boot process.
>
> Or simply put - jump labels have to be set up before we boot
> the other CPUs.
Right, and initcalls() can easily serve that purpose.
>
> This would affect the KVM guests as well, I think if the slowpath
> waiter was blocking on the VCPU (which I think it is doing now, but
> not entirely sure?)
>
> P.S.
> I am out on vacation tomorrow for a week. Boris (CC-ed here) can help.
Your patch isn't wrong per se, but I'm hesitant to apply it because the
result differs depending on whether JUMP_LABEL is configured or not.
Using any jump_label() calls before jump_label_init() is called is
entering a gray area, and I think it should be avoided.
This patch should solve it for you:
xen: Do not enable spinlocks before jump_label_init()
The static_key paravirt_ticketlocks_enabled must not be incremented
before jump_label_init(), as doing so gives an inconsistent result
depending on whether JUMP_LABEL is configured. If CONFIG_JUMP_LABEL is
set, the static key update does not take place at the time of the
static_key_slow_inc() call but is deferred until jump_label_init();
otherwise, it happens immediately at the static_key_slow_inc() call.
The updates to the spinlocks need to happen before other processors are
initialized, which happens much later in boot up. A simple use of
early_initcall() will do the trick, as that too is called before other
processors are enabled and after jump_label_init() is called.
Reported-by: Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
Signed-off-by: Steven Rostedt <rostedt@...dmis.org>
diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index 9235842..4214bde 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -279,7 +279,6 @@ static void __init xen_smp_prepare_boot_cpu(void)
 
 	xen_filter_cpu_maps();
 	xen_setup_vcpu_info_placement();
-	xen_init_spinlocks();
 }
 
 static void __init xen_smp_prepare_cpus(unsigned int max_cpus)
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 0438b93..52582fd 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -285,25 +285,28 @@ void xen_uninit_lock_cpu(int cpu)
 
 static bool xen_pvspin __initdata = true;
 
-void __init xen_init_spinlocks(void)
+static __init int xen_init_spinlocks(void)
 {
 	/*
 	 * See git commit f10cd522c5fbfec9ae3cc01967868c9c2401ed23
 	 * (xen: disable PV spinlocks on HVM)
 	 */
 	if (xen_hvm_domain())
-		return;
+		return 0;
 
 	if (!xen_pvspin) {
 		printk(KERN_DEBUG "xen: PV spinlocks disabled\n");
-		return;
+		return 0;
 	}
 
 	static_key_slow_inc(&paravirt_ticketlocks_enabled);
 	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(xen_lock_spinning);
 	pv_lock_ops.unlock_kick = xen_unlock_kick;
+
+	return 0;
 }
+early_initcall(xen_init_spinlocks);
 
 static __init int xen_parse_nopvspin(char *arg)
 {
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 95f8c61..7609eb1 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -72,13 +72,9 @@ static inline void xen_hvm_smp_init(void) {}
 #endif
 
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
-void __init xen_init_spinlocks(void);
 void xen_init_lock_cpu(int cpu);
 void xen_uninit_lock_cpu(int cpu);
 #else
-static inline void xen_init_spinlocks(void)
-{
-}
 static inline void xen_init_lock_cpu(int cpu)
 {
 }