Message-ID: <20221124023934.nft3udxelth4lvai@treble>
Date: Wed, 23 Nov 2022 18:39:34 -0800
From: Josh Poimboeuf <jpoimboe@...nel.org>
To: Andrew Cooper <Andrew.Cooper3@...rix.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
"Paul E. McKenney" <paulmck@...nel.org>,
"sfr@...b.auug.org.au" <sfr@...b.auug.org.au>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"jgross@...e.com" <jgross@...e.com>,
"sstabellini@...nel.org" <sstabellini@...nel.org>,
"boris.ostrovsky@...cle.com" <boris.ostrovsky@...cle.com>,
"xen-devel@...ts.xenproject.org" <xen-devel@...ts.xenproject.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: objtool warning for next-20221118
On Wed, Nov 23, 2022 at 09:03:40AM -0800, Josh Poimboeuf wrote:
> On Wed, Nov 23, 2022 at 10:52:09AM +0000, Andrew Cooper wrote:
> > > Well, if you return from arch_cpu_idle_dead() you're back in the idle
> > > loop -- exactly where you would be if you were to bootstrap the whole
> > > CPU -- provided you have it remember the whole state (easier with a
> > > vCPU).
>
> play_dead() really needs sane semantics. Not only does it introduce a
> surprise to the offlining code in do_idle(), it also skips the entire
> hotplug state machine. Not sure if that introduces any bugs, but at the
> very least it's subtle and surprising.
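
For reference, the offlining leg of do_idle() looks roughly like this
(paraphrasing kernel/sched/idle.c from memory, so not exact):

	if (cpu_is_offline(cpu)) {
		tick_nohz_idle_stop_tick();
		cpuhp_report_idle_dead();
		arch_cpu_idle_dead();
		/* nothing past this point expects to run */
	}

so a play_dead() that quietly returns drops the CPU back into the idle
loop after the tick and hotplug state have already been torn down.
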
>
> > > But maybe I'm missing something, let's add the Xen folks on Cc.
> >
> > Calling VCPUOP_down on oneself always succeeds, but all it does is
> > deschedule the vCPU.
> >
> > It can be undone at a later point by a different vcpu issuing VCPUOP_up
> > against the previously-downed CPU, at which point the vCPU gets rescheduled.
> >
> > This is why the VCPUOP_down hypercall returns normally. All state
> > really is intact.
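
(Sketching the pair Andrew describes with the wrappers smp_pv.c already
uses -- the exact call sites here are mine, for illustration only:

	/* on the vCPU taking itself down: */
	HYPERVISOR_vcpu_op(VCPUOP_down, xen_vcpu_nr(smp_processor_id()), NULL);

	/* later, from some other vCPU, to bring it back: */
	HYPERVISOR_vcpu_op(VCPUOP_up, xen_vcpu_nr(cpu), NULL);

so the downed vCPU simply resumes right after its VCPUOP_down hypercall.)
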
> >
> > As for what Linux does, this is how xen_pv_cpu_up() currently behaves.
> > If you want to make Xen behave more like everything else, then put a
> > BUG() after VCPUOP_down, and adjust xen_pv_cpu_up() to skip its
> > initialised check and always use VCPUOP_initialise to bring the vCPU
> > back online.
>
> Or we could do what sev_es_play_dead() does and just call start_cpu0()
> after the hypercall returns?
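
For reference, sev_es_play_dead() ends more or less like this (quoting
from memory, so the details may be off):

	play_dead_common();

	/* park in the hypervisor until the CPU is woken back up */
	sev_es_ap_hlt_loop();

	/* woken up again: restart the CPU from scratch */
	start_cpu0();

The Xen PV case could do the same after VCPUOP_down, though the patch
below uses cpu_bringup_and_idle() instead.
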
Something like so (untested). This is only the x86 bits.
I think I convinced myself that start_cpu0() isn't buggy. I'm looking
at other cleanups, e.g. converging cpu_bringup_and_idle() with
start_secondary().
I can pick it up again next week, post-turkey.
diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h
index b4dbb20dab1a..e6d1d2810e38 100644
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -93,9 +93,10 @@ static inline void __cpu_die(unsigned int cpu)
smp_ops.cpu_die(cpu);
}
-static inline void play_dead(void)
+static inline void __noreturn play_dead(void)
{
smp_ops.play_dead();
+ BUG();
}
static inline void smp_send_reschedule(int cpu)
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 26e8f57c75ad..8e2841deb1eb 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -700,7 +700,7 @@ EXPORT_SYMBOL(boot_option_idle_override);
static void (*x86_idle)(void);
#ifndef CONFIG_SMP
-static inline void play_dead(void)
+static inline void __noreturn play_dead(void)
{
BUG();
}
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 55cad72715d9..d8b12ac1a7c5 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1833,9 +1833,12 @@ void native_play_dead(void)
play_dead_common();
tboot_shutdown(TB_SHUTDOWN_WFS);
- mwait_play_dead(); /* Only returns on failure */
+ mwait_play_dead(); /* Only returns if mwait is not supported */
+
if (cpuidle_play_dead())
hlt_play_dead();
+
+ BUG();
}
#else /* ... !CONFIG_HOTPLUG_CPU */
diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c
index 480be82e9b7b..30dc904ca990 100644
--- a/arch/x86/xen/smp_pv.c
+++ b/arch/x86/xen/smp_pv.c
@@ -385,17 +385,9 @@ static void xen_pv_play_dead(void) /* used only with HOTPLUG_CPU */
{
play_dead_common();
HYPERVISOR_vcpu_op(VCPUOP_down, xen_vcpu_nr(smp_processor_id()), NULL);
- cpu_bringup();
- /*
- * commit 4b0c0f294 (tick: Cleanup NOHZ per cpu data on cpu down)
- * clears certain data that the cpu_idle loop (which called us
- * and that we return from) expects. The only way to get that
- * data back is to call:
- */
- tick_nohz_idle_enter();
- tick_nohz_idle_stop_tick_protected();
- cpuhp_online_idle(CPUHP_AP_ONLINE_IDLE);
+ /* FIXME: converge cpu_bringup_and_idle() and start_secondary() */
+ cpu_bringup_and_idle();
}
#else /* !CONFIG_HOTPLUG_CPU */
diff --git a/include/linux/cpu.h b/include/linux/cpu.h
index 314802f98b9d..7fbbd1572288 100644
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -185,7 +185,7 @@ void arch_cpu_idle(void);
void arch_cpu_idle_prepare(void);
void arch_cpu_idle_enter(void);
void arch_cpu_idle_exit(void);
-void arch_cpu_idle_dead(void);
+void __noreturn arch_cpu_idle_dead(void);
int cpu_report_state(int cpu);
int cpu_check_up_prepare(int cpu);
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index f26ab2675f7d..097afe98e53e 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -71,7 +71,7 @@ static noinline int __cpuidle cpu_idle_poll(void)
void __weak arch_cpu_idle_prepare(void) { }
void __weak arch_cpu_idle_enter(void) { }
void __weak arch_cpu_idle_exit(void) { }
-void __weak arch_cpu_idle_dead(void) { }
+void __weak __noreturn arch_cpu_idle_dead(void) { BUG(); }
void __weak arch_cpu_idle(void)
{
cpu_idle_force_poll = 1;