Date:   Thu, 24 Nov 2022 06:28:30 +0100
From:   Juergen Gross <jgross@...e.com>
To:     Josh Poimboeuf <jpoimboe@...nel.org>,
        Andrew Cooper <Andrew.Cooper3@...rix.com>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        "Paul E. McKenney" <paulmck@...nel.org>,
        "sfr@...b.auug.org.au" <sfr@...b.auug.org.au>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "sstabellini@...nel.org" <sstabellini@...nel.org>,
        "boris.ostrovsky@...cle.com" <boris.ostrovsky@...cle.com>,
        "xen-devel@...ts.xenproject.org" <xen-devel@...ts.xenproject.org>,
        Thomas Gleixner <tglx@...utronix.de>
Subject: Re: objtool warning for next-20221118

On 24.11.22 03:39, Josh Poimboeuf wrote:
> On Wed, Nov 23, 2022 at 09:03:40AM -0800, Josh Poimboeuf wrote:
>> On Wed, Nov 23, 2022 at 10:52:09AM +0000, Andrew Cooper wrote:
>>>> Well, if you return from arch_cpu_idle_dead() you're back in the idle
>>>> loop -- exactly where you would be if you were to bootstrap the whole
>>>> CPU -- provided you have it remember the whole state (easier with a
>>>> vCPU).
>>
>> play_dead() really needs sane semantics.  Not only does it introduce a
>> surprise to the offlining code in do_idle(), it also skips the entire
>> hotplug state machine.  Not sure if that introduces any bugs, but at the
>> very least it's subtle and surprising.
>>
>>>> But maybe I'm missing something; let's add the Xen folks in.
>>>
>>> Calling VCPUOP_down on oneself always succeeds, but all it does is
>>> deschedule the vCPU.
>>>
>>> It can be undone at a later point by a different vcpu issuing VCPUOP_up
>>> against the previously-downed CPU, at which point the vCPU gets rescheduled.
>>>
>>> This is why the VCPUOP_down hypercall returns normally.  All state
>>> really is intact.
>>>
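
[Editorial note: a minimal sketch of the pairing described above, using the
kernel's HYPERVISOR_vcpu_op() wrapper. The helper functions, header paths and
names here are assumptions based on the mainline tree, not something this
thread specifies.]

    #include <xen/interface/vcpu.h>
    #include <asm/xen/hypercall.h>

    /*
     * On the vCPU going offline: always succeeds, and merely deschedules
     * the vCPU.  Code after the hypercall runs only once another vCPU has
     * brought us back with VCPUOP_up, with all state still intact.
     */
    static void vcpu_self_down(int cpu)
    {
            HYPERVISOR_vcpu_op(VCPUOP_down, xen_vcpu_nr(cpu), NULL);
            /* Reached only after a VCPUOP_up issued by a different vCPU. */
    }

    /* From a different, still-running vCPU: undo the earlier VCPUOP_down. */
    static int vcpu_bring_back(int cpu)
    {
            return HYPERVISOR_vcpu_op(VCPUOP_up, xen_vcpu_nr(cpu), NULL);
    }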
>>> As for what Linux does, this is how xen_pv_cpu_up() currently behaves.
>>> If you want to make Xen behave more like everything else, then put a BUG()
>>> after VCPUOP_down, and adjust xen_pv_cpu_up() to skip its initialised
>>> check and always use VCPUOP_initialise to bring the vCPU back online.
>>
>> Or we could do what sev_es_play_dead() does and just call start_cpu0()
>> after the hypercall returns?
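
[Editorial note: a sketch of that alternative, illustrative only and untested;
the function name is hypothetical and this is not the actual sev_es_play_dead()
body, just the "restart via start_cpu0() instead of returning into the idle
loop" pattern applied to the Xen PV path.]

    static void xen_pv_play_dead_alt(void)
    {
            play_dead_common();
            HYPERVISOR_vcpu_op(VCPUOP_down, xen_vcpu_nr(smp_processor_id()),
                               NULL);

            /* Only reached once another vCPU has issued VCPUOP_up for us:
             * redo bringup from scratch rather than resuming the old idle
             * loop context. */
            start_cpu0();
    }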
> 
> Something like so (untested).  This is only the x86 bits.
> 
> I think I convinced myself that start_cpu0() isn't buggy.  I'm looking
> at other cleanups, e.g. converging cpu_bringup_and_idle() with
> start_secondary().
> 
> I can pick it up again next week, post-turkey.
> 
> diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h
> index b4dbb20dab1a..e6d1d2810e38 100644
> --- a/arch/x86/include/asm/smp.h
> +++ b/arch/x86/include/asm/smp.h
> @@ -93,9 +93,10 @@ static inline void __cpu_die(unsigned int cpu)
>   	smp_ops.cpu_die(cpu);
>   }
>   
> -static inline void play_dead(void)
> +static inline void __noreturn play_dead(void)
>   {
>   	smp_ops.play_dead();
> +	BUG();
>   }
>   
>   static inline void smp_send_reschedule(int cpu)
> diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
> index 26e8f57c75ad..8e2841deb1eb 100644
> --- a/arch/x86/kernel/process.c
> +++ b/arch/x86/kernel/process.c
> @@ -700,7 +700,7 @@ EXPORT_SYMBOL(boot_option_idle_override);
>   static void (*x86_idle)(void);
>   
>   #ifndef CONFIG_SMP
> -static inline void play_dead(void)
> +static inline void __noreturn play_dead(void)
>   {
>   	BUG();
>   }
> diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
> index 55cad72715d9..d8b12ac1a7c5 100644
> --- a/arch/x86/kernel/smpboot.c
> +++ b/arch/x86/kernel/smpboot.c
> @@ -1833,9 +1833,12 @@ void native_play_dead(void)
>   	play_dead_common();
>   	tboot_shutdown(TB_SHUTDOWN_WFS);
>   
> -	mwait_play_dead();	/* Only returns on failure */
> +	mwait_play_dead();	/* Only returns if mwait is not supported */
> +
>   	if (cpuidle_play_dead())
>   		hlt_play_dead();
> +
> +	BUG();
>   }
>   
>   #else /* ... !CONFIG_HOTPLUG_CPU */
> diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c
> index 480be82e9b7b..30dc904ca990 100644
> --- a/arch/x86/xen/smp_pv.c
> +++ b/arch/x86/xen/smp_pv.c
> @@ -385,17 +385,9 @@ static void xen_pv_play_dead(void) /* used only with HOTPLUG_CPU */
>   {
>   	play_dead_common();
>   	HYPERVISOR_vcpu_op(VCPUOP_down, xen_vcpu_nr(smp_processor_id()), NULL);
> -	cpu_bringup();
> -	/*
> -	 * commit 4b0c0f294 (tick: Cleanup NOHZ per cpu data on cpu down)
> -	 * clears certain data that the cpu_idle loop (which called us
> -	 * and that we return from) expects. The only way to get that
> -	 * data back is to call:
> -	 */
> -	tick_nohz_idle_enter();
> -	tick_nohz_idle_stop_tick_protected();
>   
> -	cpuhp_online_idle(CPUHP_AP_ONLINE_IDLE);
> +	/* FIXME: converge cpu_bringup_and_idle() and start_secondary() */
> +	cpu_bringup_and_idle();

I think this will leak stack memory: cpu_bringup_and_idle() enters the idle
loop as a fresh call instead of returning to the do_idle() invocation that
led us into play_dead(), so the frames from the previous cycle stay live on
the idle stack. Multiple cpu offline/online cycles of the same cpu will
eventually exhaust the idle stack.
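
[Editorial note: a tiny userspace sketch of the failure mode described above,
under the assumption that the re-entered loop never returns to its caller;
compile without optimization so the calls are not turned into tail calls.]

    #include <stdio.h>

    static int cycles;

    static void idle_loop(void);        /* stands in for the idle loop */

    static void fake_play_dead(void)    /* stands in for xen_pv_play_dead() */
    {
            if (++cycles >= 5)
                    return;             /* stop the demo after a few cycles */
            idle_loop();                /* re-enter instead of returning */
    }

    static void idle_loop(void)
    {
            char marker;                /* address shows current stack depth */

            printf("cycle %d: stack marker at %p\n", cycles, (void *)&marker);
            fake_play_dead();
    }

    int main(void)
    {
            idle_loop();                /* marker sinks deeper every cycle */
            return 0;
    }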


Juergen
