Message-Id: <20170719050459.GA30836@in.ibm.com>
Date:   Wed, 19 Jul 2017 10:34:59 +0530
From:   Gautham R Shenoy <ego@...ux.vnet.ibm.com>
To:     Nicholas Piggin <npiggin@...il.com>
Cc:     "Gautham R. Shenoy" <ego@...ux.vnet.ibm.com>,
        Michael Ellerman <mpe@...erman.id.au>,
        Michael Neuling <mikey@...ling.org>,
        Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>,
        Shilpasri G Bhat <shilpa.bhat@...ux.vnet.ibm.com>,
        Akshay Adiga <akshay.adiga@...ux.vnet.ibm.com>,
        linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] powernv/powerpc: Clear PECE1 in LPCR via stop-api
 only on Hotplug

Hello Nicholas,

On Wed, Jul 19, 2017 at 12:14:12PM +1000, Nicholas Piggin wrote:
> Thanks for working on these patches. We really need to get this stuff
> merged and tested asap :)
>

> On Tue, 18 Jul 2017 19:58:49 +0530

[..snip..]

> > diff --git a/arch/powerpc/platforms/powernv/smp.c b/arch/powerpc/platforms/powernv/smp.c
> > index 40dae96..5f2a712 100644
> > --- a/arch/powerpc/platforms/powernv/smp.c
> > +++ b/arch/powerpc/platforms/powernv/smp.c
> > @@ -143,7 +143,8 @@ static void pnv_smp_cpu_kill_self(void)
> >  {
> >  	unsigned int cpu;
> >  	unsigned long srr1, wmask;
> > -
> > +	uint64_t lpcr_val;
> > +	uint64_t pir;
> >  	/* Standard hot unplug procedure */
> >  	/*
> >  	 * This hard disables local interrupts, ensuring we have no lazy
> > @@ -164,13 +165,17 @@ static void pnv_smp_cpu_kill_self(void)
> >  	if (cpu_has_feature(CPU_FTR_ARCH_207S))
> >  		wmask = SRR1_WAKEMASK_P8;
> >  
> > +	pir = get_hard_smp_processor_id(cpu);
> >  	/* We don't want to take decrementer interrupts while we are offline,
> >  	 * so clear LPCR:PECE1. We keep PECE2 (and LPCR_PECE_HVEE on P9)
> >  	 * enabled as to let IPIs in.
> >  	 */
> > -	mtspr(SPRN_LPCR, mfspr(SPRN_LPCR) & ~(u64)LPCR_PECE1);
> > +	lpcr_val = mfspr(SPRN_LPCR) & ~(u64)LPCR_PECE1;
> > +	mtspr(SPRN_LPCR, lpcr_val);
> >  
> >  	while (!generic_check_cpu_restart(cpu)) {
> > +
> > +
> >  		/*
> >  		 * Clear IPI flag, since we don't handle IPIs while
> >  		 * offline, except for those when changing micro-threading
> > @@ -180,8 +185,15 @@ static void pnv_smp_cpu_kill_self(void)
> >  		 */
> >  		kvmppc_set_host_ipi(cpu, 0);
> >  
> > +		/*
> > +		 * If the CPU gets woken up by a special wakeup,
> > +		 * ensure that the SLW engine sets LPCR with
> > +		 * decrementer bit cleared, else we will get spurious
> > +		 * wakeups.
> > +		 */
> > +		if (cpu_has_feature(CPU_FTR_ARCH_300))
> > +			opal_slw_set_reg(pir, SPRN_LPCR, lpcr_val);
> 
> Can you put these details into pnv_cpu_offline? Possibly even from there
> into another special SPR save function? E.g.,

We could move this to pnv_cpu_offline, except that if we do get
spurious interrupts, we will end up programming the LPCR via the
stop-api again, which is not needed.

This patch can be optimized further: we need to program the LPCR
with the PECE1 bit cleared only once, before the while loop, and then
program it again with the PECE1 bit set once we are out of the while
loop.

But then, perhaps getting spurious interrupts when the CPU is
hotplugged is an unlikely event. So I will move this to pnv_cpu_offline.

> 
>   pnv_save_sprs_for_deep_state_decrementer_wakeup(bool decrementer_wakeup)
> 
> I'd like to put the LPCR manipulation for idle wake settings into idle.c
> as well (pnv_cpu_offline), I think it fits better in there.
>

I agree. Will respin this, test, and send out the v2.

> Thanks,

Thanks for the review.
> Nick
> 

--
Thanks and Regards
gautham.
