Message-ID: <20170719152032.7724085f@roar.ozlabs.ibm.com>
Date: Wed, 19 Jul 2017 15:20:32 +1000
From: Nicholas Piggin <npiggin@...il.com>
To: Gautham R Shenoy <ego@...ux.vnet.ibm.com>
Cc: Michael Ellerman <mpe@...erman.id.au>,
Michael Neuling <mikey@...ling.org>,
Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>,
Shilpasri G Bhat <shilpa.bhat@...ux.vnet.ibm.com>,
Akshay Adiga <akshay.adiga@...ux.vnet.ibm.com>,
linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] powernv/powerpc: Clear PECE1 in LPCR via stop-api
only on Hotplug
On Wed, 19 Jul 2017 10:34:59 +0530
Gautham R Shenoy <ego@...ux.vnet.ibm.com> wrote:
> Hello Nicholas,
>
> On Wed, Jul 19, 2017 at 12:14:12PM +1000, Nicholas Piggin wrote:
> > Thanks for working on these patches. We really need to get this stuff
> > merged and tested asap :)
> >
>
> > On Tue, 18 Jul 2017 19:58:49 +0530
>
> [..snip..]
>
> > > diff --git a/arch/powerpc/platforms/powernv/smp.c b/arch/powerpc/platforms/powernv/smp.c
> > > index 40dae96..5f2a712 100644
> > > --- a/arch/powerpc/platforms/powernv/smp.c
> > > +++ b/arch/powerpc/platforms/powernv/smp.c
> > > @@ -143,7 +143,8 @@ static void pnv_smp_cpu_kill_self(void)
> > > {
> > > unsigned int cpu;
> > > unsigned long srr1, wmask;
> > > -
> > > + uint64_t lpcr_val;
> > > + uint64_t pir;
> > > /* Standard hot unplug procedure */
> > > /*
> > > * This hard disables local interrupts, ensuring we have no lazy
> > > @@ -164,13 +165,17 @@ static void pnv_smp_cpu_kill_self(void)
> > > if (cpu_has_feature(CPU_FTR_ARCH_207S))
> > > wmask = SRR1_WAKEMASK_P8;
> > >
> > > + pir = get_hard_smp_processor_id(cpu);
> > > /* We don't want to take decrementer interrupts while we are offline,
> > > * so clear LPCR:PECE1. We keep PECE2 (and LPCR_PECE_HVEE on P9)
> > > * enabled as to let IPIs in.
> > > */
> > > - mtspr(SPRN_LPCR, mfspr(SPRN_LPCR) & ~(u64)LPCR_PECE1);
> > > + lpcr_val = mfspr(SPRN_LPCR) & ~(u64)LPCR_PECE1;
> > > + mtspr(SPRN_LPCR, lpcr_val);
> > >
> > > while (!generic_check_cpu_restart(cpu)) {
> > > +
> > > +
> > > /*
> > > * Clear IPI flag, since we don't handle IPIs while
> > > * offline, except for those when changing micro-threading
> > > @@ -180,8 +185,15 @@ static void pnv_smp_cpu_kill_self(void)
> > > */
> > > kvmppc_set_host_ipi(cpu, 0);
> > >
> > > + /*
> > > + * If the CPU gets woken up by a special wakeup,
> > > + * ensure that the SLW engine sets LPCR with
> > > + * decrementer bit cleared, else we will get spurious
> > > + * wakeups.
> > > + */
> > > + if (cpu_has_feature(CPU_FTR_ARCH_300))
> > > + opal_slw_set_reg(pir, SPRN_LPCR, lpcr_val);
> >
> > Can you put these details into pnv_cpu_offline? Possibly even from there
> > into another special SPR save function? E.g.,
>
> We could move this to pnv_cpu_offline. Except that if we do get
> spurious interrupts, we will end up programming the LPCR via the
> stop-api again, which is not needed.
>
> Even this patch above can be optimized further. We need to program the
> LPCR with the PECE1 bit cleared only once, before the while loop, and
> then program the LPCR with the PECE1 bit set again once we are out of
> the while loop.
>
> But then, perhaps getting spurious interrupts when the CPU is
> hotplugged is an unlikely event. So I will move this to pnv_cpu_offline.
Yeah, I think it just tidies it up a bit. You're right, but as you say
it's not really a critical path.
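Just to make sure we mean the same shape, I'd picture it ending up
roughly like this, i.e. programmed once on the way down and once on the
way back up (completely untested, just to illustrate):

	lpcr_val = mfspr(SPRN_LPCR) & ~(u64)LPCR_PECE1;
	mtspr(SPRN_LPCR, lpcr_val);
	if (cpu_has_feature(CPU_FTR_ARCH_300))
		opal_slw_set_reg(pir, SPRN_LPCR, lpcr_val);

	while (!generic_check_cpu_restart(cpu)) {
		/*
		 * Existing loop body: clear the host IPI flag, enter
		 * the deepest available idle state, handle wakeups.
		 */
	}

	/* Coming back online, let the decrementer wake us again. */
	lpcr_val = mfspr(SPRN_LPCR) | (u64)LPCR_PECE1;
	mtspr(SPRN_LPCR, lpcr_val);
	if (cpu_has_feature(CPU_FTR_ARCH_300))
		opal_slw_set_reg(pir, SPRN_LPCR, lpcr_val);

Whatever reads best in the final version is fine by me.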
> > pnv_save_sprs_for_deep_state_decrementer_wakeup(bool decrementer_wakeup)
> >
> > I'd like to put the LPCR manipulation for idle wake settings into idle.c
> > as well (pnv_cpu_offline), I think it fits better in there.
> >
>
> I agree. Will respin this, test, and send out the v2.
Great, thanks. The first patch seemed okay to me.
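To be a bit more concrete about the save function I mentioned,
something along these lines is what I was picturing (untested sketch,
the name and exact shape are of course up for grabs):

/*
 * Clear or set LPCR[PECE1] for an offline CPU and mirror the value to
 * the SLW engine via the stop-api, so that a wakeup from a deep stop
 * state comes back with the same setting rather than a stale one.
 */
static void pnv_save_sprs_for_deep_state_decrementer_wakeup(bool decrementer_wakeup)
{
	u64 lpcr_val = mfspr(SPRN_LPCR);

	if (decrementer_wakeup)
		lpcr_val |= (u64)LPCR_PECE1;
	else
		lpcr_val &= ~(u64)LPCR_PECE1;

	mtspr(SPRN_LPCR, lpcr_val);

	if (cpu_has_feature(CPU_FTR_ARCH_300))
		opal_slw_set_reg(get_hard_smp_processor_id(smp_processor_id()),
				 SPRN_LPCR, lpcr_val);
}

pnv_cpu_offline() would call it with false on the way down and true once
the CPU is coming back, so pnv_smp_cpu_kill_self() never has to know
about the stop-api at all.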
I wonder if we should think about a more structured kernel API for
modifying these kinds of system registers so we always have the
up-to-date values stored in memory. Many of them do need to be
restored after sleep, but they don't need to be saved per-thread
or saved every time we go to sleep.
That's far more work of course, but it might be something we want
to think about for the future.
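As a very rough sketch of the kind of thing I mean (names invented
here, and for something like the LPCR the shadow would really want to
be per-core rather than per-thread):

static DEFINE_PER_CPU(u64, lpcr_shadow);

/* All writers go through here, so the in-memory copy is always current. */
static inline void pnv_write_lpcr(u64 val)
{
	this_cpu_write(lpcr_shadow, val);
	mtspr(SPRN_LPCR, val);
}

/* Wakeup from a state that loses the SPR just restores the shadow. */
static inline void pnv_restore_lpcr(void)
{
	mtspr(SPRN_LPCR, this_cpu_read(lpcr_shadow));
}

Then the stop-api (or a restore on wakeup) always has a correct value
to hand, and hotplug and idle don't need to carry their own copies.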
Thanks,
Nick