Message-ID: <ZxJBAubok8pc5ek7@arm.com>
Date: Fri, 18 Oct 2024 12:05:38 +0100
From: Catalin Marinas <catalin.marinas@....com>
To: Ankur Arora <ankur.a.arora@...cle.com>
Cc: "Okanovic, Haris" <harisokn@...zon.com>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"rafael@...nel.org" <rafael@...nel.org>,
"sudeep.holla@....com" <sudeep.holla@....com>,
"joao.m.martins@...cle.com" <joao.m.martins@...cle.com>,
"dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
"konrad.wilk@...cle.com" <konrad.wilk@...cle.com>,
"wanpengli@...cent.com" <wanpengli@...cent.com>,
"cl@...two.org" <cl@...two.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"mingo@...hat.com" <mingo@...hat.com>,
"maobibo@...ngson.cn" <maobibo@...ngson.cn>,
"pbonzini@...hat.com" <pbonzini@...hat.com>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"misono.tomohiro@...itsu.com" <misono.tomohiro@...itsu.com>,
"daniel.lezcano@...aro.org" <daniel.lezcano@...aro.org>,
"arnd@...db.de" <arnd@...db.de>,
"lenb@...nel.org" <lenb@...nel.org>,
"will@...nel.org" <will@...nel.org>,
"hpa@...or.com" <hpa@...or.com>,
"peterz@...radead.org" <peterz@...radead.org>,
"boris.ostrovsky@...cle.com" <boris.ostrovsky@...cle.com>,
"vkuznets@...hat.com" <vkuznets@...hat.com>,
"linux-arm-kernel@...ts.infradead.org" <linux-arm-kernel@...ts.infradead.org>,
"linux-pm@...r.kernel.org" <linux-pm@...r.kernel.org>,
"bp@...en8.de" <bp@...en8.de>,
"mtosatti@...hat.com" <mtosatti@...hat.com>,
"x86@...nel.org" <x86@...nel.org>,
"mark.rutland@....com" <mark.rutland@....com>
Subject: Re: [PATCH v8 01/11] cpuidle/poll_state: poll via
smp_cond_load_relaxed()

On Thu, Oct 17, 2024 at 03:47:31PM -0700, Ankur Arora wrote:
> Catalin Marinas <catalin.marinas@....com> writes:
> > On Wed, Oct 16, 2024 at 03:13:33PM +0000, Okanovic, Haris wrote:
> >> On Tue, 2024-10-15 at 13:04 +0100, Catalin Marinas wrote:
> >> > On Wed, Sep 25, 2024 at 04:24:15PM -0700, Ankur Arora wrote:
> >> > > + smp_cond_load_relaxed(&current_thread_info()->flags,
> >> > > + VAL & _TIF_NEED_RESCHED ||
> >> > > + loop_count++ >= POLL_IDLE_RELAX_COUNT);
> >> >
> >> > The above is not guaranteed to make progress if _TIF_NEED_RESCHED is
> >> > never set. With the event stream enabled on arm64, the WFE will
> >> > eventually be woken up, loop_count incremented and the condition would
> >> > become true. However, the smp_cond_load_relaxed() semantics require that
> >> > a different agent updates the variable being waited on, not the waiting
> >> > CPU updating it itself. Also note that the event stream can be disabled
> >> > on arm64 on the kernel command line.
> >>
> >> Alternately could we condition arch_haltpoll_want() on
> >> arch_timer_evtstrm_available(), like v7?
> >
> > No. The problem is about the smp_cond_load_relaxed() semantics - it
> > can't wait on a variable that's only updated in its exit condition. We
> > need a new API for this, especially since we are changing generic code
> > here (even it was arm64 code only, I'd still object to such
> > smp_cond_load_*() constructs).
>
> Right. The problem is that smp_cond_load_relaxed() used in this context
> depends on the event-stream side effect when the interface does not
> encode those semantics anywhere.
>
> So, a smp_cond_load_timeout() like in [1] that continues to depend on
> the event-stream is better because it explicitly accounts for the side
> effect from the timeout.
>
> This would cover both the WFxT and the event-stream case.

Indeed.
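As a rough userspace model of the semantics being proposed here (the kernel primitive does not exist yet; the name, the mask-based condition, and the poll-count timeout are illustrative only), a smp_cond_load_timeout()-style interface makes the timeout part of the contract, so forward progress no longer depends on another CPU storing to the variable:

```c
/*
 * Userspace sketch of a hypothetical smp_cond_load_timeout().
 * Waits until (*ptr & mask) is non-zero or max_polls iterations
 * have elapsed. Unlike open-coding a counter inside the condition
 * of smp_cond_load_relaxed(), the timeout is explicit, so the
 * implementation is free to use WFxT or rely on the event stream.
 */
static inline unsigned long
model_cond_load_timeout(volatile unsigned long *ptr,
			unsigned long mask,
			unsigned long max_polls)
{
	unsigned long val;
	unsigned long polls = 0;

	for (;;) {
		val = *ptr;
		if ((val & mask) || ++polls >= max_polls)
			break;
		/* a real arm64 primitive would WFE/WFxT here; with the
		 * event stream enabled the wait still wakes periodically
		 * even if no other CPU ever writes *ptr */
	}
	return val;
}
```

The poll-count timeout stands in for the real time-based one; the point is only that the caller's exit-on-timeout is visible to the primitive rather than smuggled into the condition expression.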
> The part I'm a little less sure about is the case where WFxT and the
> event-stream are absent.
>
> As you said earlier, for that case on arm64, we use either short
> __delay() calls or spin in cpu_relax(), both of which are essentially
> the same thing.

Something derived from __delay(), not exactly this function. We can't
use it directly as we also want it to wake up if an event is generated
as a result of a memory write (like the current smp_cond_load()).
> Now on x86 cpu_relax() is quite optimal. The spec explicitly recommends
> it and from my measurement a loop doing "while (!cond) cpu_relax()" gets
> an IPC of something like 0.1 or similar.
>
> On my arm64 systems however the same loop gets an IPC of 2. Now this
> likely varies greatly but seems like it would run pretty hot some of
> the time.

For the cpu_relax() fall-back, it wouldn't be any worse than the current
poll_idle() code, though I guess in this instance we'd not enable idle
polling.

I expect the event stream to be on in all production deployments. The
reason we have a way to disable it is for testing. We've had hardware
errata in the past where the event on spin_unlock doesn't cross the
cluster boundary. We'd not notice because of the event stream.
> So maybe the right thing to do would be to keep smp_cond_load_timeout()
> but only allow polling if WFxT or event-stream is enabled. And enhance
> cpuidle_poll_state_init() to fail if the above condition is not met.

We could do this as well. Maybe hide this behind another function like
arch_has_efficient_smp_cond_load_timeout() (well, some shorter name),
checked somewhere in or on the path to cpuidle_poll_state_init(). Well,
it might be simpler to do this in haltpoll_want(), backed by an
arch_haltpoll_want() function.
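A minimal model of that gating idea (all names here are hypothetical, taken from the discussion above rather than from any existing kernel symbol; the two flags stand in for runtime detection of FEAT_WFxT and arch_timer_evtstrm_available()):

```c
#include <stdbool.h>

/* Illustrative stand-ins for runtime feature detection. */
static bool have_wfxt;
static bool have_evtstrm;

/*
 * Sketch of the suggested arch gate: only advertise haltpoll
 * (or, more generally, an efficient smp_cond_load_timeout())
 * when the timed wait can make progress without spinning hot
 * in a cpu_relax() loop.
 */
static bool arch_haltpoll_want_model(void)
{
	return have_wfxt || have_evtstrm;
}
```

With neither feature available, cpuidle_poll_state_init() (or haltpoll_want()) would simply decline to register the polling state.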

I assume we want poll_idle() to wake up as soon as a task becomes
available. Otherwise we could have just used udelay() for some fraction
of cpuidle_poll_time() instead of cpu_relax().
--
Catalin