Message-ID: <CA+55aFzTn9e=F68SOtEvvZLdT6zCj9+gLc-OS7qhDKdM7zaasA@mail.gmail.com>
Date: Thu, 6 Apr 2017 10:31:46 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Will Deacon <will.deacon@....com>,
Nicholas Piggin <npiggin@...il.com>,
David Miller <davem@...emloft.net>,
"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Anton Blanchard <anton@...ba.org>,
linuxppc-dev list <linuxppc-dev@...abs.org>
Subject: Re: [RFC][PATCH] spin loop arch primitives for busy waiting
On Thu, Apr 6, 2017 at 9:36 AM, Peter Zijlstra <peterz@...radead.org> wrote:
>
> Something like the below, which is ugly (because I couldn't be bothered
> to resolve the header recursion and thus duplicates the monitor/mwait
> functions) and broken (because it hard assumes the hardware can do
> monitor/mwait).
Yeah, I think it needs to be conditional not just on mwait support,
but on the "new" mwait support (ie "CPUID.05H:ECX[bit 1] = 1").
And we'd probably want to make it even more strict, in that some mwait
implementations might simply not be very good for short waits.
Because I think the latency was hundreds of cycles at some point (but
that may have been the original version that wouldn't have had the
"new mwait" bit set), and there are also issues with virtualization
(ie we may not want to do this in a guest because it causes a VM
exit).
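
Roughly the kind of gate I'm thinking of - entirely untested sketch,
the helper name is made up and the feature checks are from memory:

#include <asm/cpufeature.h>	/* boot_cpu_has() */
#include <asm/processor.h>	/* cpuid_ecx() */

/* hypothetical helper: only spin with mwait on "new" mwait, bare metal */
static inline bool mwait_ok_for_spin(void)
{
	if (!boot_cpu_has(X86_FEATURE_MWAIT))
		return false;
	/* don't bother in a guest - mwait may just cause a VM exit */
	if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
		return false;
	/* CPUID.05H:ECX[bit 1] - the "new" mwait extensions */
	return cpuid_ecx(0x05) & (1 << 1);
}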
> But it builds and boots, no clue if its better or worse. Changing mwait
> eax to 0 would give us C1 and might also be worth a try I suppose.
Hmm. Also:
> + ___mwait(0xf0 /* C0 */, 0x01 /* INT */); \
Do you actually want that "INT" bit set? It's only meaningful if
interrupts are _masked_, afaik. Which doesn't necessarily make sense
for this case.
If interrupts would actually get delivered to software, mwait will
exit regardless.
So I think __mwait(0,0) might be the right thing at least in some
cases. Or at least worth timing at some point.
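
IOW, something like this is what I'd want to time against the
0xf0/0x01 version (untested, just to show the hint values I mean):

#include <asm/mwait.h>	/* __monitor(), __mwait() */

static inline void mwait_once(const void *addr)
{
	__monitor(addr, 0, 0);
	/*
	 * eax = 0: plain C1 hint, ecx = 0: no "break on interrupt
	 * while masked" extension.  If interrupts are unmasked and
	 * one gets delivered, mwait exits anyway.
	 */
	__mwait(0, 0);
}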
Of course, we really shouldn't have very many places that actually
need this. We currently use it in three places, I think:
- spinlocks. This is likely the big case.
- the per-cpu cross-cpu calling (call_single_data) exclusivity waiting
- the magical on_cpu waiting in ttwu. I'm not sure how often this
actually triggers; the original argument for this was to avoid an
expensive barrier - the loop itself probably very seldom actually
spins.
It may be, for example, that just the fact that your implementation
does the "__monitor()" part before doing the load and test is already
too expensive for the (common) case where we don't expect to loop.
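
To make that concrete, what I'd want is something where the
no-contention path is just the load and test, and only the slow path
ever touches monitor. A rough untested sketch, name made up:

#include <linux/compiler.h>	/* READ_ONCE(), likely() */
#include <asm/mwait.h>

/* wait for *word to become zero; the fast path never arms the monitor */
static inline void spin_wait_zero(unsigned long *word)
{
	if (likely(!READ_ONCE(*word)))
		return;

	do {
		__monitor(word, 0, 0);
		/* re-check after arming, before going to sleep */
		if (!READ_ONCE(*word))
			break;
		__mwait(0, 0);
	} while (READ_ONCE(*word));
}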
But maybe "monitor" is really cheap. I suspect it's microcoded,
though, which implies "no".
Linus