Message-ID: <5366B8C8.3080700@intel.com>
Date: Sun, 04 May 2014 15:01:44 -0700
From: "H. Peter Anvin" <h.peter.anvin@...el.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
CC: Ingo Molnar <mingo@...nel.org>,
Andy Lutomirski <luto@...capital.net>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
the arch/x86 maintainers <x86@...nel.org>,
Steven Rostedt <rostedt@...dmis.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: [RFC/HACK] x86: Fast return to kernel
On 05/04/2014 02:31 PM, Linus Torvalds wrote:
> On Sun, May 4, 2014 at 12:59 PM, H. Peter Anvin <h.peter.anvin@...el.com> wrote:
>>
>> Maybe let userspace sit in a tight loop doing RDTSC, and look for data
>> points too far apart to have been uninterrupted?
>
> That won't work, since Andy's patch improves on the "interrupt
> happened in kernel space", not on the user-space interrupt case.
>
I was thinking about your proposal, not Andy's.
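The userspace variant would be something like this, untested (assumes
__rdtsc() from <x86intrin.h> on gcc/clang; the 500-cycle cut-off and
the 10-billion-cycle window are the same as below):

/* Untested sketch: spin reading the TSC and count how much of a fixed
 * window passes in gaps small enough to have been uninterrupted. */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>

int main(void)
{
        const uint64_t window = 10000000000ULL;   /* ~10 billion cycles */
        uint64_t start = __rdtsc();
        uint64_t end = start + window;
        uint64_t prev = start, sum = 0;

        for (;;) {
                uint64_t tsc = __rdtsc();
                if (tsc > end)
                        break;
                if (tsc < prev + 500)   /* small gap: no interrupt hit us */
                        sum += tsc - prev;
                prev = tsc;
        }
        printf("%llu of %llu cycles uninterrupted\n",
               (unsigned long long)sum, (unsigned long long)window);
        return 0;
}
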
> But some variation on that with a kernel module that does something like
>
> - take over one CPU and force tons of timer interrupts on that CPU
> using the local APIC
>
> - for (say) ten billion cycles, do something like this in that kernel module:
>
> #define TEN_BILLION (10000000000)
>
> unsigned long prev = 0, sum = 0, end = rdtsc() + TEN_BILLION;
> for (;;) {
>         unsigned long tsc = rdtsc();
>         if (tsc > end)
>                 break;
>         if (tsc < prev + 500) {
>                 sum += tsc - prev;
>         }
>         prev = tsc;
> }
>
> and see how big a fraction of the 10 billion cycles you capture in
> 'sum'. The bigger the fraction, the less time the timer interrupts
> stole from your CPU.
>
> That "500" is just a random cut-off. Any interrupt will take more than
> that many TSC cycles. So the above basically counts how much
> uninterrupted time that thread gets.
Yes, same idea, but in a kernel module.
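
Roughly like this, untested -- just the measurement loop, run with
preemption off but interrupts on.  The part that actually hammers the
CPU with local APIC timer interrupts is left out, and it assumes
get_cycles() reads the TSC on the box in question:

/*
 * Untested sketch: measure what fraction of a fixed TSC window a busy
 * loop in the kernel gets uninterrupted.
 */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/preempt.h>
#include <linux/timex.h>

#define TEN_BILLION 10000000000ULL

static int __init tscbench_init(void)
{
        u64 prev, sum = 0, end, tsc;

        preempt_disable();              /* stay on this CPU; irqs stay on */
        prev = get_cycles();
        end = prev + TEN_BILLION;

        for (;;) {
                tsc = get_cycles();
                if (tsc > end)
                        break;
                if (tsc < prev + 500)   /* small gap: no interrupt in between */
                        sum += tsc - prev;
                prev = tsc;
        }
        preempt_enable();

        pr_info("tscbench: %llu of %llu cycles uninterrupted\n",
                (unsigned long long)sum, (unsigned long long)TEN_BILLION);
        return 0;
}
module_init(tscbench_init);

static void __exit tscbench_exit(void)
{
}
module_exit(tscbench_exit);

MODULE_LICENSE("GPL");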
-hpa