Message-ID: <5543A94B.3020108@redhat.com>
Date: Fri, 01 May 2015 12:26:51 -0400
From: Rik van Riel <riel@...hat.com>
To: Ingo Molnar <mingo@...nel.org>,
Andy Lutomirski <luto@...capital.net>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
X86 ML <x86@...nel.org>, williams@...hat.com,
Andrew Lutomirski <luto@...nel.org>, fweisbec@...hat.com,
Peter Zijlstra <peterz@...radead.org>,
Heiko Carstens <heiko.carstens@...ibm.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
Paolo Bonzini <pbonzini@...hat.com>
Subject: Re: [PATCH 3/3] context_tracking,x86: remove extraneous irq disable
& enable from context tracking on syscall entry

On 05/01/2015 12:21 PM, Ingo Molnar wrote:
>
> * Andy Lutomirski <luto@...capital.net> wrote:
>
>>> So what's the point? Why not remove this big source of overhead
>>> altogether?
>>
>> The last time I asked, the impression I got was that we needed two
>> things:
>>
>> 1. We can't pluck things from the RCU list without knowing whether
>> the CPU is in an RCU read-side critical section, and we can't know
>> that unless we have regular grace periods or we know that the CPU is
>> idle. To make the CPU detectably idle, we need to set a bit
>> somewhere.
>
> 'Idle' as in 'executing pure user-space mode, without entering the
> kernel and possibly doing an rcu_read_lock()', right?
>
> So we don't have to test it from the remote CPU: we could probe such
> CPUs via a single low-overhead IPI. I'd much rather push such overhead
> to sync_rcu() than to the syscall entry code!
>
> I can understand people running hard-RT workloads not wanting to see
> the overhead of a timer tick or a scheduler tick with variable (and
> occasionally heavy) work done in IRQ context, but the jitter caused by
> a single trivial IPI with constant work should be very, very low and
> constant.

Not if the realtime workload is running inside a KVM
guest.

At that point an IPI, either on the host or in the
guest, involves a full VMEXIT & VMENTER cycle.

I suspect it would be easy enough at user_enter() or
guest_enter() time to (a rough sketch follows below):
1) flip the bit
2) check to see if we have any callbacks queued on our own queue, and
3) if so, move the callbacks from that queue to the "another CPU can
   take these" queue

At that point, another CPU wanting to advance an RCU grace
period can:
1) skip trying to advance the grace period on the CPU that
   is in userspace or guest mode, and
2) take the "for somebody else" callbacks of that CPU, and
   run them

This does not seem overly complicated.
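
The grace-period side of the same toy model, reusing the invented
struct cpu_state and the includes from the sketch above (again, only
an illustration; the real grace-period machinery has to do much more):

/* Runs on whatever CPU is advancing the grace period. */
static void model_advance_grace_period(struct cpu_state *cpus, int nr_cpus)
{
        for (int cpu = 0; cpu < nr_cpus; cpu++) {
                struct cpu_state *cs = &cpus[cpu];

                if (!atomic_load_explicit(&cs->in_user_or_guest,
                                          memory_order_acquire)) {
                        /* CPU may be in the kernel: the normal
                         * grace-period wait applies (omitted here). */
                        continue;
                }

                /* 1) CPU is in userspace/guest mode, so it cannot be
                 *    in an RCU read-side critical section: skip it,
                 *    no IPI needed. */

                /* 2) take its "for somebody else" callbacks... */
                pthread_mutex_lock(&cs->offload_lock);
                struct rcu_cb *cb = cs->offload_cbs;
                cs->offload_cbs = NULL;
                pthread_mutex_unlock(&cs->offload_lock);

                /* ...and run them right here. */
                while (cb) {
                        struct rcu_cb *next = cb->next;
                        cb->func(cb);
                        cb = next;
                }
        }
}
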
--
All rights reversed