Message-ID: <20150501162109.GA1091@gmail.com>
Date: Fri, 1 May 2015 18:21:09 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Andy Lutomirski <luto@...capital.net>
Cc: Rik van Riel <riel@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
X86 ML <x86@...nel.org>, williams@...hat.com,
Andrew Lutomirski <luto@...nel.org>, fweisbec@...hat.com,
Peter Zijlstra <peterz@...radead.org>,
Heiko Carstens <heiko.carstens@...ibm.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
Paolo Bonzini <pbonzini@...hat.com>
Subject: Re: [PATCH 3/3] context_tracking,x86: remove extraneous irq disable
 & enable from context tracking on syscall entry

* Andy Lutomirski <luto@...capital.net> wrote:

> > So what's the point? Why not remove this big source of overhead
> > altogether?
>
> The last time I asked, the impression I got was that we needed two
> things:
>
> 1. We can't pluck things from the RCU list without knowing whether
> the CPU is in an RCU read-side critical section, and we can't know
> that unless we have regular grace periods or we know that the CPU is
> idle. To make the CPU detectably idle, we need to set a bit
> somewhere.

'Idle' as in 'executing pure user-space mode, without entering the
kernel and possibly doing an rcu_read_lock()', right?

So instead of testing it from the remote CPU via a flag, we could
probe such CPUs with a single low-overhead IPI. I'd much rather push
such overhead to sync_rcu() than to the syscall entry code!
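
Something like this is what I have in mind - a minimal sketch, not
actual kernel code: rcu_probe_ipi() and cpu_is_quiescent() are made-up
names; only smp_call_function_single(), get_irq_regs() and user_mode()
are real APIs:

#include <linux/smp.h>
#include <linux/ptrace.h>
#include <asm/irq_regs.h>

struct probe_result {
	bool was_in_user;
};

/* Runs on the target CPU, in IRQ context. */
static void rcu_probe_ipi(void *info)
{
	struct probe_result *res = info;
	struct pt_regs *regs = get_irq_regs();

	/*
	 * If the IPI interrupted user mode, the CPU cannot be inside
	 * an rcu_read_lock() section: that is a quiescent state.
	 */
	res->was_in_user = regs && user_mode(regs);
}

static bool cpu_is_quiescent(int cpu)
{
	struct probe_result res = { .was_in_user = false };

	/* wait == 1: a small, constant amount of work on the target. */
	smp_call_function_single(cpu, rcu_probe_ipi, &res, 1);

	return res.was_in_user;
}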

I can understand people running hard-RT workloads not wanting to see
the overhead of a timer tick or a scheduler tick with variable (and
occasionally heavy) work done in IRQ context, but the jitter caused by
a single trivial IPI with constant work should be very, very low and
constant.

If user-space RT code does not tolerate _that_ kind of latency then it
really has its priorities wrong and we should not try to please it. It
should not hurt the other 99.9% of sane hard-RT users.

And the other use case, virtualization, obviously does not care and
could take the IPI just fine.

> 2. To suppress the timer tick, we need to get some timing for, um,
> the scheduler? I wasn't really sure about this one.

So we have variable timeslice timers implemented for the scheduler;
they are off by default, but they worked the last time someone tried
them. See the 'HRTICK' scheduler feature.

And for SCHED_FIFO that timeout can be 'never' - i.e. essentially
stopping the scheduler tick (within reason).
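
(HRTICK can be flipped at runtime via /sys/kernel/debug/sched_features,
given CONFIG_SCHED_DEBUG.) The core idea, as a rough sketch - not the
actual kernel implementation, hrtick_arm() is an illustrative name:

#include <linux/hrtimer.h>
#include <linux/ktime.h>

static void hrtick_arm(struct hrtimer *timer, u64 slice_ns)
{
	/*
	 * SCHED_FIFO: no timeslice to enforce, so don't arm the timer
	 * at all - the scheduler tick effectively stops.
	 */
	if (!slice_ns)
		return;

	/* Fire exactly when the current timeslice runs out. */
	hrtimer_start(timer, ns_to_ktime(slice_ns), HRTIMER_MODE_REL_PINNED);
}
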
> Could we reduce the overhead by making the IN_USER vs IN_KERNEL
> indication be a single bit and, worst case, an rdtsc and maybe a
> subtraction? We could probably get away with banning full nohz on
> non-invariant tsc systems.
>
> (I do understand why it would be tricky to transition from IN_USER
> to IN_KERNEL with IRQs on. Solvable, maybe, but tricky.)

We can make it literally zero overhead: by using an IPI from
synchronize_rcu() and friends.
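
For comparison, the single-bit IN_USER/IN_KERNEL variant could look
roughly like this - a sketch with illustrative names, not the real
context tracking code; the per-CPU accessors and
smp_store_release()/smp_load_acquire() are real APIs:

#include <linux/percpu.h>
#include <asm/barrier.h>

enum { CTX_KERNEL = 0, CTX_USER = 1 };

static DEFINE_PER_CPU(int, ctx_state);

/* Syscall entry: a single store, no IRQ disable/enable dance. */
static inline void ctx_enter_kernel(void)
{
	smp_store_release(this_cpu_ptr(&ctx_state), CTX_KERNEL);
}

/* Return to user mode. */
static inline void ctx_exit_to_user(void)
{
	smp_store_release(this_cpu_ptr(&ctx_state), CTX_USER);
}

/* Remote observer: a CPU last seen in CTX_USER needs no IPI probe. */
static inline bool ctx_cpu_in_user(int cpu)
{
	return smp_load_acquire(per_cpu_ptr(&ctx_state, cpu)) == CTX_USER;
}
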
Thanks,
Ingo