Message-ID: <alpine.DEB.2.20.1611071753040.3709@nanos>
Date: Mon, 7 Nov 2016 17:55:47 +0100 (CET)
From: Thomas Gleixner <tglx@...utronix.de>
To: Chris Metcalf <cmetcalf@...lanox.com>
cc: Gilad Ben Yossef <giladb@...lanox.com>,
Steven Rostedt <rostedt@...dmis.org>,
Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Rik van Riel <riel@...hat.com>, Tejun Heo <tj@...nel.org>,
Frederic Weisbecker <fweisbec@...il.com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Christoph Lameter <cl@...ux.com>,
Viresh Kumar <viresh.kumar@...aro.org>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will.deacon@....com>,
Andy Lutomirski <luto@...capital.net>,
Daniel Lezcano <daniel.lezcano@...aro.org>,
Francis Giraldeau <francis.giraldeau@...il.com>,
Andi Kleen <andi@...stfloor.org>,
Arnd Bergmann <arnd@...db.de>, linux-kernel@...r.kernel.org
Subject: Re: task isolation discussion at Linux Plumbers
On Sat, 5 Nov 2016, Chris Metcalf wrote:
> == Remote statistics ==
>
> We discussed the possibility of remote statistics gathering, i.e. load
> average etc. The idea would be that we could have housekeeping
> core(s) periodically iterate over the nohz cores to load their rq
> remotely and do update_current etc. Presumably it should be possible
> for a single housekeeping core to handle doing this for all the
> nohz_full cores, as we only need to do it quite infrequently.
>
> Thomas suggested that this might be the last remaining thing that
> needed to be done to allow disabling the current behavior of falling
> back to a 1 Hz clock in nohz_full.
>
> I believe Thomas said he had a patch to do this already.
No, Rik was working on that.
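
To make the idea concrete, a rough sketch of such a housekeeping-side
update (not an actual patch: sched_tick_remote_cpu() and the work item
are made-up names, and the real accounting and locking need more care):

	static void remote_stats_work_fn(struct work_struct *work);
	static DECLARE_DELAYED_WORK(remote_stats_work, remote_stats_work_fn);

	/*
	 * Runs periodically on a housekeeping CPU and does the tick-time
	 * accounting on behalf of the nohz_full CPUs, so they no longer
	 * need the residual 1 Hz tick.
	 */
	static void remote_stats_work_fn(struct work_struct *work)
	{
		int cpu;

		for_each_cpu(cpu, tick_nohz_full_mask) {
			struct rq *rq = cpu_rq(cpu);

			raw_spin_lock_irq(&rq->lock);
			/*
			 * Fold this CPU into the global load average and
			 * charge the elapsed time to whatever is currently
			 * running there (hypothetical helper).
			 */
			sched_tick_remote_cpu(rq);
			raw_spin_unlock_irq(&rq->lock);
		}

		/*
		 * Infrequent on purpose: one housekeeping CPU can cover
		 * all nohz_full CPUs at this rate.
		 */
		schedule_delayed_work(&remote_stats_work, HZ);
	}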
> == Remote LRU cache drain ==
>
> One of the issues with task isolation currently is that the LRU cache
> drain must be done prior to entering userspace, but it requires
> interrupts enabled and thus can't be done atomically. My previous
> patch series have handled this by checking with interrupts disabled,
> but then looping around with interrupts enabled to try to drain the
> LRU pagevecs. Experimentally this works, but it's not provable that
> it terminates, which is worrisome. Andy suggested adding a percpu
> flag to disable creation of deferred work like LRU cache pages.
>
> Thomas suggested using an RT "local lock" to guard the LRU cache
> flush; he is planning on bringing the concept to mainline in any case.
> However, after some discussion we converged on simply using a spinlock
> to guard the appropriate resources. As a result, the
> lru_add_drain_all() code that currently queues work on each remote cpu
> to drain it can instead simply acquire the lock and drain it remotely.
> This means that a task isolation task no longer needs to worry about
> being interrupted by SMP function call IPIs, so we don't have to deal
> with this in the task isolation framework any more.
>
> I don't recall anyone else volunteering to tackle this, so I will plan
> to look at it. The patch to do that should be orthogonal to the
> revised task isolation patch series.
I offered to clean up the patch from RT. I'll do that in the next days.
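
To illustrate the scheme we converged on (sketch only; the RT tree really
wants a "local lock" here, lru_pvec_lock is a placeholder name, and
lru_add_drain_cpu() is the existing per-cpu drain helper in mm/swap.c):

	static DEFINE_PER_CPU(spinlock_t, lru_pvec_lock) =
		__SPIN_LOCK_UNLOCKED(lru_pvec_lock);

	/* Local drain: take our own CPU's lock around the pagevec flush. */
	void lru_add_drain(void)
	{
		int cpu = get_cpu();

		spin_lock(&per_cpu(lru_pvec_lock, cpu));
		lru_add_drain_cpu(cpu);
		spin_unlock(&per_cpu(lru_pvec_lock, cpu));
		put_cpu();
	}

	/*
	 * Remote drain: instead of queueing work on every CPU (which would
	 * IPI a task isolation CPU), take each CPU's lock and drain its
	 * pagevecs from the housekeeping side.
	 */
	void lru_add_drain_all(void)
	{
		int cpu;

		for_each_online_cpu(cpu) {
			spin_lock(&per_cpu(lru_pvec_lock, cpu));
			lru_add_drain_cpu(cpu);
			spin_unlock(&per_cpu(lru_pvec_lock, cpu));
		}
	}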
> == Missing oneshot_stopped callbacks ==
>
> I raised the issue that various clock_event_device sources don't
> always support oneshot_stopped, which can cause an additional
> final interrupt to occur after the timer infrastructure believes the
> interrupt has been stopped. I have patches to fix this for tile and
> arm64 in my patch series; Thomas volunteered to look at adding
> equivalent support for x86.
Right.
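
For reference, the missing hook is the set_state_oneshot_stopped()
callback in struct clock_event_device. A driver-side sketch (the
my_timer_* names and the hardware accessor are placeholders, not any
particular driver):

	/*
	 * Without this callback the hardware keeps its last programmed
	 * event armed, so one more spurious interrupt arrives after the
	 * timer core believes the device has been stopped.
	 */
	static int my_timer_set_oneshot_stopped(struct clock_event_device *evt)
	{
		my_timer_hw_disable(evt);	/* placeholder hw accessor */
		return 0;
	}

	static struct clock_event_device my_timer_clockevent = {
		.name				= "my-timer",
		.features			= CLOCK_EVT_FEAT_ONESHOT,
		.set_next_event			= my_timer_set_next_event,
		.set_state_shutdown		= my_timer_set_shutdown,
		.set_state_oneshot_stopped	= my_timer_set_oneshot_stopped,
	};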
Thanks,
tglx