Message-ID: <CALCETrX80akvpLNRQfJsDV560npSa33hSsUB5OYkAtnAn8R7Dg@mail.gmail.com>
Date: Tue, 30 Aug 2016 12:50:42 -0700
From: Andy Lutomirski <luto@...capital.net>
To: Chris Metcalf <cmetcalf@...lanox.com>
Cc: "linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Christoph Lameter <cl@...ux.com>,
Michal Hocko <mhocko@...e.com>,
Gilad Ben Yossef <giladb@...lanox.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux API <linux-api@...r.kernel.org>,
Viresh Kumar <viresh.kumar@...aro.org>,
Ingo Molnar <mingo@...nel.org>,
Steven Rostedt <rostedt@...dmis.org>,
Tejun Heo <tj@...nel.org>, Will Deacon <will.deacon@....com>,
Rik van Riel <riel@...hat.com>,
Frederic Weisbecker <fweisbec@...il.com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Catalin Marinas <catalin.marinas@....com>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH v15 04/13] task_isolation: add initial support
On Tue, Aug 30, 2016 at 12:37 PM, Chris Metcalf <cmetcalf@...lanox.com> wrote:
> On 8/30/2016 2:43 PM, Andy Lutomirski wrote:
>>
>> On Aug 30, 2016 10:02 AM, "Chris Metcalf" <cmetcalf@...lanox.com> wrote:
>>>
>>> On 8/30/2016 12:30 PM, Andy Lutomirski wrote:
>>>>
>>>> On Tue, Aug 30, 2016 at 8:32 AM, Chris Metcalf <cmetcalf@...lanox.com>
>>>> wrote:
>>>>>
>>>>> The basic idea is just that we don't want to be at risk from the
>>>>> dyntick getting enabled. Similarly, we don't want to be at risk of a
>>>>> later global IPI due to lru_add_drain stuff, for example. And, we may
>>>>> want to add additional stuff, like catching kernel TLB flushes and
>>>>> deferring them when a remote core is in userspace. To do all of this
>>>>> kind of stuff, we need to run in the return to user path so we are
>>>>> late enough to guarantee no further kernel things will happen to
>>>>> perturb our carefully-arranged isolation state that includes dyntick
>>>>> off, per-cpu lru cache empty, etc etc.
>>>>
>>>> None of the above should need to *loop*, though, AFAIK.
>>>
>>> Ordering is a problem, though.
>>>
>>> We really want to run task isolation last, so we can guarantee that
>>> all the isolation prerequisites are met (dynticks stopped, per-cpu lru
>>> cache empty, etc). But achieving that state can require enabling
>>> interrupts - most obviously if we have to schedule, e.g. for vmstat
>>> clearing or whatnot (see the cond_resched in refresh_cpu_vm_stats), or
>>> just while waiting for that last dyntick interrupt to occur. I'm also
>>> not sure that even something as simple as draining the per-cpu lru
>>> cache can be done holding interrupts disabled throughout - certainly
>>> there's a !SMP code path there that just re-enables interrupts
>>> unconditionally, which gives me pause.
>>>
>>> At any rate at that point you need to retest for signals, resched,
>>> etc, all as usual, and then you need to recheck the task isolation
>>> prerequisites once more.
>>>
>>> I may be missing something here, but it's really not obvious to me
>>> that there's a way to do this without having task isolation integrated
>>> into the usual return-to-userspace loop.
>>>
>> What if we did it the other way around: set a percpu flag saying
>> "going quiescent; disallow new deferred work", then finish all
>> existing work and return to userspace. Then, on the next entry, clear
>> that flag. With the flag set, vmstat would just flush anything that
>> it accumulates immediately, nothing would be added to the LRU list,
>> etc.
>
>
> This is an interesting idea!
>
> However, there are a number of implementation ideas that make me
> worry that it might be a trickier approach overall.
>
> First, "on the next entry" hides a world of hurt in four simple words.
> Some platforms (arm64 and tile, that I'm familiar with) have a common
> chunk of code that always runs on every entry to the kernel. It would
> not be too hard to poke at the assembly and make those platforms
> always run some task-isolation specific code on entry. But x86 scares
> me - there seem to be a whole lot of ways to get into the kernel, and
> I'm not convinced there is a lot of shared macrology or whatever that
> would make it straightforward to intercept all of them.
Just use the context tracking entry hook. It's 100% reliable. The
relevant x86 function is enter_from_user_mode(), but I would just hook
into user_exit() in the common code. (This code had better be
reliable, because context tracking depends on it, and, if context
tracking doesn't work on a given arch, then isolation isn't going to
work regardless.)
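
To make the flag idea concrete, here is a rough sketch of the handling
I'm imagining -- the flag and function names are made up here, nothing
from the actual patch:

#include <linux/percpu.h>
#include <linux/context_tracking.h>

/* Set while an isolated task is (or is about to be) in userspace. */
static DEFINE_PER_CPU(bool, task_isolation_quiescent);

/*
 * Hooked from user_exit() / enter_from_user_mode(), i.e. on kernel
 * entry: deferred per-cpu work is allowed again from here on.
 */
static inline void task_isolation_kernel_entry(void)
{
	this_cpu_write(task_isolation_quiescent, false);
}

/*
 * Called once, just before the final return to userspace: from here on,
 * subsystems must not accumulate deferred per-cpu work on this cpu.
 */
static inline void task_isolation_return_to_user(void)
{
	this_cpu_write(task_isolation_quiescent, true);
}

With something like that, a subsystem only needs to test the flag at the
point where it would otherwise defer work.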
>
> Then, there are the two actual subsystems in question. It looks like
> we could intercept LRU reasonably cleanly by hooking pagevec_add()
> to return zero when we are in this "going quiescent" mode, and that
> would keep the per-cpu vectors empty. The vmstat stuff is a little
> trickier since all the existing code is built around updating the per-cpu
> stuff and then only later copying it off to the global state. I suppose
> we could add a test-and-flush at the end of every public API and not
> worry about the implementation cost.
Seems reasonable to me. If anyone cares about the performance hit,
they can fix it.
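
Something along these lines is what I'd picture -- again just a sketch,
not a standalone file (these would be edits to mm/vmstat.c and
include/linux/pagevec.h); vmstat_quiesce_if_needed() is a made-up name
and quiet_vmstat() stands in for "fold this cpu's deltas into the global
counters":

/* Tail of every public vmstat update: flush immediately while quiescing. */
static inline void vmstat_quiesce_if_needed(void)
{
	if (this_cpu_read(task_isolation_quiescent))
		quiet_vmstat();
}

/*
 * pagevec_add(): claim the pagevec is full while quiescing, so callers
 * drain to the LRU lists right away and the per-cpu vectors stay empty.
 */
static inline unsigned pagevec_add(struct pagevec *pvec, struct page *page)
{
	pvec->pages[pvec->nr++] = page;
	if (this_cpu_read(task_isolation_quiescent))
		return 0;	/* "no space left" => caller drains now */
	return pagevec_space(pvec);
}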
>
> But it does seem like we are adding noticeable maintenance cost on
> the mainline kernel to support task isolation by doing this. My guess
> is that it is easier to support the kind of "are you clean?" / "get clean"
> APIs for subsystems, rather than weaving a whole set of "stay clean"
> mechanisms into each subsystem.
My intuition is that it's the other way around. For the mainline
kernel, having a nice clean well-integrated implementation is nicer
than having a bolted-on implementation that interacts in potentially
complicated ways. Once quiescence support is in mainline, the size of
the diff or the degree to which it's scattered around is irrelevant
because it's not a diff any more.
>
> So to pop up a level, what is your actual concern about the existing
> "do it in a loop" model? The macrology currently in use means there
> is zero cost if you don't configure TASK_ISOLATION, and the software
> maintenance cost seems low since the idioms used for task isolation
> in the loop are generally familiar to people reading that code.
My concern is that it's not obvious to readers of the code that the
loop ever terminates. It really ought to, but it's doing something
very odd. Normally we can loop because we get scheduled out, but
actually blocking in the return-to-userspace path, especially blocking
on a condition that doesn't have a wakeup associated with it, is odd.
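
For reference, this is roughly the shape of the loop I'm reacting to
(paraphrased from memory of the x86 side of the patch, so names and
details are approximate, not the literal code):

static void exit_to_usermode_loop(struct pt_regs *regs, u32 cached_flags)
{
	while ((cached_flags & EXIT_TO_USERMODE_LOOP_FLAGS) ||
	       (test_thread_flag(TIF_TASK_ISOLATION) &&
		!task_isolation_ready())) {
		local_irq_enable();

		if (cached_flags & _TIF_NEED_RESCHED)
			schedule();

		/* ... signals, notify-resume, etc ... */

		if (test_thread_flag(TIF_TASK_ISOLATION))
			task_isolation_enter();	/* may wait for the last
						 * dyntick, drain the LRU
						 * pagevecs, ... */

		local_irq_disable();
		cached_flags = READ_ONCE(current_thread_info()->flags);
	}
}

Nothing in there obviously guarantees forward progress; it only exits
once task_isolation_ready() happens to come back true.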
>
>> Also, this cond_resched stuff doesn't worry me too much at a
>> fundamental level -- if we're really going quiescent, shouldn't we be
>> able to arrange that there are no other schedulable tasks on the CPU
>> in question?
>
>
> We aren't currently planning to enforce things in the scheduler, so if
> the application affinitizes another task on top of an existing task
> isolation task, by default the task isolation task just dies. (Unless
> it's using NOSIG mode, in which case it just ends up stuck in the
> kernel trying to wait out the dyntick until you either kill it, or
> re-affinitize the offending task.) But I'm reluctant to guarantee
> every possible way that you might (perhaps briefly) have some
> schedulable task, and the current approach seems pretty robust if that
> sort of thing happens.
>
This kind of waiting out the dyntick scares me. Why is there ever a
dyntick that you're waiting out? If quiescence is to be a supported
mainline feature, shouldn't the scheduler be integrated well enough
with it that you don't need to wait like this?