Message-ID: <20150218152100.GZ5029@twins.programming.kicks-ass.net>
Date: Wed, 18 Feb 2015 16:21:00 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Josh Poimboeuf <jpoimboe@...hat.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Ingo Molnar <mingo@...hat.com>, Jiri Kosina <jkosina@...e.cz>,
Seth Jennings <sjenning@...hat.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/3] sched: add sched_task_call()
On Tue, Feb 17, 2015 at 03:25:32PM -0600, Josh Poimboeuf wrote:
> > And I'm assuming you're hard relying on CONFIG_FRAME_POINTER here,
> > because otherwise x86 stacks are a mess too.
>
> Yeah, it'll rely on CONFIG_FRAME_POINTER. IIUC, the arches we care
> about now (x86, power, s390, arm64) all have frame pointer support, and
> the stack formats are all sane, AFAICT.
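For reference, with frame pointers the x86-64 unwind really is that simple: each function prologue does "push %rbp; mov %rsp,%rbp", so the saved %rbp values form a linked list with the return address sitting just above each one. A minimal userspace sketch (not kernel code; x86-64 only, and it assumes the compiler keeps frame pointers, e.g. -O0 or -fno-omit-frame-pointer):

```c
#include <stdio.h>

/* Layout of one frame at the address in %rbp:
 * [rbp]   = caller's saved %rbp  (link to the next frame)
 * [rbp+8] = return address into the caller */
struct frame {
	struct frame *next;	/* saved %rbp of the caller */
	void *ret;		/* return address into the caller */
};

static int walk_stack(int max_frames)
{
	struct frame *fp = __builtin_frame_address(0);
	int depth = 0;

	while (fp && depth < max_frames) {
		printf("frame %d: return address %p\n", depth, fp->ret);
		fp = fp->next;
		depth++;
	}
	return depth;
}
```

Without frame pointers none of this holds, which is why the !FRAME_POINTER case degenerates into either DWARF unwinding or guesswork.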
>
> If we ever port livepatch to a more obscure arch for which walking the
> stack is more difficult, we'll have several options:
>
> a) spend the time to ensure the unwinding code is correct and resilient
> to errors;
>
> b) leave the consistency model compiled code out if !FRAME_POINTER and
> allow users to patch without one (similar to the livepatch code
> that's already in the livepatch tree today); or
Which has a much more limited set of patches it can do, right?
> c) not support that arch.
Which would be sad, of course.
> > And then hope you can get a better trace next time around? Or will you
> > fall-back to an alternative method of patching?
>
> Yeah, on second thought, we wouldn't have to cancel the patch. We could
> defer and check the task's stack again at a later time. If it's stuck
> there, the user can try sending it a signal to unstick it, or cancel the
> patching process. Those mechanisms are already in place with my
> consistency model patch set.
>
> I'd also do a WARN_ON_ONCE and a dump of the full stack data, since I'm
> guessing it would either indicate an error in the unwinding code or
> point us to an unexpected stack condition.
So uhm, what happens if your target task is running? When will you
retry? The problem I see is that with a sampling approach you might
never hit an opportune moment.
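To make the retry question concrete, here's a toy userspace model of the deferred approach Josh describes: periodically re-check each unconverged task, and only fall back to signalling or cancelling after some number of passes. All names and the "task advances each pass" behaviour are invented for illustration; this is not the actual patch set.

```c
#include <stdbool.h>

/* Hypothetical model: a task converges once no to-be-patched
 * function is found on its stack. */
struct task {
	const char *name;
	int unsafe_frames;	/* frames still inside patched functions */
};

static bool stack_is_safe(struct task *t)
{
	/* Stand-in for the real backtrace check: pretend the task has
	 * run further each pass, leaving one more patched frame. */
	if (t->unsafe_frames > 0)
		t->unsafe_frames--;
	return t->unsafe_frames == 0;
}

/* Returns the number of passes needed, or -1 if some task never
 * converged within max_passes (time to signal it or cancel). */
static int try_complete_transition(struct task *tasks, int n, int max_passes)
{
	for (int pass = 0; pass < max_passes; pass++) {
		int pending = 0;

		for (int i = 0; i < n; i++)
			if (!stack_is_safe(&tasks[i]))
				pending++;
		if (pending == 0)
			return pass + 1;	/* all tasks converged */
	}
	return -1;				/* still stuck */
}
```

The sampling worry is visible even in this toy: if a task's unsafe_frames never decreased (say, it sleeps forever inside a patched function), no number of passes helps, and the only exits are the signal or the cancel.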
> > I'm further thinking we don't actually need 2 (or more) different means
> > of live patching in the kernel. So you all had better sit down (again)
> > and come up with something you all agree on.
>
> Yeah, I also _really_ want to avoid multiple consistency models.
>
> In fact, that's a big motivation behind my consistency model patch set.
> It's heavily inspired by a suggestion from Vojtech [1]. It combines
> kpatch (backtrace checking) with kGraft (per-thread consistency). It
> has several advantages over either of the individual approaches. See
> http://lwn.net/Articles/632582/ where I describe its pros over both
> kpatch and kGraft.
>
> Jiri, would you and Vojtech agree that the proposed consistency model is
> all we need? Or do you still want to do the multiple consistency model
> thing?
Skimmed that thread; you all mostly seem to agree that one would be good
but not quite agree on which one.
And I note that not all of them seem to require this stack-walking stuff.
--