Message-ID: <CALCETrVKt8k11ewSOvGiCNsqgtD5cMaLix8Tf8JJakgodJeLyA@mail.gmail.com>
Date: Fri, 29 Apr 2016 13:32:53 -0700
From: Andy Lutomirski <luto@...capital.net>
To: Josh Poimboeuf <jpoimboe@...hat.com>
Cc: Jessica Yu <jeyu@...hat.com>, Jiri Kosina <jikos@...nel.org>,
Miroslav Benes <mbenes@...e.cz>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Michael Ellerman <mpe@...erman.id.au>,
Heiko Carstens <heiko.carstens@...ibm.com>,
live-patching@...r.kernel.org,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
X86 ML <x86@...nel.org>, linuxppc-dev@...ts.ozlabs.org,
"linux-s390@...r.kernel.org" <linux-s390@...r.kernel.org>,
Vojtech Pavlik <vojtech@...e.com>, Jiri Slaby <jslaby@...e.cz>,
Petr Mladek <pmladek@...e.com>,
Chris J Arges <chris.j.arges@...onical.com>,
Andy Lutomirski <luto@...nel.org>
Subject: Re: [RFC PATCH v2 05/18] sched: add task flag for preempt IRQ tracking
On Fri, Apr 29, 2016 at 1:27 PM, Josh Poimboeuf <jpoimboe@...hat.com> wrote:
> On Fri, Apr 29, 2016 at 01:19:23PM -0700, Andy Lutomirski wrote:
>> On Fri, Apr 29, 2016 at 1:11 PM, Josh Poimboeuf <jpoimboe@...hat.com> wrote:
>> > On Fri, Apr 29, 2016 at 11:06:53AM -0700, Andy Lutomirski wrote:
>> >> On Thu, Apr 28, 2016 at 1:44 PM, Josh Poimboeuf <jpoimboe@...hat.com> wrote:
>> >> > A preempted function might not have had a chance to save the frame
>> >> > pointer to the stack yet, which can result in its caller getting skipped
>> >> > on a stack trace.
>> >> >
>> >> > Add a flag to indicate when the task has been preempted so that stack
>> >> > dump code can determine whether the stack trace is reliable.
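For concreteness, the flag approach described in the quoted patch
boils down to something like the sketch below; the flag name, its
value, and the helper placement are illustrative guesses, not the
patch's actual code:

#include <linux/sched.h>

/* Made-up flag name and value, for illustration only. */
#define TIF_PREEMPTED           20

static void mark_preempted(struct task_struct *task)
{
        /* Called on the involuntary-preemption path before switching out. */
        set_tsk_thread_flag(task, TIF_PREEMPTED);
}

static bool stack_trace_reliable(struct task_struct *task)
{
        /*
         * A preempted task may have been stopped before its current
         * function saved the frame pointer, so the caller could be
         * missing from a frame-pointer walk.
         */
        return !test_tsk_thread_flag(task, TIF_PREEMPTED);
}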
>> >>
>> >> I think I like this, but how do you handle the rather similar case in
>> >> which a task goes to sleep because it's waiting on IO that happened in
>> >> response to get_user, put_user, copy_from_user, etc?
>> >
>> > Hm, good question. I was thinking that page faults had a dedicated
>> > stack, but now looking at the entry and traps code, that doesn't seem to
>> > be the case.
>> >
>> > Anyway I think it shouldn't be a problem if we make sure that any kernel
>> > function which might trigger a valid page fault (e.g.,
>> > copy_user_generic_string) does the proper frame pointer setup first. Then
>> > the stack should still be reliable.
>> >
>> > In fact I might be able to teach objtool to enforce that: any function
>> > which uses an exception table should create a stack frame.
>> >
>> > Or alternatively, maybe set some kind of flag for page faults, similar
>> > to what I did with this patch.
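The "flag for page faults" alternative could look roughly like this;
again the flag name and value are made up, and where exactly it would
be set and cleared in the real fault path is precisely what's being
discussed:

#include <linux/ptrace.h>
#include <linux/sched.h>

#define TIF_KERNEL_FAULT        21      /* made-up name and value */

static void note_kernel_fault(struct pt_regs *regs)
{
        /* Only kernel-mode faults can interrupt an unfinished prologue. */
        if (!user_mode(regs))
                set_tsk_thread_flag(current, TIF_KERNEL_FAULT);
}

static void clear_kernel_fault(struct pt_regs *regs)
{
        if (!user_mode(regs))
                clear_tsk_thread_flag(current, TIF_KERNEL_FAULT);
}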
>> >
>>
>> How about doing it the other way around: teach the unwinder to detect
>> when it hits a non-outermost entry (i.e. it lands in idtentry, etc)
>> and use some reasonable heuristic as to whether it's okay to keep
>> unwinding. You should be able to handle preemption like that, too --
>> the unwind process will end up in an IRQ frame.
>
> How exactly would the unwinder detect if a text address is in an
> idtentry? Maybe put all the idt entries in a special ELF section?
>
Hmm.

What actually happens when you unwind all the way into the entry code?
Don't you end up in something that isn't in an ELF function? Can you
detect that? Ideally, the unwinder could actually detect that it's
hit a pt_regs struct and report that. If used for stack dumps, it
could display some indication of this and then continue its unwinding
by decoding the pt_regs. If used for patching, it could take some
other appropriate action.
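To sketch what "detect that it hit a pt_regs and keep going by
decoding it" might look like (unwind_state and the function name here
are illustrative, not an existing kernel API):

#include <linux/printk.h>
#include <linux/ptrace.h>

struct unwind_state {
        unsigned long ip;
        unsigned long sp;
        unsigned long bp;
};

static bool unwind_through_pt_regs(struct unwind_state *state,
                                   struct pt_regs *regs)
{
        /* For a stack dump: show that an entry frame was crossed. */
        pr_info(" <entry regs, resuming at %pS>\n", (void *)regs->ip);

        /* Resume the walk from the interrupted context. */
        state->ip = regs->ip;
        state->sp = regs->sp;
        state->bp = regs->bp;

        /* A live-patching consumer would instead mark the trace unreliable. */
        return true;
}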
I would have no objection to annotating all the pt_regs-style entry
code, whether by putting it in a separate section or by making a table
of addresses.
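If I remember right, x86 already collects its entry code into the
.entry.text section, bounded by __entry_text_start/__entry_text_end
from asm-generic/sections.h, so a simple range check could answer "did
the unwinder land in entry code?"; a finer-grained table of
pt_regs-producing entry points would be used the same way:

#include <asm-generic/sections.h>

static bool in_entry_code(unsigned long ip)
{
        return ip >= (unsigned long)__entry_text_start &&
               ip <  (unsigned long)__entry_text_end;
}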
There are a couple of nasty cases if NMI or MCE is involved, but as of
4.6, outside of NMI, MCE, and vmalloc faults (ugh!), each entry should
have a complete pt_regs on the stack before interrupts get enabled. Of
course, finding the thing may be nontrivial if other things were
pushed on top. I suppose we could try to rejigger the code so that rbp
points to pt_regs or similar.
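One possible shape of the "rbp points to pt_regs" idea: have the entry
code store pt_regs' address with a tag bit set where a saved rbp would
normally go, so the unwinder can both recognize the frame and locate
the registers. The tag below is purely illustrative:

#include <linux/ptrace.h>

/* Real frame pointers are at least 8-byte aligned, so bit 0 is free. */
#define ENTRY_REGS_TAG  0x1UL

static struct pt_regs *decode_entry_frame(unsigned long bp)
{
        if (!(bp & ENTRY_REGS_TAG))
                return NULL;            /* ordinary saved rbp */

        return (struct pt_regs *)(bp & ~ENTRY_REGS_TAG);
}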
--Andy