Message-ID: <CAJWu+oqzigK-23ZkY3aSVf5qFDZoGeL2vVzAKDfEDsVyfREcCw@mail.gmail.com>
Date: Fri, 6 Oct 2017 10:27:08 -0700
From: Joel Fernandes <joelaf@...gle.com>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
LKML <linux-kernel@...r.kernel.org>, kernel-team@...roid.com,
Ingo Molnar <mingo@...nel.org>
Subject: Re: [PATCH v7 2/2] tracing: Add support for preempt and irq enable/disable events
Hi Steve,
On Fri, Oct 6, 2017 at 6:38 AM, Steven Rostedt <rostedt@...dmis.org> wrote:
> On Fri, 6 Oct 2017 00:28:21 -0700
> Joel Fernandes <joelaf@...gle.com> wrote:
>
>
>> Oh ok. So I traced this down to the original patch that added
>> time_hardirqs_off to lockdep. I *think* it was added just to keep the
>> irqsoff tracer working while lockdep is enabled, so that both lockdep
>> and the tracer would work at the same time. So I guess nobody noticed
>> any issues because nobody cared to check whether the irqsoff tracer
>> showed the same results with lockdep enabled vs. disabled.
>
> As one of the main users of preemptirqsoff tracer, I can tell you that
> we didn't care ;-)
Hahah :-)
> Yes, it was added to keep them both working. It's dangerous to use
> CALLER_ADDR1, as lockdep can be called by the top frame, and ADDR1
> would go off the stack frame and cause a fault. I just put in ADDR0
> as filler as it required something to be added.
Ok. Then maybe I wrecked lockdep a bit in my patch from last night
that added tracepoint probes for it onto the new tracepoints [1],
although it booted fine without any warnings. Considering your above
comment, do you see this as an issue?
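For concreteness, here is my rough mental model of the pieces in play.
This is a hand-written sketch (names approximated from ftrace.h and
lockdep as I remember them), not the exact kernel code:

/*
 * Sketch only: how the caller-address macros and the lockdep ->
 * irqsoff-tracer hook fit together, as far as I understand it.
 */

/* GCC builtins: return address N frames up the current stack. */
#define CALLER_ADDR0 ((unsigned long)__builtin_return_address(0))
#define CALLER_ADDR1 ((unsigned long)__builtin_return_address(1))

/* Implemented by the irqsoff tracer (a stub when it is not enabled). */
void time_hardirqs_off(unsigned long a0, unsigned long a1);

/*
 * Rough shape of the lockdep entry point called when interrupts are
 * disabled.  If this is the top frame, walking one level deeper with
 * __builtin_return_address(1) can run off the stack and fault -- which
 * is why ADDR0 ends up being passed as filler instead of a real ADDR1.
 */
void trace_hardirqs_off_caller(unsigned long ip)
{
	time_hardirqs_off(CALLER_ADDR0, ip);

	/* ... lockdep's own hardirq-state bookkeeping follows ... */
}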
Initially I didn't mess with lockdep, but then Peter told me he felt
it's weird that PROVE_LOCKING makes the tracepoints unavailable. That
led me to the probes patch (which includes all the lessons learned
from my v8 series [2]).
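In case it helps review, [1] boils down to something like the sketch
below: the new irq on/off tracepoints carry (ip, parent_ip), and
lockdep / the tracers attach ordinary tracepoint probes to them. Treat
the header and event names as approximations of what is in the patch,
not a verbatim excerpt:

#include <linux/init.h>
#include <linux/tracepoint.h>
#include <trace/events/preemptirq.h>	/* tracepoints added by the patch */

/*
 * A probe receives the tracepoint's TP_PROTO arguments (here: ip and
 * parent_ip) preceded by the private data pointer that was passed at
 * registration time.
 */
static void probe_irq_disable(void *data, unsigned long ip,
			      unsigned long parent_ip)
{
	/* feed lockdep and/or the irqsoff tracer from here */
}

static int __init register_preemptirq_probes(void)
{
	/* register_trace_<event>() is generated by the tracepoint macros */
	return register_trace_irq_disable(probe_irq_disable, NULL);
}
early_initcall(register_preemptirq_probes);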
(Steve, btw I think the PATCH v8 series [2] can be a first step and a
safe solution for 4.15 while I work on the probe approach [1], but I
leave it to you how you want to time it :-)).
> We added a stack dump on new max latencies to figure out where the
> problem occurred.
Got it, that makes sense. Thanks a lot for the history behind it.
thanks,
- Joel
[1] https://lkml.org/lkml/2017/10/6/106
[2] https://lkml.org/lkml/2017/10/5/896
[...]