Message-ID: <20190425132935.ae35l5oybby5ddgl@treble>
Date: Thu, 25 Apr 2019 08:29:35 -0500
From: Josh Poimboeuf <jpoimboe@...hat.com>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: LKML <linux-kernel@...r.kernel.org>, x86@...nel.org,
Andy Lutomirski <luto@...nel.org>,
Steven Rostedt <rostedt@...dmis.org>,
Alexander Potapenko <glider@...gle.com>,
Alexey Dobriyan <adobriyan@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>, linux-mm@...ck.org,
David Rientjes <rientjes@...gle.com>,
Catalin Marinas <catalin.marinas@....com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Andrey Ryabinin <aryabinin@...tuozzo.com>,
kasan-dev@...glegroups.com,
Mike Rapoport <rppt@...ux.vnet.ibm.com>,
Akinobu Mita <akinobu.mita@...il.com>,
Christoph Hellwig <hch@....de>,
iommu@...ts.linux-foundation.org,
Robin Murphy <robin.murphy@....com>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Johannes Thumshirn <jthumshirn@...e.de>,
David Sterba <dsterba@...e.com>, Chris Mason <clm@...com>,
Josef Bacik <josef@...icpanda.com>,
linux-btrfs@...r.kernel.org, dm-devel@...hat.com,
Mike Snitzer <snitzer@...hat.com>,
Alasdair Kergon <agk@...hat.com>,
Daniel Vetter <daniel@...ll.ch>,
intel-gfx@...ts.freedesktop.org,
Joonas Lahtinen <joonas.lahtinen@...ux.intel.com>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
dri-devel@...ts.freedesktop.org, David Airlie <airlied@...ux.ie>,
Jani Nikula <jani.nikula@...ux.intel.com>,
Rodrigo Vivi <rodrigo.vivi@...el.com>,
Tom Zanussi <tom.zanussi@...ux.intel.com>,
Miroslav Benes <mbenes@...e.cz>, linux-arch@...r.kernel.org
Subject: Re: [patch V3 21/29] tracing: Use percpu stack trace buffer more
intelligently
On Thu, Apr 25, 2019 at 11:45:14AM +0200, Thomas Gleixner wrote:
> @@ -2788,29 +2798,32 @@ static void __ftrace_trace_stack(struct
> */
> preempt_disable_notrace();
>
> - use_stack = __this_cpu_inc_return(ftrace_stack_reserve);
> + stackidx = __this_cpu_inc_return(ftrace_stack_reserve);
> +
> + /* This should never happen. If it does, yell once and skip */
> + if (WARN_ON_ONCE(stackidx >= FTRACE_KSTACK_NESTING))
> + goto out;
> +
> /*
> - * We don't need any atomic variables, just a barrier.
> - * If an interrupt comes in, we don't care, because it would
> - * have exited and put the counter back to what we want.
> - * We just need a barrier to keep gcc from moving things
> - * around.
> + * The above __this_cpu_inc_return() is 'atomic' cpu local. An
> + * interrupt will either see the value pre increment or post
> + * increment. If the interrupt happens pre increment it will have
> + * restored the counter when it returns. We just need a barrier to
> + * keep gcc from moving things around.
> */
> barrier();
> - if (use_stack == 1) {
> - trace.entries = this_cpu_ptr(ftrace_stack.calls);
> - trace.max_entries = FTRACE_STACK_MAX_ENTRIES;
> -
> - if (regs)
> - save_stack_trace_regs(regs, &trace);
> - else
> - save_stack_trace(&trace);
> -
> - if (trace.nr_entries > size)
> - size = trace.nr_entries;
> - } else
> - /* From now on, use_stack is a boolean */
> - use_stack = 0;
> +
> + fstack = this_cpu_ptr(ftrace_stacks.stacks) + (stackidx - 1);
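(To make the scheme in the hunk above easier to follow, here is a rough,
self-contained C sketch of the same nesting-reservation pattern. All names
here (my_stacks, my_reserve, MY_NESTING, ...) are made up for illustration;
the real code uses a per-CPU counter and per-CPU buffers with preemption
disabled, which is what makes the plain increment safe.)

#include <stdio.h>

#define MY_NESTING	4	/* task, softirq, hardirq, NMI */
#define MY_ENTRIES	64

struct my_stack {
	unsigned long calls[MY_ENTRIES];
};

/* One buffer per nesting level; the real thing keeps these per-CPU. */
static struct my_stack my_stacks[MY_NESTING];
static int my_reserve;

static void trace_one_stack(void)
{
	int stackidx;
	struct my_stack *fstack;

	/* In the kernel: preempt_disable_notrace() here. */
	stackidx = ++my_reserve;	/* __this_cpu_inc_return() is 1-based */

	/* Should never happen: nesting deeper than the four contexts. */
	if (stackidx > MY_NESTING)
		goto out;

	/* barrier() in the kernel keeps the compiler from reordering. */
	fstack = &my_stacks[stackidx - 1];	/* 1-based count -> 0-based slot */
	fstack->calls[0] = 0;			/* stand-in for saving the trace */
	printf("reserved nesting slot %d\n", stackidx - 1);
out:
	my_reserve--;				/* release the reservation */
	/* In the kernel: preempt_enable_notrace() here. */
}

int main(void)
{
	trace_one_stack();	/* outermost "task" context */
	return 0;
}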
nit: it would be slightly less surprising if stackidx were 0-based:
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index d3f6ec7eb729..4fc93004feab 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2798,10 +2798,10 @@ static void __ftrace_trace_stack(struct ring_buffer *buffer,
*/
preempt_disable_notrace();
- stackidx = __this_cpu_inc_return(ftrace_stack_reserve);
+ stackidx = __this_cpu_inc_return(ftrace_stack_reserve) - 1;
/* This should never happen. If it does, yell once and skip */
if (WARN_ON_ONCE(stackidx >= FTRACE_KSTACK_NESTING))
goto out;
/*
@@ -2813,7 +2813,7 @@ static void __ftrace_trace_stack(struct ring_buffer *buffer,
*/
barrier();
- fstack = this_cpu_ptr(ftrace_stacks.stacks) + (stackidx - 1);
+ fstack = this_cpu_ptr(ftrace_stacks.stacks) + stackidx;
trace.entries = fstack->calls;
trace.max_entries = FTRACE_KSTACK_ENTRIES;
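(Making the 0-based variant concrete: with
stackidx = __this_cpu_inc_return(ftrace_stack_reserve) - 1 the valid values
are 0 .. FTRACE_KSTACK_NESTING - 1, so the existing ">=" bound check above
still rejects exactly the overflow case. A tiny standalone demo, again with
made-up names rather than kernel code:)

#include <assert.h>

#define NESTING 4	/* stand-in for FTRACE_KSTACK_NESTING */

int main(void)
{
	int slots[NESTING];
	int counter;

	/* counter is what the 1-based increment would return: 1 .. NESTING + 1 */
	for (counter = 1; counter <= NESTING + 1; counter++) {
		int stackidx = counter - 1;	/* 0-based, as suggested above */

		if (stackidx >= NESTING)	/* ">" alone would let NESTING slip through */
			continue;		/* the "goto out" path */

		assert(stackidx >= 0 && stackidx < NESTING);
		slots[stackidx] = counter;	/* always in bounds */
	}
	return 0;
}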