Message-ID: <20190418105334.5093528d@gandalf.local.home>
Date: Thu, 18 Apr 2019 10:53:34 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: LKML <linux-kernel@...r.kernel.org>,
Josh Poimboeuf <jpoimboe@...hat.com>, x86@...nel.org,
Andy Lutomirski <luto@...nel.org>,
Alexander Potapenko <glider@...gle.com>,
Alexey Dobriyan <adobriyan@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Pekka Enberg <penberg@...nel.org>, linux-mm@...ck.org,
David Rientjes <rientjes@...gle.com>,
Christoph Lameter <cl@...ux.com>,
Catalin Marinas <catalin.marinas@....com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Andrey Ryabinin <aryabinin@...tuozzo.com>,
kasan-dev@...glegroups.com,
Mike Rapoport <rppt@...ux.vnet.ibm.com>,
Akinobu Mita <akinobu.mita@...il.com>,
iommu@...ts.linux-foundation.org,
Robin Murphy <robin.murphy@....com>,
Christoph Hellwig <hch@....de>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Johannes Thumshirn <jthumshirn@...e.de>,
David Sterba <dsterba@...e.com>, Chris Mason <clm@...com>,
Josef Bacik <josef@...icpanda.com>,
linux-btrfs@...r.kernel.org, dm-devel@...hat.com,
Mike Snitzer <snitzer@...hat.com>,
Alasdair Kergon <agk@...hat.com>,
intel-gfx@...ts.freedesktop.org,
Joonas Lahtinen <joonas.lahtinen@...ux.intel.com>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
dri-devel@...ts.freedesktop.org, David Airlie <airlied@...ux.ie>,
Jani Nikula <jani.nikula@...ux.intel.com>,
Daniel Vetter <daniel@...ll.ch>,
Rodrigo Vivi <rodrigo.vivi@...el.com>,
linux-arch@...r.kernel.org
Subject: Re: [patch V2 21/29] tracing: Use percpu stack trace buffer more intelligently
On Thu, 18 Apr 2019 10:41:40 +0200
Thomas Gleixner <tglx@...utronix.de> wrote:
> The per cpu stack trace buffer usage pattern is odd at best. The buffer has
> room for 512 stack trace entries on 64-bit and 1024 on 32-bit. When
> interrupts or exceptions nest after the per cpu buffer has been acquired,
> the stacktrace length is hardcoded to 8 entries. 512/1024 stack trace
> entries in kernel stacks are unrealistic, so the buffer is a complete waste.
>
> Split the buffer into chunks of 64 stack entries, which is plenty. This
> allows nesting contexts (interrupts, exceptions) to utilize the cpu buffer
> for stack retrieval and avoids the fixed length allocation along with the
> conditional execution paths.
>
> Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
> Cc: Steven Rostedt <rostedt@...dmis.org>
> ---
> kernel/trace/trace.c | 77 +++++++++++++++++++++++++--------------------------
> 1 file changed, 39 insertions(+), 38 deletions(-)
>
> --- a/kernel/trace/trace.c
> +++ b/kernel/trace/trace.c
> @@ -2749,12 +2749,21 @@ trace_function(struct trace_array *tr,
>
> #ifdef CONFIG_STACKTRACE
>
> -#define FTRACE_STACK_MAX_ENTRIES (PAGE_SIZE / sizeof(unsigned long))
> +/* 64 entries for kernel stacks are plenty */
> +#define FTRACE_KSTACK_ENTRIES 64
> +
> struct ftrace_stack {
> - unsigned long calls[FTRACE_STACK_MAX_ENTRIES];
> + unsigned long calls[FTRACE_KSTACK_ENTRIES];
> };
>
> -static DEFINE_PER_CPU(struct ftrace_stack, ftrace_stack);
> +/* This allows 8 level nesting which is plenty */
Can we make this 4-level nesting and increase the chunk size? (I can see us
going more than 64 deep; kernel developers never cease to amaze me ;-)
That's all we need:
Context: Normal, softirq, irq, NMI
Is there any other way to nest?
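
Something like this, perhaps (untested sketch; the resulting chunk size is
just what falls out of dividing the page, not taken from the patch):

#define FTRACE_KSTACK_NESTING	4

/*
 * With 4 nesting levels, each context gets PAGE_SIZE/4 bytes,
 * i.e. 128 entries on 64-bit and 256 on 32-bit.
 */
#define FTRACE_KSTACK_ENTRIES	\
	(PAGE_SIZE / FTRACE_KSTACK_NESTING / sizeof(unsigned long))

struct ftrace_stack {
	unsigned long calls[FTRACE_KSTACK_ENTRIES];
};
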
-- Steve
> +#define FTRACE_KSTACK_NESTING (PAGE_SIZE / sizeof(struct ftrace_stack))
> +
> +struct ftrace_stacks {
> + struct ftrace_stack stacks[FTRACE_KSTACK_NESTING];
> +};
> +
> +static DEFINE_PER_CPU(struct ftrace_stacks, ftrace_stacks);
> static DEFINE_PER_CPU(int, ftrace_stack_reserve);
>
>