Message-ID: <20160421151707.GB3677@kernel.org>
Date: Thu, 21 Apr 2016 12:17:07 -0300
From: Arnaldo Carvalho de Melo <acme@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Frederic Weisbecker <fweisbec@...il.com>,
Ingo Molnar <mingo@...nel.org>,
Adrian Hunter <adrian.hunter@...el.com>,
Brendan Gregg <brendan.d.gregg@...il.com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Alexei Starovoitov <ast@...nel.org>,
David Ahern <dsahern@...il.com>, He Kuang <hekuang@...wei.com>,
Jiri Olsa <jolsa@...hat.com>,
Masami Hiramatsu <mhiramat@...nel.org>,
Milian Wolff <milian.wolff@...b.com>,
Namhyung Kim <namhyung@...nel.org>,
Stephane Eranian <eranian@...gle.com>,
Thomas Gleixner <tglx@...utronix.de>,
Vince Weaver <vincent.weaver@...ne.edu>,
Wang Nan <wangnan0@...wei.com>, Zefan Li <lizefan@...wei.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH/RFC] perf core: Allow setting up max frame stack depth
via sysctl
On Thu, Apr 21, 2016 at 12:48:58PM +0200, Peter Zijlstra wrote:
> On Wed, Apr 20, 2016 at 07:47:30PM -0300, Arnaldo Carvalho de Melo wrote:
> > The default remains 127, which is good for most cases and often not even
> > hit, but in some cases, as reported by Brendan, 1024+ deep frames are
> > appearing on the radar for things like groovy and ruby.
> yea gawds ;-)
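(Back of the envelope, if I'm not botching the arithmetic: an entry is one
u64 counter plus one u64 per frame, so 127 frames is ~1KB per context,
1024 frames ~8KB, and with PERF_NR_CONTEXTS (4, IIRC) that is still only
~32KB per CPU; the really large values are another story, more below.)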
> > +++ b/kernel/events/callchain.c
> > @@ -73,7 +81,7 @@ static int alloc_callchain_buffers(void)
> > if (!entries)
> > return -ENOMEM;
> >
> > - size = sizeof(struct perf_callchain_entry) * PERF_NR_CONTEXTS;
> > + size = perf_callchain_entry__sizeof() * PERF_NR_CONTEXTS;
> >
> > for_each_possible_cpu(cpu) {
> > entries->cpu_entries[cpu] = kmalloc_node(size, GFP_KERNEL,
>
> And this alloc _will_ fail if you put in a decent sized value..
>
> Should we put in a dmesg WARN if this alloc fails and
> perf_event_max_stack is 'large' ?
Unsure; it already returns -ENOMEM, see some lines above, so whatever sits
further up this, ho-hum, call chain had better handle that error. I'm checking...
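BTW, the helper isn't in the hunk above, but I imagine it is something like
this (guessing here, assuming the entry's ip[] array is now sized at run
time from the sysctl rather than from PERF_MAX_STACK_DEPTH):

    /*
     * Guesswork, not quoted in the hunk above: one entry would be the
     * header (nr) plus one u64 slot per frame we may store, taken from
     * the sysctl instead of the compile-time maximum.
     */
    static inline size_t perf_callchain_entry__sizeof(void)
    {
            return sizeof(struct perf_callchain_entry) +
                   sizeof(__u64) * sysctl_perf_event_max_stack;
    }

With the default of 127 that keeps the per-CPU kmalloc_node() above at a
few KB, but something like 128k frames would mean ~1MB per context, ~4MB
per CPU, and multi-megabyte kmallocs are exactly the ones that like to fail.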
> > @@ -215,3 +223,25 @@ exit_put:
> >
> > return entry;
> > }
> > +
> > +int perf_event_max_stack_handler(struct ctl_table *table, int write,
> > + void __user *buffer, size_t *lenp, loff_t *ppos)
> > +{
> > + int new_value = sysctl_perf_event_max_stack, ret;
> > + struct ctl_table new_table = *table;
> > +
> > + new_table.data = &new_value;
> cute :-)
Hey, I found it in sysctl_schedstats() and sysctl_numa_balancing(), as a
way to read the value but only make it take effect if some condition
holds (nr_callchain_events == 0 in this case). Granted, it could be
better, less clever, but I'll leave that for later ;-)
> > + ret = proc_dointvec_minmax(&new_table, write, buffer, lenp, ppos);
> > + if (ret || !write)
> > + return ret;
> > +
> > + mutex_lock(&callchain_mutex);
> > + if (atomic_read(&nr_callchain_events))
> > + ret = -EBUSY;
> > + else
> > + sysctl_perf_event_max_stack = new_value;
> > +
> > + mutex_unlock(&callchain_mutex);
> > +
> > + return ret;
> > +}
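For completeness, the kernel/sysctl.c side isn't quoted here; I'd expect
the entry to be roughly along these lines, with the lower bound in extra1
being my guess (assuming the usual 'static int zero;' in that file), not
something taken from the patch:

    /*
     * Sketch of the expected kernel/sysctl.c entry; proc_handler is the
     * function above, the extra1 bound is a guess.
     */
    {
            .procname       = "perf_event_max_stack",
            .data           = &sysctl_perf_event_max_stack,
            .maxlen         = sizeof(sysctl_perf_event_max_stack),
            .mode           = 0644,
            .proc_handler   = perf_event_max_stack_handler,
            .extra1         = &zero,
    },

With that in place, 'echo 1024 > /proc/sys/kernel/perf_event_max_stack'
(or sysctl kernel.perf_event_max_stack=1024) would take effect only while
no callchain users exist; otherwise the handler above returns -EBUSY.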