Message-ID: <20200608104424.10781990@gandalf.local.home>
Date: Mon, 8 Jun 2020 10:44:24 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Qais Yousef <qais.yousef@....com>
Cc: Vincent Guittot <vincent.guittot@...aro.org>,
Mel Gorman <mgorman@...e.de>,
Patrick Bellasi <patrick.bellasi@...bug.net>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Randy Dunlap <rdunlap@...radead.org>,
Jonathan Corbet <corbet@....net>,
Juri Lelli <juri.lelli@...hat.com>,
Ben Segall <bsegall@...gle.com>,
Luis Chamberlain <mcgrof@...nel.org>,
Kees Cook <keescook@...omium.org>,
Iurii Zaikin <yzaikin@...gle.com>,
Quentin Perret <qperret@...gle.com>,
Valentin Schneider <valentin.schneider@....com>,
Pavan Kondeti <pkondeti@...eaurora.org>,
linux-doc@...r.kernel.org,
linux-kernel <linux-kernel@...r.kernel.org>,
linux-fs <linux-fsdevel@...r.kernel.org>
Subject: Re: [PATCH 1/2] sched/uclamp: Add a new sysctl to control RT
default boost value
On Mon, 8 Jun 2020 13:31:03 +0100
Qais Yousef <qais.yousef@....com> wrote:
> I admit I don't know how much of these numbers is ftrace overhead. When trying
Note, if you want to get a better idea of how long a function runs, put it
into set_ftrace_filter, and then trace it. That way you remove the overhead
the function graph tracer adds when it nests within the function being
measured.
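Something along these lines (a rough sketch; assumes tracefs is mounted at
/sys/kernel/tracing, and dequeue_task_fair() is just an example pick):

  # cd /sys/kernel/tracing
  # echo dequeue_task_fair > set_ftrace_filter
  # echo function_graph > current_tracer
  # echo 1 > tracing_on
  # ... run the test ...
  # echo 0 > tracing_on
  # cat trace

With only that one function in the filter, its reported time no longer
includes the entry/exit probes for everything it calls.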
> to capture similar runs for uclamp, the numbers didn't add up compared to
> running the test without ftrace generating the graph. If juno is suffering from
> bad branching costs in this path, then I suspect ftrace will amplify this as
> AFAIU it'll cause extra jumps on entry and exit.
>
>
>
> sched-pipe-6532 [001] 9407.276302: funcgraph_entry: | deactivate_task() {
> sched-pipe-6532 [001] 9407.276302: funcgraph_entry: | dequeue_task_fair() {
> sched-pipe-6532 [001] 9407.276303: funcgraph_entry: | update_curr() {
> sched-pipe-6532 [001] 9407.276304: funcgraph_entry: 0.780 us | update_min_vruntime();
> sched-pipe-6532 [001] 9407.276306: funcgraph_entry: | cpuacct_charge() {
> sched-pipe-6532 [001] 9407.276306: funcgraph_entry: 0.820 us | __rcu_read_lock();
> sched-pipe-6532 [001] 9407.276308: funcgraph_entry: 0.740 us | __rcu_read_unlock();
The above is more accurate than...
> sched-pipe-6532 [001] 9407.276309: funcgraph_exit: 3.980 us | }
this one, because this one includes the nested tracing overhead.
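For example, taking the numbers above (rough arithmetic, just to
illustrate the point):

  3.980 us - (0.820 us + 0.740 us) ~= 2.4 us

Only ~1.56 us of the 3.980 us reported for cpuacct_charge() is the
measured time of the two rcu calls themselves; some of the remaining
~2.4 us is the cost of the nested entry/exit probes rather than
cpuacct_charge()'s own work.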
-- Steve
> sched-pipe-6532 [001] 9407.276310: funcgraph_entry: 0.720 us | __rcu_read_lock();
> sched-pipe-6532 [001] 9407.276312: funcgraph_entry: 0.720 us | __rcu_read_unlock();
> sched-pipe-6532 [001] 9407.276313: funcgraph_exit: 9.840 us | }
> sched-pipe-6532 [001] 9407.276314: funcgraph_entry: | __update_load_avg_se() {
> sched-pipe-6532 [001] 9407.276315: funcgraph_entry: 0.720 us | __accumulate_pelt_segments();
> sched-pipe-6532 [001] 9407.276316: funcgraph_exit: 2.260 us | }
> sched-pipe-6532 [001] 9407.276317: funcgraph_entry: | __update_load_avg_cfs_rq() {
> sched-pipe-6532 [001] 9407.276318: funcgraph_entry: 0.860 us | __accumulate_pelt_segments();
> sched-pipe-6532 [001] 9407.276319: funcgraph_exit: 2.340 us | }