Message-Id: <9D639468-2A70-4620-8BF5-C8B2FBB38A99@gmail.com>
Date: Fri, 5 Aug 2016 21:33:25 +0530
From: Janani Ravichandran <janani.rvchndrn@...il.com>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org, riel@...riel.com,
akpm@...ux-foundation.org, hannes@...pxchg.org,
vdavydov@...tuozzo.com, mhocko@...e.com, vbabka@...e.cz,
mgorman@...hsingularity.net, kirill.shutemov@...ux.intel.com,
bywxiaobai@....com
Subject: Re: [PATCH 1/2] mm: page_alloc.c: Add tracepoints for slowpath
> On Aug 4, 2016, at 8:49 PM, Steven Rostedt <rostedt@...dmis.org> wrote:
>
> On Fri, 29 Jul 2016 01:41:20 +0530
> Janani Ravichandran <janani.rvchndrn@...il.com> wrote:
>
> Sorry for the late reply, I've been swamped with other things since
> coming back from my vacation.
>
No problem!
>
> Hmm, looking at the code, it appears setting tracing_thresh should
> work. Could you show me exactly what you did?
>
Sure. I wanted to observe how long it took to allocate pages and also how
long functions in the direct reclaim path took.
So I did:
echo function_graph > current_tracer
echo __alloc_pages_nodemask > set_graph_function
echo __alloc_pages_nodemask >> set_ftrace_filter
echo do_try_to_free_pages >> set_ftrace_filter
echo shrink_zone >> set_ftrace_filter
echo mem_cgroup_soft_limit_reclaim >> set_ftrace_filter
echo shrink_zone_memcg >> set_ftrace_filter
echo shrink_slab >> set_ftrace_filter
echo shrink_list >> set_ftrace_filter
echo shrink_active_list >> set_ftrace_filter
echo shrink_inactive_list >> set_ftrace_filter
echo 20 > tracing_thresh
echo 1 > events/vmscan/mm_shrink_slab_start/enable
echo 1 > events/vmscan/mm_shrink_slab_end/enable
echo 1 > events/vmscan/mm_vmscan_direct_reclaim_begin/enable
echo 1 > events/vmscan/mm_vmscan_direct_reclaim_end/enable
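(All paths above are relative to the tracefs directory, e.g.
/sys/kernel/debug/tracing.) The output can then be captured from the
trace buffer for post-processing, for example:

cat trace_pipe > trace_output.txt   # trace_output.txt is just an example name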
Rik had suggested that it would be good to write only the tracepoint info
related to high latencies to disk, since the tracepoints otherwise produce
a lot of information. Filtering it would greatly reduce disk I/O.
What I first tried with the begin/end tracepoints was simply to use their
timestamps to calculate durations, and to write the tracepoint info to disk
only if the duration exceeded a certain threshold.
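As a rough sketch of that idea (assuming gawk, the default trace line
format, an example 20 us threshold, and the example file name
trace_output.txt from above), pairing the direct reclaim begin/end
events per task could look like:

gawk '
match($0, /([0-9]+\.[0-9]+): mm_vmscan_direct_reclaim_begin/, ts) {
        begin[$1] = ts[1]                       # key on the task-pid column
}
match($0, /([0-9]+\.[0-9]+): mm_vmscan_direct_reclaim_end/, ts) {
        if ($1 in begin) {
                us = (ts[1] - begin[$1]) * 1000000      # seconds -> us
                if (us > 20)                            # example threshold
                        print us " us: " $0
                delete begin[$1]
        }
}' trace_output.txt

The mm_shrink_slab_start/end events could be paired the same way.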
The function_graph output is great when:
a) no thresholds are set and no tracepoints are enabled (with the
aforementioned functions used as filters).
Here is a sample output.
https://github.com/Jananiravichandran/Analyzing-tracepoints/blob/master/no_tp_no_threshold.txt
Lines 372 to 474 clearly show function durations, which makes it easy to
observe how long each call took.
b) no thresholds are set and the tracepoints are enabled to get some
additional information.
https://github.com/Jananiravichandran/Analyzing-tracepoints/blob/master/set_tp_no_threshold.txt
Lines 785 to 916 here clearly show which tracepoints were invoked within
which function calls, since the beginning and end of each function are
clearly marked.
c) a threshold (20 us in this case) is set but no tracepoints are enabled.
https://github.com/Jananiravichandran/Analyzing-tracepoints/blob/master/no_tp_set_threshold.txt
Lines 230 to 345 show only the functions that exceeded the threshold.
But there's a problem when a threshold is set and the tracepoints are
enabled: it is difficult to know which subset of the printed tracepoint
info actually came from the functions that took longer than the threshold
to execute (since, unlike in case b above, there is no output marking the
beginning of functions).
For example, between lines 59 and 75 here:
https://github.com/Jananiravichandran/Analyzing-tracepoints/blob/master/set_tp_set_threshold.txt
we can see a call to shrink_zone() that took 54.141 us (greater than the
20 us threshold). We also see a lot of tracepoint information printed
between lines 59 and 74. But it is not possible to pick out only the
tracepoint info belonging to that 54.141 us shrink_zone() call, because
some of the information may have been printed by other shrink_zone() calls
that took less than the threshold and therefore did not make it to the
output file.
So it's this filtering of anomalous data from the tracepoints that I find
difficult when using the function_graph tracer.
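One thing that might make that correlation easier is the funcgraph-abstime
option, which prefixes every function_graph line with an absolute
timestamp:

echo funcgraph-abstime > trace_options

The start of an over-threshold call could then be estimated as that
timestamp minus the printed duration, and only the tracepoint output whose
timestamps fall inside that window would need to be kept. I haven't tried
this yet, though.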
> Either way, adding your own function graph hook may be a good exercise
> in seeing how all this works.
Thank you for your suggestions regarding the function graph hook! I will try
it and see if there’s anything I can come up with!
Janani.
>
> -- Steve