Date:	Thu, 4 Aug 2016 11:19:46 -0400
From:	Steven Rostedt <rostedt@...dmis.org>
To:	Janani Ravichandran <janani.rvchndrn@...il.com>
Cc:	linux-mm@...ck.org, linux-kernel@...r.kernel.org, riel@...riel.com,
	akpm@...ux-foundation.org, hannes@...pxchg.org,
	vdavydov@...tuozzo.com, mhocko@...e.com, vbabka@...e.cz,
	mgorman@...hsingularity.net, kirill.shutemov@...ux.intel.com,
	bywxiaobai@....com
Subject: Re: [PATCH 1/2] mm: page_alloc.c: Add tracepoints for slowpath

On Fri, 29 Jul 2016 01:41:20 +0530
Janani Ravichandran <janani.rvchndrn@...il.com> wrote:

Sorry for the late reply, I've been swamped with other things since
coming back from my vacation.

> I looked at function graph trace, as you’d suggested. I saw that I could set a threshold 
> there using tracing_thresh. But the problem was that slowpath trace information was printed
> for all the cases (even when __alloc_pages_nodemask latencies were below the threshold).
> Is there a way to print tracepoint information only when __alloc_pages_nodemask
> exceeds the threshold?

One thing you could do is to create your own module and hook into the
function graph tracer yourself!

It would require a patch to export two functions in
kernel/trace/ftrace.c:

 register_ftrace_graph()
 unregister_ftrace_graph()

Note, currently only one user of these functions is allowed at a time.
If function_graph tracing is already enabled, the register function
will return -EBUSY.
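
The patch itself would be small; as a rough sketch (the exact placement
is just an assumption, after the definitions of those functions in
kernel/trace/ftrace.c), it is essentially two exports:

/* kernel/trace/ftrace.c -- sketch of the two needed exports */
EXPORT_SYMBOL_GPL(register_ftrace_graph);
EXPORT_SYMBOL_GPL(unregister_ftrace_graph);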

You pass in a "retfunc" and an "entryfunc" (I never understood why they
were backwards), and these are the functions that are called when a
function returns and when a function is entered, respectively.

The retfunc and entryfunc look like this:

static void my_retfunc(struct ftrace_graph_ret *trace)
{
	[...]
}

static int my_entryfunc(struct ftrace_graph_ent *trace)
{
	[...]
}
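
As a rough sketch of the registration side (assuming the exports above
are in place, and the register_ftrace_graph() calling convention of the
kernels this thread is about, with retfunc passed first; the my_* names
are just made up here), such a module could look like:

#include <linux/module.h>
#include <linux/ftrace.h>

static int my_entryfunc(struct ftrace_graph_ent *trace)
{
	/* Return non-zero to have this function's return traced too. */
	return 1;
}

static void my_retfunc(struct ftrace_graph_ret *trace)
{
	/* Look at trace->calltime and trace->rettime here (see below). */
}

static int __init my_fgraph_init(void)
{
	/* Fails with -EBUSY if the function_graph tracer (or another
	 * user of the hooks) is already registered. */
	return register_ftrace_graph(my_retfunc, my_entryfunc);
}

static void __exit my_fgraph_exit(void)
{
	unregister_ftrace_graph();
}

module_init(my_fgraph_init);
module_exit(my_fgraph_exit);
MODULE_LICENSE("GPL");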


The ftrace_graph_ret structure looks like this:

struct ftrace_graph_ret {
	unsigned long func;
	unsigned long overrun;
	unsigned long long calltime;
	unsigned long long rettime;
	int depth;
};

Where func is actually the instruction pointer of the function that is
being traced.

You can ignore "overrun".

calltime is the trace_clock_local() timestamp (a sched_clock()-like,
nanosecond clock) taken when the function was entered.

rettime is the trace_clock_local() timestamp taken when the function
returns.

 rettime - calltime is how long the entire function took.

And that's the time you want to look at.

depth is how deep into the call chain the current function is. There's
a limit (50, I think) on how deep it will record, and anything deeper
goes into that "overrun" field I told you to ignore.
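
Just to illustrate the rettime - calltime check (this isn't code from
the tree; my_threshold_ns and target_func are made-up names, and it
assumes kallsyms_lookup_name() is usable from a module, which it is in
the kernels this thread is about), the two callbacks from the sketch
above could become something like:

#include <linux/ftrace.h>
#include <linux/kallsyms.h>
#include <linux/kernel.h>

static unsigned long long my_threshold_ns = 1000000;	/* 1 ms, pick your own */
static unsigned long target_func;	/* address of __alloc_pages_nodemask */

/* Only ask for a return hook on the one function we care about. */
static int my_entryfunc(struct ftrace_graph_ent *trace)
{
	return trace->func == target_func;
}

/* Print only when the call took longer than the threshold. */
static void my_retfunc(struct ftrace_graph_ret *trace)
{
	unsigned long long delta = trace->rettime - trace->calltime;

	if (delta > my_threshold_ns)
		trace_printk("%ps took %llu ns\n",
			     (void *)trace->func, delta);
}

/* In the module init, before register_ftrace_graph():
 *
 *	target_func = kallsyms_lookup_name("__alloc_pages_nodemask");
 *	if (!target_func)
 *		return -ENOENT;
 */

That way it only prints when __alloc_pages_nodemask itself took longer
than the threshold, which is roughly the behavior you were asking about.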


Hmm, looking at the code, it appears setting tracing_thresh should
work. Could you show me exactly what you did?

Either way, adding your own function graph hook may be a good exercise
in seeing how all this works.

-- Steve
