Date:	Wed, 1 Apr 2009 17:22:51 +0200
From:	Ingo Molnar <mingo@...e.hu>
To:	Pekka Enberg <penberg@...helsinki.fi>
Cc:	Mel Gorman <mel@....ul.ie>, Jason Baron <jbaron@...hat.com>,
	Eduard - Gabriel Munteanu <eduard.munteanu@...ux360.ro>,
	linux-kernel@...r.kernel.org, mm-commits@...r.kernel.org,
	alexn@....su.se, akpm@...ux-foundation.org, alexn@...ia.com,
	apw@...dowen.org, cl@...ux-foundation.org, haveblue@...ibm.com,
	kamezawa.hiroyu@...fujitu.com,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Steven Rostedt <rostedt@...dmis.org>,
	Frédéric Weisbecker <fweisbec@...il.com>
Subject: Re: + page-owner-tracking.patch added to -mm tree


* Pekka Enberg <penberg@...helsinki.fi> wrote:

> On Wed, 2009-04-01 at 15:49 +0200, Ingo Molnar wrote:
> > And this info could be added to that, and it would sure be nice to 
> > hook it up to kmemtrace primarily, which does a lot of similar 
> > looking work in the slab space. (but Eduard and Pekka will know how 
> > feasible/interesting this is to them.)
> 
> Yup, makes sense to me. Something like this is probably a good 
> starting point for a proper patch.

looks like an excellent starting point.

> +kmemtrace_print_page_alloc_user(struct trace_iterator *iter,
> +				struct kmemtrace_page_alloc_entry *entry)
> +{
> +	struct kmemtrace_user_event_page_alloc *ev_alloc;
> +	struct trace_seq *s = &iter->seq;
> +	struct kmemtrace_user_event *ev;
> +
> +	ev = trace_seq_reserve(s, sizeof(*ev));
> +	if (!ev)
> +		return TRACE_TYPE_PARTIAL_LINE;
> +
> +	ev->event_id		= KMEMTRACE_USER_PAGE_ALLOC;
> +	ev->type_id		= entry->type_id;
> +	ev->event_size		= sizeof(*ev) + sizeof(*ev_alloc);
> +	ev->cpu			= iter->cpu;
> +	ev->timestamp		= iter->ts;
> +	ev->call_site		= 0ULL;	/* FIXME */
> +	ev->ptr			= 0ULL;	/* FIXME */

Here we could call save_stack_trace(), along these lines, to save 
up to 8 entries of the allocation backtrace:

#define NR_ENTRIES		8

struct kmemtrace_user_event {
...
        unsigned long		entries[NR_ENTRIES];
...
};

        struct stack_trace trace;

        trace.nr_entries	= 0;
        trace.max_entries	= NR_ENTRIES;
        trace.entries		= ev->entries;
        trace.skip		= 2;

        save_stack_trace(&trace);

( the '2' for skip will skip the useless tracer-internal backtrace 
  bits. )

ftrace has built-in stacktrace capabilities as well - but they are 
not hooked up to the binary-tracing pathway yet - right Steve?

	Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/