Message-ID: <20081230081600.GD2455@elte.hu>
Date:	Tue, 30 Dec 2008 09:16:00 +0100
From:	Ingo Molnar <mingo@...e.hu>
To:	Pekka Enberg <penberg@...helsinki.fi>
Cc:	Frederic Weisbecker <fweisbec@...il.com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Eduard - Gabriel Munteanu <eduard.munteanu@...ux360.ro>
Subject: Re: [PATCH] tracing/kmemtrace: normalize the raw tracer event to
	the unified tracing API


* Pekka Enberg <penberg@...helsinki.fi> wrote:

> Hi Frederic,
> 
> On Mon, 2008-12-29 at 23:09 +0100, Frederic Weisbecker wrote:
> > Pekka, note that I would be pleased to add statistical tracing to this
> > tracer, but I would need a hashtable, an array, a list, or some other
> > iterable structure to insert the data into the stat tracing API.
> > 
> > But I don't know your plans for this... whether you wanted to use a
> > section or something else...
> 
> It really depends on what we're tracing. If we're interested in just the 
> allocation hotspots, a section will do just fine. However, if we're 
> tracing memory footprint, we need to store the object pointer 
> returned from kmalloc() and kmem_cache_alloc() so we can update 
> call-site statistics properly upon kfree().
> 
> So I suppose we need both, a section for per call-site statistics and a 
> hash table for the object -> call-site mapping.

1)

I think the call_site based tracking should be a built-in capability - the 
branch tracer needs it too, for example. That would also make it very 
simple at the usage site: you wouldn't have to worry about sections in 
slub.c etc.
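
At the usage site it would then boil down to a single hook call with the 
call site passed along. A rough, purely illustrative sketch - the 
trace_kmem_alloc() name is made up and this is not the actual slub.c code, 
the point is only the shape of it:

	void *kmem_cache_alloc(struct kmem_cache *s, gfp_t gfpflags)
	{
		void *ret = slab_alloc(s, gfpflags, -1, __builtin_return_address(0));

		/* one call - the tracer core does all per-call-site accounting: */
		trace_kmem_alloc(_RET_IP_, ret, s->objsize, s->size, gfpflags);

		return ret;
	}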

2)

I think a possibly useful intermediate object would be the slab cache 
itself, which could be the basis for some high-level stats too. It would 
probably overlap /proc/slabinfo statistics, but it's a natural part of 
this abstraction.
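
Something along these lines, maintained by the tracer per kmem_cache 
(field names purely illustrative):

	struct kmemtrace_cache_stats {
		unsigned long	allocs;		/* total allocations from this cache */
		unsigned long	frees;		/* total frees back to this cache    */
		unsigned long	bytes_req;	/* bytes requested by callers        */
		unsigned long	bytes_alloc;	/* bytes actually handed out         */
	};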

3)

The most low-level (and hence most allocation-footprint-sensitive) object 
to track would be the memory object itself. I think the best approach 
would be a static, limited-size hash that could track up to N memory 
objects.
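
Roughly this - names and sizes purely illustrative:

	/* one slot per tracked object: */
	struct kmemtrace_obj_ent {
		const void	*ptr;		/* object address, NULL if the slot is free */
		unsigned long	call_site;	/* who allocated it                          */
		size_t		bytes_alloc;	/* how big it is                             */
		unsigned long	when;		/* jiffies at allocation time                */
	};

	#define KMEMTRACE_HASH_BITS	14			/* 16384 objects */
	#define KMEMTRACE_HASH_SIZE	(1 << KMEMTRACE_HASH_BITS)

	/* allocated once at tracer startup, never resized or reallocated: */
	static struct kmemtrace_obj_ent *obj_hash;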

The advantage of such an approach is that it does not impact allocation 
patterns at all (besides the one-time allocation cost of the hash itself 
during tracer startup).

The disadvantage is what happens when an overflow occurs - but the sizing 
heuristics would get the size right most of the time, so it's not a 
practical issue. There would be some sort of sizing control similar to 
/debug/tracing/buffer_size_kb, and a special trace entry that signals an 
'overflow' of the hash table. (In that case we won't track certain objects 
- but it would be clear from the trace output what happened, and the hash 
size could be adjusted.)

Another advantage is that it would trivially avoid interacting with the 
allocator - because the hash itself would never 'allocate' in any dynamic 
way. Either a free entry is available (in which case we use it), or not - 
in which case we emit a hash-overflow trace entry.
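
The insert path would then look something like this - hash_ptr() is the 
<linux/hash.h> helper, trace_hash_overflow() is a made-up stand-in for 
that special trace entry, and locking/per-cpu details are omitted:

	#define KMEMTRACE_MAX_PROBES	8	/* linear probe limit */

	static void obj_hash_insert(const void *ptr, unsigned long call_site,
				    size_t bytes_alloc)
	{
		unsigned long i = hash_ptr(ptr, KMEMTRACE_HASH_BITS);
		int probes;

		for (probes = 0; probes < KMEMTRACE_MAX_PROBES; probes++) {
			struct kmemtrace_obj_ent *ent = &obj_hash[i];

			if (!ent->ptr) {
				ent->ptr	 = ptr;
				ent->call_site	 = call_site;
				ent->bytes_alloc = bytes_alloc;
				ent->when	 = jiffies;
				return;
			}
			i = (i + 1) & (KMEMTRACE_HASH_SIZE - 1);
		}
		/* no free slot: never allocate, just emit a hash-overflow
		   trace entry and skip tracking this object */
		trace_hash_overflow(call_site);
	}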

And this too would be driven mainly from ftrace - the SLAB code would only 
offer the alloc+free callbacks with the object IDs. [ and this means we 
could detect memory leaks by looking at the hash table and printing out 
the age of entries :-) ]
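
I.e. something like this on top of the table above - the callback names 
are made up, and obj_hash_remove() would be the obvious counterpart of the 
insert:

	/* all the SLAB side has to provide: */
	static void kmemtrace_cb_alloc(unsigned long call_site, const void *ptr,
				       size_t bytes_alloc)
	{
		obj_hash_insert(ptr, call_site, bytes_alloc);
	}

	static void kmemtrace_cb_free(const void *ptr)
	{
		obj_hash_remove(ptr);
	}

	/* leak check: report entries older than max_age jiffies, with their age: */
	static void obj_hash_check_leaks(unsigned long max_age)
	{
		unsigned long i;

		for (i = 0; i < KMEMTRACE_HASH_SIZE; i++) {
			struct kmemtrace_obj_ent *ent = &obj_hash[i];

			if (ent->ptr && time_after(jiffies, ent->when + max_age))
				pr_info("possible leak: %p from %pS, age %lu\n",
					ent->ptr, (void *)ent->call_site,
					jiffies - ent->when);
		}
	}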

How does this sound to you?

	Ingo
