Date:	Wed, 12 Dec 2007 12:23:39 -0000
From:	"Metzger, Markus T" <markus.t.metzger@...el.com>
To:	"Ingo Molnar" <mingo@...e.hu>
Cc:	<ak@...e.de>, <hpa@...or.com>, <linux-kernel@...r.kernel.org>,
	<tglx@...utronix.de>, <markus.t.metzger@...il.com>,
	"Siddha, Suresh B" <suresh.b.siddha@...el.com>,
	<roland@...hat.com>, <akpm@...ux-foundation.org>,
	<mtk.manpages@...il.com>, "Alan Stern" <stern@...land.harvard.edu>
Subject: RE: x86, ptrace: support for branch trace store(BTS)

>-----Original Message-----
>From: Ingo Molnar [mailto:mingo@...e.hu]
>Sent: Wednesday, 12 December 2007 12:04

>> Would it be safe to drop the artificial limit and let the limit be
>> the available memory?
>
>no, that would be a DoS :-/

How about:

/* Pseudocode: every child gets its own small BTS buffer pinned by
 * the parent; with no global limit, this exhausts kernel memory. */
for (;;) {
	int pid = fork();
	if (pid > 0)	/* parent; also skips the pid == -1 error case */
		ptrace_allocate(pid, <small_buffer_size>);
}


I guess what we really want is a per-task kmalloc() limit. That limit
could be quite big.

Should this rather live within kmalloc() itself (with the accounting
data stored in task_struct), or should it be local to ptrace_bts_alloc?

Is there already a per-task memory limit? We could deduct the
kmalloc()ed memory from that.
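
For the local variant, a rough sketch of what I have in mind - the
bts_size field in task_struct, the BTS_ALLOC_MAX cap and the simplified
signature are made up for illustration and do not exist today:

#include <linux/sched.h>
#include <linux/slab.h>

#define BTS_ALLOC_MAX	(4 * 1024 * 1024)	/* arbitrary per-task cap */

/* Sketch only: per-task accounting kept local to the BTS code. */
static void *ptrace_bts_alloc(struct task_struct *child, size_t size)
{
	void *buf;

	if (child->bts_size + size > BTS_ALLOC_MAX)
		return NULL;			/* per-task limit reached */

	buf = kzalloc(size, GFP_KERNEL);
	if (buf)
		child->bts_size += size;

	return buf;
}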


>> the most useful behavior in that case. I'm not sure what you mean
>> with 'instrumentation'.
>
>the branch trace can be used to generate a very finegrained
>profile/histogram of code execution - even of rarely executed codepaths
>which cannot be captured via timer/perf-counter based profiling.
>
>another potential use would be for call graph coverage testing. (which 
>currently requires compiler-inserted calls - would be much nicer if we 
>could do this via the hardware.)
>
>etc. Branch tracing isnt just for debugging i think - as long as the 
>framework is flexible enough.

Agreed.


>> The actual physical memory consumption will be worse (or at best
>> equal) compared to kmalloc()ed memory, since we need to pin down
>> entire pages, whereas kmalloc() would allocate just the memory that
>> is actually needed.
>
>i agree that mlock() has problems. A different model would be: 

I have my doubts regarding any form of two-buffer approach. 

Let's assume we find a way to get a reasonably big limit. 

Users who want to process that huge amount of data would be better off
using a file-based approach (well, if it cannot be held in physical
memory, they will spend most of their time swapping, anyway). Those
users would typically wait for the 'buffer full' event and drain the
buffer into a file - whether this is the real buffer or a bigger virtual
buffer.
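
To make that concrete, the tracer side could look roughly like this;
the record layout and the drain call are placeholders, not the
proposed ptrace interface:

#include <stdio.h>
#include <sys/types.h>

struct bts_record {			/* placeholder record layout */
	unsigned long from, to;
};

/* Placeholder for whatever ptrace request ends up popping one record;
 * returns >0 while records remain, 0 once the buffer is empty. */
extern int bts_drain_record(pid_t child, struct bts_record *rec);

/* On the 'buffer full' event: empty the (real or virtual) buffer into
 * the profile file, then let the debuggee continue. */
static void handle_buffer_full(pid_t child, FILE *out)
{
	struct bts_record rec;

	while (bts_drain_record(child, &rec) > 0)
		fwrite(&rec, sizeof(rec), 1, out);
}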

The two-buffer approach would only benefit users who want to hold the
full profile in memory - or who want to stall the debuggee until they
have processed or somehow compressed the data collected so far.
Those approaches would not scale to very big profiles.
The small-profile cases would already be covered by a reasonably big
real buffer.

The two-buffer approach merely pushes out the limit of what can be
profiled in (virtual) memory.


Independent of how many buffers we have, we need to offer overflow
handling. This could be:
1. cyclic buffer
2. buffer full event

Robust users would do one of the following:
a. select 2 and ignore the overflow signal to collect an initial profile
b. select 1 to collect a profile tail
c. select 2 and drain the real or virtual buffer into a file to collect
the full profile
d. select 2 and bail out on overflow (robust but limited usefulness)

Debuggers would typically choose b (a cyclic buffer; see the sketch
below). Path profilers would typically choose c. Student projects
would typically choose d.
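
For completeness, the cyclic-buffer variant lives on the kernel side
and could look roughly like this - structure and names are
illustrative only, not the actual implementation:

#include <linux/types.h>

struct bts_record {			/* placeholder record layout */
	unsigned long from, to;
};

struct bts_buffer {
	struct bts_record *base;	/* kmalloc()ed record array */
	size_t size;			/* capacity, in records */
	size_t next;			/* next slot to write */
	int wrapped;			/* set once old data was overwritten */
};

static void bts_buffer_put(struct bts_buffer *buf,
			   const struct bts_record *rec)
{
	buf->base[buf->next] = *rec;
	if (++buf->next == buf->size) {
		buf->next = 0;		/* wrap: overwrite oldest records */
		buf->wrapped = 1;
	}
}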


I see benefit in providing a second overflow handling method. 
I only see benefit in a bigger second buffer if we cannot get the limit
to a reasonable size.

Am I missing something?


thanks and regards,
markus.