Date:	Tue, 19 Nov 2013 11:02:18 +0900
From:	Namhyung Kim <>
To:	David Ahern <>
Cc:	Ingo Molnar <>,
	Peter Zijlstra <>,
	Arnaldo Carvalho de Melo <>,
	Frederic Weisbecker <>,
	Mike Galbraith <>,
	Stephane Eranian <>
Subject: Re: [PATCH 4/5] perf record: mmap output file - v5

On Mon, 18 Nov 2013 17:34:49 -0700, David Ahern wrote:
> On 11/18/13, 5:24 PM, Namhyung Kim wrote:
>>>>> What now? Can we add the mmap path as an option?
>>>> I'd say an option is always a possibility, but someone please try
>>>> what happens if you use stupid large events (dwarf stack copies) on
>>>> PERF_COUNT_SW_PAGE_FAULTS (.period=1) while recording with mmap().
>>>> The other option is to simply disallow PERF_SAMPLE_STACK_USER for
>>>> that event.
>>>> Personally I think 8k copies for every event are way stupid anyway,
>>>> that's a metric ton of data at a huge cost.
>>> Well, with 1 kHz sampling of a single-threaded workload it's 8 MB per
>>> second - that's 80 MB for 10 seconds of profiling - not the end of the
>>> world.
>> We now use 4 kHz sampling frequency by default, just FYI. :)
> I think Peter is asking about:
>     perf record -e faults -c 1 --call-graph dwarf,8192 -a -- sleep 1
> And as expected it is a massive feedback loop spiraling out of control.
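
The feedback is structural here: with the output file written through
mmap(), appending each event can fault in a fresh output page, and with
-e faults -c 1 in a system-wide session that fault is itself sampled
and appended, faulting more pages, and so on.

For scale, the 8 kB dwarf copies work out as below (a throwaway
back-of-the-envelope calculation, nothing here is perf code):

	#include <stdio.h>

	int main(void)
	{
		const double stack_copy = 8 * 1024;	/* bytes per sample */
		const double freqs[] = { 1000, 4000 };	/* sampling rates in Hz */

		/* bytes/sample * samples/sec, reported in MB/s */
		for (int i = 0; i < 2; i++)
			printf("%.0f Hz -> %.1f MB/s per sampled thread\n",
			       freqs[i], stack_copy * freqs[i] / 1e6);
		return 0;
	}

That is ~8.2 MB/s at 1 kHz and ~32.8 MB/s at the current 4 kHz default,
per busy thread, before any other sample payload.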

How about adding an option to exclude the perf tool itself from recording
in a system-wide (or cpu-wide) session?

This way we could prevent the feedback loop you mentioned for page-fault
or syscall events, IMHO.
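
A rough sketch of how the tool-side check could look (just a sketch; the
names here are invented, not actual perf code):

	#include <stdbool.h>
	#include <unistd.h>

	/* pid of the perf tool itself, cached once at startup */
	static pid_t perf_own_pid;

	static void record__init_self_filter(void)
	{
		perf_own_pid = getpid();
	}

	/*
	 * Hypothetical filter: in a system-wide session, drop samples
	 * whose pid is perf's own, so that writing the output file
	 * cannot feed new page-fault or syscall events back into the
	 * event stream.
	 */
	static bool record__skip_own_sample(pid_t sample_pid)
	{
		return sample_pid == perf_own_pid;
	}

Excluding it on the kernel side instead would avoid generating the
samples at all, but even a tool-side skip like this should break the
loop.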
