Date:	Wed, 9 Oct 2013 07:59:58 +0200
From:	Ingo Molnar <mingo@...nel.org>
To:	David Ahern <dsahern@...il.com>
Cc:	acme@...stprotocols.net, linux-kernel@...r.kernel.org,
	Frederic Weisbecker <fweisbec@...il.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Jiri Olsa <jolsa@...hat.com>,
	Namhyung Kim <namhyung@...nel.org>,
	Mike Galbraith <efault@....de>,
	Stephane Eranian <eranian@...gle.com>
Subject: Re: [PATCH 3/3] perf record: mmap output file


* David Ahern <dsahern@...il.com> wrote:

> When recording raw_syscalls for the entire system, e.g.,
>     perf record -e raw_syscalls:*,sched:sched_switch -a -- sleep 1
> 
> you end up with a negative feedback loop, as perf itself calls
> write() fairly often. This patch handles the problem by mmap'ing the
> file in chunks of 64M at a time and copying events from the event
> buffers to the file, avoiding write system calls.
> 
> Before (with write syscall):
> 
> perf record -o /tmp/perf.data -e raw_syscalls:*,sched:sched_switch -a -- sleep 1
> [ perf record: Woken up 0 times to write data ]
> [ perf record: Captured and wrote 81.843 MB /tmp/perf.data (~3575786 samples) ]
> 
> After (using mmap):
> 
> perf record -o /tmp/perf.data -e raw_syscalls:*,sched:sched_switch -a -- sleep 1
> [ perf record: Woken up 31 times to write data ]
> [ perf record: Captured and wrote 8.203 MB /tmp/perf.data (~358388 samples) ]
> 
> In addition to the perf-trace benefits, using mmap lowers the overhead
> of perf-record. For example,
> 
>   perf stat -i -- perf record -g -o /tmp/perf.data openssl speed aes
> 
> shows that time, CPU cycles, and instructions all drop by more than a
> factor of 3. Jiri also ran a test that showed a big improvement.
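
For illustration, the mmap'ed-output scheme described above boils down to 
roughly the following - a simplified sketch with made-up helper names 
(map_output_chunk, emit_event), not the actual patch:

#include <sys/types.h>
#include <sys/mman.h>
#include <string.h>
#include <unistd.h>

#define MMAP_CHUNK	(64UL * 1024 * 1024)	/* 64M output window */

/*
 * Map the output file in 64M chunks and memcpy() events into the
 * window, so the fast path generates no write() syscalls (and hence
 * no extra raw_syscalls events feeding back into the trace).
 */
static void *map_output_chunk(int fd, off_t off)
{
	/* grow the file so the new window is backed by the file */
	if (ftruncate(fd, off + MMAP_CHUNK) < 0)
		return MAP_FAILED;

	return mmap(NULL, MMAP_CHUNK, PROT_WRITE, MAP_SHARED, fd, off);
}

static void emit_event(char *window, size_t *pos, const void *ev, size_t len)
{
	memcpy(window + *pos, ev, len);	/* plain stores, no syscall */
	*pos += len;
}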

Here are some thoughts on how 'perf record' tracing performance could be 
further improved:

1)

Using non-temporal stores (MOVNTQ) to copy the ring-buffer into the file 
buffer would make sure the CPU cache is not trashed by the copying - which 
is the largest 'collateral damage' the copying does.

glibc does not appear to expose non-temporal instructions, so this is 
going to be architecture-dependent - but we could build the 
copy_user_nocache() function from the kernel proper (or copy it - we could 
even simplify it, knowing that only large, page-aligned buffers are going 
to be copied with it).

See how tools/perf/bench/mem-mem* does that to be able to measure the 
performance of the kernel's memcpy() and memset() functions.
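
In user space a simplified version could look roughly like this - a 
sketch using SSE2 intrinsics, assuming 16-byte aligned buffers and a 
length that is a multiple of 64:

#include <emmintrin.h>	/* SSE2: _mm_load_si128(), _mm_stream_si128() */
#include <stddef.h>

/*
 * Non-temporal copy: streaming stores bypass the cache hierarchy, so
 * filling the file buffer does not evict the workload's hot cache
 * lines.  Assumes dst/src are 16-byte aligned, len a multiple of 64.
 */
static void memcpy_nocache(void *dst, const void *src, size_t len)
{
	__m128i *d = dst;
	const __m128i *s = src;

	for (; len >= 64; len -= 64, d += 4, s += 4) {
		_mm_stream_si128(d + 0, _mm_load_si128(s + 0));
		_mm_stream_si128(d + 1, _mm_load_si128(s + 1));
		_mm_stream_si128(d + 2, _mm_load_si128(s + 2));
		_mm_stream_si128(d + 3, _mm_load_si128(s + 3));
	}
	_mm_sfence();	/* order the streaming stores before later writes */
}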

2)

Yet another method would be to avoid the copies altogether via the splice 
system call - see:

	git grep splice kernel/trace/

To make splice low-overhead we'd have to introduce a mode that does not 
mmap the data part of the perf ring-buffer, and instead splices the data 
straight from the perf fd into a temporary pipe and from the pipe into 
the target file (or socket).
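
Assuming the perf fd grew such splice support, the draining loop could 
look roughly like this (error handling and short-splice retries on the 
second hop omitted for brevity):

#define _GNU_SOURCE		/* for splice() */
#include <sys/types.h>
#include <fcntl.h>
#include <stddef.h>
#include <unistd.h>

/*
 * Drain 'len' bytes from the perf fd into out_fd via a pipe, without
 * ever mapping or copying the data pages in user space.
 */
static int splice_events(int perf_fd, int out_fd, size_t len)
{
	int pipefd[2];
	ssize_t n;

	if (pipe(pipefd) < 0)
		return -1;

	while (len) {
		/* first hop: perf fd -> pipe */
		n = splice(perf_fd, NULL, pipefd[1], NULL, len,
			   SPLICE_F_MOVE);
		if (n <= 0)
			break;
		/* second hop: pipe -> target file (or socket) */
		splice(pipefd[0], NULL, out_fd, NULL, n, SPLICE_F_MOVE);
		len -= n;
	}

	close(pipefd[0]);
	close(pipefd[1]);
	return len ? -1 : 0;
}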

OTOH, non-temporal stores are incredibly simple, and memory bandwidth is 
plentiful on modern systems, so I'd certainly try that route first.

Thanks,

	Ingo