Message-ID: <87haci9e31.fsf@sejong.aot.lge.com>
Date:	Wed, 16 Oct 2013 10:52:50 +0900
From:	Namhyung Kim <namhyung@...nel.org>
To:	David Ahern <dsahern@...il.com>
Cc:	acme@...stprotocols.net, linux-kernel@...r.kernel.org,
	Ingo Molnar <mingo@...nel.org>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Jiri Olsa <jolsa@...hat.com>, Mike Galbraith <efault@....de>,
	Stephane Eranian <eranian@...gle.com>
Subject: Re: [PATCH] perf record: mmap output file - v2

On Tue, 15 Oct 2013 07:35:53 -0600, David Ahern wrote:
> On 10/15/13 1:31 AM, Namhyung Kim wrote:
>> Hi David,
>>
>> On Mon, 14 Oct 2013 20:55:31 -0600, David Ahern wrote:
>>> When recording raw_syscalls for the entire system, e.g.,
>>>      perf record -e raw_syscalls:*,sched:sched_switch -a -- sleep 1
>>>
>>> you end up with a negative feedback loop as perf itself calls
>>> write() fairly often. This patch handles the problem by mmap'ing the
>>> file in chunks of 64M at a time and copies events from the event buffers
>>> to the file avoiding write system calls.
>>>
>>> Before (with write syscall):
>>>
>>> perf record -o /tmp/perf.data -e raw_syscalls:*,sched:sched_switch -a -- sleep 1
>>> [ perf record: Woken up 0 times to write data ]
>>> [ perf record: Captured and wrote 81.843 MB /tmp/perf.data (~3575786 samples) ]
>>>
>>> After (using mmap):
>>>
>>> perf record -o /tmp/perf.data -e raw_syscalls:*,sched:sched_switch -a -- sleep 1
>>> [ perf record: Woken up 31 times to write data ]
>>> [ perf record: Captured and wrote 8.203 MB /tmp/perf.data (~358388 samples) ]
>>
>> Why are the sizes so different?
>
> perf calls write() for each mmap, each time through the loop. Each
> write generates 2 events (syscall entry + exit) -- ie., generates more
> events. That's the negative feedback loop.

Aha, okay.  So it mostly matters to syscall tracing, right?  For a
normal record session, it seems that the effect is not that large:

Before:

  $ perf stat -r3 --null --sync -- perf record -a -- sleep 5
  [ perf record: Woken up 1 times to write data ]
  [ perf record: Captured and wrote 0.467 MB perf.data (~20420 samples) ]
  [ perf record: Woken up 1 times to write data ]
  [ perf record: Captured and wrote 0.544 MB perf.data (~23750 samples) ]
  [ perf record: Woken up 1 times to write data ]
  [ perf record: Captured and wrote 0.482 MB perf.data (~21073 samples) ]
  
   Performance counter stats for 'perf record -a -- sleep 5' (3 runs):
  
         5.174476094 seconds time elapsed                                          ( +-  0.07% )

  $ perf record -- perf bench sched pipe
  # Running sched/pipe benchmark...
  # Executed 1000000 pipe operations between two processes
  
       Total time: 21.271 [sec]
  
        21.271357 usecs/op
            47011 ops/sec
  [ perf record: Woken up 21 times to write data ]
  [ perf record: Captured and wrote 5.643 MB perf.data (~246524 samples) ]


After:

  $ perf stat -r3 --null --sync -- perf record -a -- sleep 5
  [ perf record: Woken up 1 times to write data ]
  [ perf record: Captured and wrote 0.459 MB perf.data (~20055 samples) ]
  [ perf record: Woken up 1 times to write data ]
  [ perf record: Captured and wrote 0.463 MB perf.data (~20230 samples) ]
  [ perf record: Woken up 1 times to write data ]
  [ perf record: Captured and wrote 0.467 MB perf.data (~20401 samples) ]
  
   Performance counter stats for 'perf record -a -- sleep 5' (3 runs):
  
         5.085910919 seconds time elapsed                                          ( +-  0.06% )

  $ perf record -- perf bench sched pipe
  # Running sched/pipe benchmark...
  # Executed 1000000 pipe operations between two processes
  
       Total time: 21.175 [sec]
  
        21.175406 usecs/op
            47224 ops/sec
  [ perf record: Woken up 21 times to write data ]
  [ perf record: Captured and wrote 5.612 MB perf.data (~245206 samples) ]


Thanks,
Namhyung
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/