Message-ID: <20181026183805.GD3353@kernel.org>
Date:   Fri, 26 Oct 2018 15:38:05 -0300
From:   Arnaldo Carvalho de Melo <acme@...nel.org>
To:     David Miller <davem@...emloft.net>
Cc:     linux-kernel@...r.kernel.org, Wang Nan <wangnan0@...wei.com>,
        Jiri Olsa <jolsa@...nel.org>,
        Namhyung Kim <namhyung@...nel.org>
Subject: Re: A concern about overflow ring buffer mode

Adding a few folks to the CC list; Wang implemented the backwards ring
buffer code.

Em Fri, Oct 26, 2018 at 10:45:13AM -0700, David Miller escreveu:
> Since the last time I looked deeply into perf, I've noticed that
> perf top now uses a new ring buffer mode by default.
> 
> Basically, events are written in reverse order, and when fetching
> events the tool uses an ioctl to "pause" the ring buffer.
> 
> I understand some of the reasons for pursuing this kind of scheme but I
> think there may be a huge downside to this design.
> 
> Yes, if the tool can't keep up with the kernel, we'd rather see newer
> events than older ones.
> 
> However, pausing the ring buffer during the fetch is going to
> virtually guarantee that we lose critical events that impact
> interpretation of future events in a non-recoverable way.
> 
> The thing is, the new scheme causes events to be lost even if the tool
> can keep up with the kernel.
> 
> Any event that happens while the tool is fetching the ring entries
> will be lost forever.  The kernel simply skips queuing up the event
> and increments a lost counter.  During a kernel build, I typically see
> 9 or so events lost each fetch.
> 
> Ok, if this is just a SAMPLE then fine, it's not a big deal.
> 
> But what if the lost event is a FORK or an EXEC or the worst one to
> lose, an MMAP?

Right, we can't lose those, so to use this we need something like what
the intel_pt tooling code does, i.e. add an extra event to the mix, a
software "dummy" event that is used to track just the non-SAMPLE
PERF_RECORD_* metadata events, and that one never gets paused.

The intel_pt motivation is different, but the same technique should
allow using the backward ring buffer code without losing metadata events.
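
Roughly the setup below, just as an untested sketch with raw syscalls,
no error handling, and illustrative values (event choice, period, buffer
size); the real tools/perf code goes through evlist/evsel instead:

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/mman.h>
#include <unistd.h>

static int sys_perf_event_open(struct perf_event_attr *attr, pid_t pid,
			       int cpu, int group_fd, unsigned long flags)
{
	return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	struct perf_event_attr sample = {0}, dummy = {0};
	size_t len = (1 + 8) * 4096;		/* header page + 8 data pages */

	/* Samples go backwards into an overwrite buffer that gets paused
	 * while the tool reads it. */
	sample.size           = sizeof(sample);
	sample.type           = PERF_TYPE_HARDWARE;
	sample.config         = PERF_COUNT_HW_CPU_CYCLES;
	sample.sample_period  = 100000;
	sample.write_backward = 1;

	/* The dummy event produces no samples of its own, it just carries
	 * the mmap/comm/task side-band records into a normal buffer that
	 * is never paused, so MMAP/COMM/FORK/EXIT are never dropped. */
	dummy.size   = sizeof(dummy);
	dummy.type   = PERF_TYPE_SOFTWARE;
	dummy.config = PERF_COUNT_SW_DUMMY;
	dummy.mmap   = 1;
	dummy.comm   = 1;
	dummy.task   = 1;

	int sfd = sys_perf_event_open(&sample, 0, -1, -1, 0);
	int dfd = sys_perf_event_open(&dummy,  0, -1, -1, 0);

	/* write_backward needs a read-only (overwrite) mapping; the dummy
	 * buffer stays read-write so data_tail can be advanced normally. */
	void *srb = mmap(NULL, len, PROT_READ, MAP_SHARED, sfd, 0);
	void *drb = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, dfd, 0);

	(void)srb; (void)drb;
	return 0;
}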

wdyt? Wang?

- Arnaldo
 
> Now we can't even match up events properly and we get tons of those
> dreaded "Unknown" symbols and DSOs.  The output looks terrible and the
> tool becomes useless.
> 
> And yes this happens frequently.
> 
> I think the overwrite ring buffer mode should be seriously
> reconsidered.  The "I'd rather see new than old events" part is fine,
> but the "pause" part is not.  You can't turn event recording off on

> the kernel side while you fetch some events, because it means that
> critical events that allow us to properly interpret future events will
> be lost.
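
For reference, the pause/fetch/resume cycle described above looks
roughly like this from the tool side; a simplified raw-syscall sketch,
not the actual tools/perf reader, and it ignores records that straddle
the end of the buffer.  While the output is paused the kernel does not
queue anything, it only bumps the lost counter:

#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <stdint.h>
#include <stdio.h>

/* Walk an overwrite (write_backward) ring buffer newest-to-oldest. */
static void drain_overwrite(int fd, struct perf_event_mmap_page *pg)
{
	unsigned char *data = (unsigned char *)pg + pg->data_offset;
	uint64_t size = pg->data_size;
	uint64_t head, pos;

	/* Stop the kernel from writing; events arriving from here until
	 * we resume are dropped and only counted as lost. */
	ioctl(fd, PERF_EVENT_IOC_PAUSE_OUTPUT, 1);

	head = pg->data_head;		/* newest record starts here */
	__sync_synchronize();		/* rmb() before touching the data */

	for (pos = head; pos - head < size; ) {
		struct perf_event_header *hdr =
			(struct perf_event_header *)&data[pos & (size - 1)];

		if (!hdr->size)
			break;		/* reached space never written to */
		if (hdr->type == PERF_RECORD_LOST)
			fprintf(stderr, "kernel reported lost records\n");
		/* ... decode SAMPLE / MMAP / COMM / FORK / EXIT / ... */
		pos += hdr->size;	/* next (older) record */
	}

	ioctl(fd, PERF_EVENT_IOC_PAUSE_OUTPUT, 0);	/* resume writing */
}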
