Date:   Fri, 26 Oct 2018 15:42:55 -0300
From:   Arnaldo Carvalho de Melo <acme@...nel.org>
To:     David Miller <davem@...emloft.net>
Cc:     linux-kernel@...r.kernel.org, Wang Nan <wangnan0@...wei.com>,
        Jiri Olsa <jolsa@...nel.org>,
        Namhyung Kim <namhyung@...nel.org>,
        Kan Liang <kan.liang@...el.com>,
        Andi Kleen <ak@...ux.intel.com>,
        Jin Yao <yao.jin@...ux.intel.com>,
        Peter Zijlstra <peterz@...radead.org>
Subject: Re: A concern about overflow ring buffer mode

Em Fri, Oct 26, 2018 at 03:38:05PM -0300, Arnaldo Carvalho de Melo escreveu:
> Adding a few folks to the CC list; Wang implemented the backwards ring
> buffer code.

Adding a few more, since the patch that switched 'perf top' to overwrite
mode, and the motivation for doing so, is this one:

commit ebebbf082357f86cc84a4d46ce897a5750e41b7a
Author: Kan Liang <kan.liang@...el.com>
Date:   Thu Jan 18 13:26:31 2018 -0800

    perf top: Switch default mode to overwrite mode
    
    perf_top__mmap_read() has a severe performance issue on the Knights
    Landing/Mill platforms when monitoring heavily loaded systems. It can
    take several minutes to finish, which is unacceptable.

    Currently, 'perf top' uses non-overwrite mode. In non-overwrite mode
    it tries to read everything in the ringbuffer and doesn't pause it.
    When lots of samples are delivered continuously, the processing time
    can become very long. Also, the latest samples can be lost when the
    ringbuffer is full.
    
    In overwrite mode, it takes a snapshot of the system by pausing the
    ringbuffer, which can significantly reduce the processing time. Also,
    overwrite mode always keeps the latest samples. Given the real-time
    requirements of 'perf top', overwrite mode is the better fit for it.
    
    'perf top' originally used overwrite mode; it was changed to
    non-overwrite mode by commit 93fc64f14472 ("perf top: Switch to non
    overwrite mode"). It's better to change it back to overwrite mode by
    default.
    
    On kernels which don't support overwrite mode, it will fall back to
    non-overwrite mode.
    
    Some records will be lost in overwrite mode because of pausing the
    ringbuffer. This has little impact on the accuracy of the snapshot
    and can be tolerated.
    
    For overwrite mode, unconditionally wait 100 ms before each snapshot.
    This also reduces the overhead caused by pausing the ringbuffer,
    especially on lightly loaded systems.
    
    Signed-off-by: Kan Liang <kan.liang@...el.com>
    Acked-by: Jiri Olsa <jolsa@...nel.org>
    Tested-by: Arnaldo Carvalho de Melo <acme@...hat.com>
    Cc: Andi Kleen <ak@...ux.intel.com>
    Cc: Jin Yao <yao.jin@...ux.intel.com>
    Cc: Namhyung Kim <namhyung@...nel.org>
    Cc: Peter Zijlstra <peterz@...radead.org>
    Cc: Wang Nan <wangnan0@...wei.com>
    Link: http://lkml.kernel.org/r/1516310792-208685-17-git-send-email-kan.liang@intel.com
    Signed-off-by: Arnaldo Carvalho de Melo <acme@...hat.com>

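For reference, the overwrite read cycle that commit describes boils down
to something like the sketch below: freeze the ring, read the snapshot,
unfreeze. This is not the tools/perf code, just an illustration, and it
assumes an event opened with attr.write_backward = 1 whose ring buffer
was mmap'ed read-only; the actual record walk and all error handling are
left out.

#include <sys/ioctl.h>
#include <linux/perf_event.h>

static void snapshot_ring(int fd, struct perf_event_mmap_page *pg)
{
        /* Freeze the ring so the snapshot is self-consistent. */
        ioctl(fd, PERF_EVENT_IOC_PAUSE_OUTPUT, 1);

        __u64 head = pg->data_head;   /* newest record starts here */
        __sync_synchronize();         /* pairs with the kernel's barrier */

        /*
         * ... walk the perf_event_header records starting at 'head';
         * with write_backward the newest record comes first and older
         * ones follow, until the buffer wraps ...
         */
        (void)head;

        /*
         * Resume. Anything that arrived while we were paused was
         * dropped by the kernel and only shows up in the lost count,
         * which is exactly the problem discussed below.
         */
        ioctl(fd, PERF_EVENT_IOC_PAUSE_OUTPUT, 0);
}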
 
> Em Fri, Oct 26, 2018 at 10:45:13AM -0700, David Miller escreveu:
> > Since the last time I looked deeply into perf, I've noticed that
> > perf top now uses a new ring buffer mode by default.
> > 
> > Basically, events are written in reverse order, and when fetching
> > events the tool uses an ioctl to "pause" the ring buffer.
> > 
> > I understand some of the reasons for pursuing this kind of scheme, but
> > I think there may be a huge downside to this design.
> > 
> > Yes, if the tool can't keep up with the kernel, we'd rather see newer
> > events than older ones.
> > 
> > However, pausing the ring buffer during the fetch is going to
> > virtually guarantee that we lose critical events that impact
> > interpretation of future events in a non-recoverable way.
> > 
> > The thing is, the new scheme causes events to be lost even if the tool
> > can keep up with the kernel.
> > 
> > Any event that happens while the tool is fetching the ring entries
> > will be lost forever.  The kernel simply skips queuing up the event
> > and increments a lost counter.  During a kernel build, I typically see
> > 9 or so events lost each fetch.
> > 
> > Ok, if this is just a SAMPLE then fine, it's not a big deal.
> > 
> > But what if the lost event is a FORK or an EXEC or the worst one to
> > lose, an MMAP?
> 
> Right, we can't lose those, so to use this we need something like what
> the intel_pt tooling code does, i.e. add an extra event to the mix, a
> software "dummy" event that gets used to track just the
> PERF_RECORD_!SAMPLE metadata events, and that one never gets paused.
> 
> The intel_pt motivation is different, but the technique may well allow
> using the backward code without losing metadata events.
> 
> wdyt? Wang?
> 
> - Arnaldo
>  
> > Now we can't even match up events properly and we get tons of those
> > dreaded "Unknown" symbols and DSOs.  The output looks terrible and the
> > tool becomes useless.
> > 
> > And yes this happens frequently.
> > 
> > I think the overwrite ring buffer mode should be seriously
> > reconsidered.  The "I'd rather see new than old events" part is fine,
> > but the "pause" part is not.  You can't turn event recording off on
> > the kernel side while you fetch some events, because it means that
> > critical events that allow us to properly interpret future events will
> > be lost.
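To make that concrete, the "dummy" event mentioned above would be set up
roughly like the sketch below: a PERF_TYPE_SOFTWARE/PERF_COUNT_SW_DUMMY
event that requests the mmap/comm/task sideband and whose ring buffer
stays in the normal, never-paused mode. This is just an illustration of
the attr setup, not the actual intel_pt tooling code; the function name
is made up and the mmap/reading side is elided.

#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static int open_sideband_tracker(pid_t pid, int cpu)
{
        struct perf_event_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.size   = sizeof(attr);
        attr.type   = PERF_TYPE_SOFTWARE;
        attr.config = PERF_COUNT_SW_DUMMY;  /* counts nothing by itself */

        /* The metadata records we cannot afford to lose. */
        attr.mmap  = 1;
        attr.mmap2 = 1;
        attr.comm  = 1;
        attr.task  = 1;

        /* Enable it together with the real sampling events. */
        attr.disabled = 1;

        /*
         * Note what is absent: no write_backward, and this fd's ring
         * buffer never gets PERF_EVENT_IOC_PAUSE_OUTPUT, so the
         * FORK/EXEC/MMAP stream is never interrupted.
         */
        return syscall(SYS_perf_event_open, &attr, pid, cpu,
                       -1 /* group_fd */, 0 /* flags */);
}

The sampling events themselves could then drop their mmap/comm/task bits,
so all the metadata flows through this never-paused ring while the samples
keep using the overwrite/backward buffers.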
