Message-ID: <20180910135858.GE5147@kernel.org>
Date:   Mon, 10 Sep 2018 10:58:58 -0300
From:   Arnaldo Carvalho de Melo <acme@...nel.org>
To:     Ingo Molnar <mingo@...nel.org>
Cc:     Alexey Budankov <alexey.budankov@...ux.intel.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Jiri Olsa <jolsa@...hat.com>,
        Namhyung Kim <namhyung@...nel.org>,
        Andi Kleen <ak@...ux.intel.com>,
        linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v8 0/3]: perf: reduce data loss when profiling highly
 parallel CPU bound workloads

On Mon, Sep 10, 2018 at 02:06:43PM +0200, Ingo Molnar wrote:
> * Alexey Budankov <alexey.budankov@...ux.intel.com> wrote:
> > On 10.09.2018 12:18, Ingo Molnar wrote:
> > > * Alexey Budankov <alexey.budankov@...ux.intel.com> wrote:
> > >> Currently, in record mode the tool implements trace writing serially. 
> > >> The algorithm loops over the mapped per-cpu data buffers and stores 
> > >> ready data chunks into a trace file using the write() system call.
> > >>
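
For reference, the serial scheme quoted above amounts to a loop like the
following minimal C sketch; struct mmap_buf, its fields and flush_serially()
are simplified stand-ins for illustration, not the tool's actual types:

#include <stddef.h>
#include <unistd.h>

/* Simplified stand-in for one mapped per-cpu buffer. */
struct mmap_buf {
        void   *data;   /* mapped data region */
        size_t  ready;  /* bytes ready to be flushed */
};

static void flush_serially(struct mmap_buf *bufs, int nr_cpus, int trace_fd)
{
        for (int cpu = 0; cpu < nr_cpus; cpu++) {
                struct mmap_buf *b = &bufs[cpu];

                if (!b->ready)
                        continue;
                /*
                 * Blocks until this chunk reaches the file; meanwhile the
                 * kernel keeps filling the other buffers and may run out
                 * of space there, emitting PERF_RECORD_LOST.
                 */
                if (write(trace_fd, b->data, b->ready) < 0)
                        break;
                b->ready = 0;
        }
}
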
> > >> Under some circumstances the kernel may run out of free space in 
> > >> a buffer because one of its halves is not yet written to disk while 
> > >> the tool is busy writing out some other buffer's data.
> > >>
> > >> Thus the serial trace writing implementation may cause the kernel 
> > >> to lose profiling data, and that is what is observed when profiling 
> > >> highly parallel CPU-bound workloads on machines with a large number 
> > >> of cores.
> > > 
> > > Yay! I saw this frequently on a 120-CPU box (hw is broken now).
> > > 
> > >> The data loss metric is the ratio lost_time/elapsed_time, where 
> > >> lost_time is the sum of the time intervals containing PERF_RECORD_LOST 
> > >> records and elapsed_time is the application's elapsed run time 
> > >> under profiling.
> > >>
> > >> Applying asynchronous trace streaming through the POSIX AIO API 
> > >> (http://man7.org/linux/man-pages/man7/aio.7.html) 
> > >> lowers the data loss metric, providing a 2x improvement - 
> > >> lowering a 98% loss to almost 0%.
> > > 
> > > Hm, instead of AIO why don't we use explicit threads? I think POSIX AIO 
> > > will fall back to threads anyway when there's no kernel AIO support (which 
> > > there probably isn't for perf events).
> > 
> > Explicit threading is surely an option, but having more threads 
> > in the tool that stream performance data is a considerable 
> > design complication.
> > 
> > Luckily, the glibc AIO implementation is already based on pthreads, 
> > but with only one writing thread per distinct fd.
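
For reference, the POSIX AIO variant amounts to queueing each ready chunk
with aio_write() and reusing a buffer half only once aio_error() stops
reporting EINPROGRESS - a minimal sketch under those assumptions, not the
actual patch:

#include <aio.h>
#include <errno.h>
#include <string.h>

/* Queue one ready chunk for asynchronous writing; returns 0 on success.
 * One struct aiocb is kept per in-flight buffer half. */
static int queue_chunk(struct aiocb *cb, int fd, void *data,
                       size_t len, off_t off)
{
        memset(cb, 0, sizeof(*cb));
        cb->aio_fildes = fd;
        cb->aio_buf    = data;
        cb->aio_nbytes = len;
        cb->aio_offset = off;
        return aio_write(cb);   /* returns immediately; glibc queues it */
}

/* A buffer half may be reused once its write is no longer in flight. */
static int chunk_done(const struct aiocb *cb)
{
        return aio_error(cb) != EINPROGRESS;
}

With glibc this hands the work to its internal pthreads, which is the
fallback behaviour discussed above (link with -lrt on older glibc).
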
> 
> My argument is, we don't want to rely on glibc's choices here. They might
> use a different threading design in the future, or it might differ between
> libc versions.
> 
> The basic flow of tracing/profiling data is something we should control explicitly,
> via explicit threading.
> 
> BTW., the use case I was primarily concentrating on was a simpler one: 'perf record -a', not 
> inherited workflow tracing. For system-wide profiling the ideal tracing setup is clean per-CPU 
> separation, i.e. per-CPU event fds, per-CPU threads that read and then write into separate 
> per-CPU files.
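
A per-CPU split like that could look roughly like the following sketch
(the per_cpu_ctx layout is illustrative; a real implementation would
drain the mmap'ed ring buffer rather than read() the event fd):

#include <pthread.h>
#include <unistd.h>

struct per_cpu_ctx {
        int event_fd;   /* perf event fd for this CPU */
        int out_fd;     /* e.g. a per-CPU output file */
};

static void *writer_thread(void *arg)
{
        struct per_cpu_ctx *ctx = arg;
        char buf[65536];
        ssize_t n;

        /* Each thread only touches its own CPU's fds and file, so the
         * fast path needs no cross-CPU locking.  read() here merely
         * stands in for draining the mmap'ed ring buffer. */
        while ((n = read(ctx->event_fd, buf, sizeof(buf))) > 0)
                if (write(ctx->out_fd, buf, n) < 0)
                        break;
        return NULL;
}

static void spawn_writers(struct per_cpu_ctx *ctx, pthread_t *tids, int nr)
{
        for (int cpu = 0; cpu < nr; cpu++)
                pthread_create(&tids[cpu], NULL, writer_thread, &ctx[cpu]);
}
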

My main request here is that we think about the 'perf top' and 'perf
trace' workflows as well when working on this, i.e. that we don't take
for granted that we'll have the perf.data files to work with.

I.e. N threads that periodically use the FINISHED_ROUND event to order
events and go on consuming. All of the objects already have refcounts
and locking to allow things like decaying of samples to take care of
throwing away no-longer-needed objects (struct map, thread, dso, symbol
tables, etc.) to trim memory usage.
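
Sketched roughly, that round-based ordering looks like this (struct event
and flush_round() are illustrative stand-ins, not perf's actual
ordered-events code):

#include <stdint.h>
#include <stdlib.h>

struct event {
        uint64_t time;          /* sample timestamp */
        /* ... payload ... */
};

static int cmp_time(const void *a, const void *b)
{
        const struct event *ea = a, *eb = b;

        return (ea->time > eb->time) - (ea->time < eb->time);
}

/* When a FINISHED_ROUND marker arrives, no older event can still show
 * up, so everything queued so far can be time-sorted and consumed. */
static void flush_round(struct event *queued, size_t nr,
                        void (*deliver)(struct event *))
{
        qsort(queued, nr, sizeof(*queued), cmp_time);
        for (size_t i = 0; i < nr; i++)
                deliver(&queued[i]);
}
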

- Arnaldo
