Message-ID: <20180911141907.GV24106@hirez.programming.kicks-ass.net>
Date:   Tue, 11 Sep 2018 16:19:07 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Ingo Molnar <mingo@...nel.org>
Cc:     Alexey Budankov <alexey.budankov@...ux.intel.com>,
        Arnaldo Carvalho de Melo <acme@...nel.org>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Jiri Olsa <jolsa@...hat.com>,
        Namhyung Kim <namhyung@...nel.org>,
        Andi Kleen <ak@...ux.intel.com>,
        linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v8 0/3]: perf: reduce data loss when profiling highly
 parallel CPU bound workloads

On Tue, Sep 11, 2018 at 08:35:12AM +0200, Ingo Molnar wrote:
> > Well, explicit threading in the tool for AIO, in the simplest case, means 
> > incorporating some POSIX API implementation into the tool, avoiding 
> > code reuse in the first place. That tends to be error prone and costly.
> 
> It's a core competency, we better do it right and not outsource it.
> 
> Please take a look at Jiri's patches (once he re-posts them), I think it's a very good 
> starting point.

There's another reason for doing custom per-cpu threads; it avoids
bouncing the buffer memory around the machine. If the task doing the
buffer reads is the exact same as the one doing the writes, there's less
memory traffic on the interconnects.

Also, I think we can avoid the MFENCE in that case, but I'm not sure
that one is hot enough to bother about on the perf reading side of
things.
