Message-ID: <20171025090750.3kt3dtonrjl7gmgr@gmail.com>
Date:   Wed, 25 Oct 2017 11:07:50 +0200
From:   Ingo Molnar <mingo@...nel.org>
To:     Jiri Olsa <jolsa@...hat.com>
Cc:     "Liang, Kan" <kan.liang@...el.com>,
        "acme@...nel.org" <acme@...nel.org>,
        "mingo@...hat.com" <mingo@...hat.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "peterz@...radead.org" <peterz@...radead.org>,
        "jolsa@...nel.org" <jolsa@...nel.org>,
        "wangnan0@...wei.com" <wangnan0@...wei.com>,
        "hekuang@...wei.com" <hekuang@...wei.com>,
        "namhyung@...nel.org" <namhyung@...nel.org>,
        "alexander.shishkin@...ux.intel.com" 
        <alexander.shishkin@...ux.intel.com>,
        "Hunter, Adrian" <adrian.hunter@...el.com>,
        "ak@...ux.intel.com" <ak@...ux.intel.com>
Subject: Re: [PATCH V3 0/6] event synthesization multithreading for perf
 record


* Jiri Olsa <jolsa@...hat.com> wrote:

> On Tue, Oct 24, 2017 at 02:59:44PM +0200, Ingo Molnar wrote:
> > 
> > * Jiri Olsa <jolsa@...hat.com> wrote:
> > 
> > > I recently made some changes to threaded record, which are based
> > > on Namhyung's time* API, which is needed to read/sort the data afterwards
> > > 
> > > but I wasn't able to get any substantial and consistent reduction of LOST
> > > events, and then I got sidetracked and did not finish, but it's in here:
> > 
> > So, in the context of system-wide profiling, the way that I think would work 
> > best is the following (see the sketch further below):
> > 
> >   thread #0 binds itself to CPU#0 (via sched_setaffinity) and creates a per-CPU event on CPU#0
> >   thread #1 binds itself to CPU#1 (via sched_setaffinity) and creates a per-CPU event on CPU#1
> >   thread #2 binds itself to CPU#2 (via sched_setaffinity) and creates a per-CPU event on CPU#2
> > 
> > etc.
> > 
> > Is this how you implemented it?
> 
> in a way ;-) but I made it more generic and let record create just a
> few threads and let them share a CPU subset.. so there was no binding
> 
> > 
> > If the threads in the thread pool are just free-running, then the scheduler might 
> > not migrate them to the 'right' CPU that is streaming the perf events, and there 
> > will be a lot of cross-talk between CPUs.
> 
> ok, it's easy to add binding and a 1:1 thread:CPU mapping now.. I'll retry
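
For illustration only, here is a minimal sketch of the per-CPU binding scheme
described above (it is not the actual perf record code; the event configuration
and the missing ring-buffer reader loop are placeholders, and opening
system-wide events needs root or a permissive perf_event_paranoid setting):

/*
 * Sketch: one reader thread per CPU. Each thread binds itself to its CPU
 * via sched_setaffinity() and opens a system-wide, per-CPU event on that
 * same CPU, so the ring buffer is produced and consumed on the same CPU.
 */
#define _GNU_SOURCE
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/sysinfo.h>
#include <pthread.h>
#include <sched.h>
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
			   int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

static void *reader_thread(void *arg)
{
	int cpu = (int)(long)arg;
	struct perf_event_attr attr;
	cpu_set_t set;
	int fd;

	/* Bind this thread to its CPU before touching the event. */
	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	if (sched_setaffinity(0, sizeof(set), &set)) {
		perror("sched_setaffinity");
		return NULL;
	}

	/* System-wide (pid == -1), per-CPU event on the same CPU. */
	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.sample_period = 100000;
	attr.sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_TID;

	fd = perf_event_open(&attr, -1, cpu, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return NULL;
	}

	/*
	 * A real implementation would mmap() the ring buffer here and
	 * drain it in a loop; omitted to keep the sketch short.
	 */
	close(fd);
	return NULL;
}

int main(void)
{
	int ncpus = get_nprocs();
	pthread_t *tids = calloc(ncpus, sizeof(*tids));
	int cpu;

	for (cpu = 0; cpu < ncpus; cpu++)
		pthread_create(&tids[cpu], NULL, reader_thread,
			       (void *)(long)cpu);
	for (cpu = 0; cpu < ncpus; cpu++)
		pthread_join(tids[cpu], NULL);
	free(tids);
	return 0;
}

With such a 1:1 thread:CPU mapping, each per-CPU ring buffer is filled and
drained on the same CPU, which avoids the cross-CPU traffic mentioned above.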

Please Cc: me - this is a really interesting aspect of perf scalability!

Thanks,

	Ingo
