Date:   Mon, 10 Sep 2018 13:45:07 +0300
From:   Alexey Budankov <alexey.budankov@...ux.intel.com>
To:     Jiri Olsa <jolsa@...hat.com>, Ingo Molnar <mingo@...nel.org>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Arnaldo Carvalho de Melo <acme@...nel.org>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Namhyung Kim <namhyung@...nel.org>,
        Andi Kleen <ak@...ux.intel.com>,
        linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v8 0/3]: perf: reduce data loss when profiling highly
 parallel CPU bound workloads

Hi,

On 10.09.2018 13:23, Jiri Olsa wrote:
> On Mon, Sep 10, 2018 at 12:13:25PM +0200, Ingo Molnar wrote:
>>
>> * Jiri Olsa <jolsa@...hat.com> wrote:
>>
>>> On Mon, Sep 10, 2018 at 12:03:03PM +0200, Ingo Molnar wrote:
>>>>
>>>> * Jiri Olsa <jolsa@...hat.com> wrote:
>>>>
>>>>>> Per-CPU threading the record session would have so many other advantages as well (scalability, 
>>>>>> etc.).
>>>>>>
>>>>>> Jiri did per-CPU recording patches a couple of months ago, not sure how usable they are at the 
>>>>>> moment?
>>>>>
>>>>> it's still usable, I can rebase it and post a branch pointer,
>>>>> the problem is I haven't been able to find a case with a real
>>>>> performance benefit yet.. ;-)
>>>>>
>>>>> perhaps because I haven't tried it on a server with a really big
>>>>> CPU count
>>>>
>>>> Maybe Alexey could pick up from there? Your concept looked fairly mature to me
>>>> and I tried it on a big-CPU box back then and there were real improvements.
>>>
>>> too bad you did not share your results, it could have been in already ;-)
>>
>> Yeah :-/ Had a proper round of testing on my TODO, then the big box I'd have tested it on
>> broke ...
>>
>>> let me rebase/repost once more and let's see
>>
>> Thanks!
>>
>>> I think we could benefit from both multiple threads event reading
>>> and AIO writing for perf.data.. it could be merged together
>>
>> So instead of AIO writing perf.data, why not just turn perf.data into a directory structure 
>> with per CPU files? That would allow all sorts of neat future performance features such as 
> 
> that's basically what the multiple-thread record patchset does
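
For reference, the AIO writing mentioned above could look roughly like the
sketch below: a per-CPU chunk gets copied and handed to POSIX AIO
(aio_write) so the reader thread is not blocked on the file write. The
buffer layout, file name and offsets are made up for illustration; this is
not perf's actual record code (build with -lrt on older glibc):

/*
 * Minimal sketch: flush a hypothetical per-CPU chunk to a trace file
 * asynchronously with POSIX AIO, so the thread draining the ring buffer
 * can keep reading while the write is in flight.
 */
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

struct flush_req {
	struct aiocb cb;
	void *copy;		/* private copy so the ring buffer can be reused */
};

static int submit_flush(int fd, off_t off, const void *buf, size_t len,
			struct flush_req *req)
{
	req->copy = malloc(len);
	if (!req->copy)
		return -1;
	memcpy(req->copy, buf, len);

	memset(&req->cb, 0, sizeof(req->cb));
	req->cb.aio_fildes = fd;
	req->cb.aio_buf    = req->copy;
	req->cb.aio_nbytes = len;
	req->cb.aio_offset = off;

	return aio_write(&req->cb);	/* queues the write and returns immediately */
}

static ssize_t wait_flush(struct flush_req *req)
{
	const struct aiocb *const list[1] = { &req->cb };
	ssize_t ret;

	while (aio_error(&req->cb) == EINPROGRESS)
		aio_suspend(list, 1, NULL);	/* sleep until completion */

	ret = aio_return(&req->cb);
	free(req->copy);
	return ret;
}

int main(void)
{
	char chunk[4096] = "fake event records";
	struct flush_req req;
	int fd = open("perf.data.sketch", O_CREAT | O_WRONLY | O_TRUNC, 0644);

	if (fd < 0 || submit_flush(fd, 0, chunk, sizeof(chunk), &req) < 0) {
		perror("aio sketch");
		return 1;
	}
	/* the reader could refill 'chunk' here while the write is in flight */
	printf("flushed %zd bytes\n", wait_flush(&req));
	close(fd);
	return 0;
}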

Re-posting part of my answer here...

Please note that tool threads may contend with application threads, and
in practice they do, under heavy load when all CPU cores are utilized,
and this can distort the performance profile.

So whichever tool design is chosen, profiling is also a matter of
balancing the system properly so that the gathered performance data
stays accurate.
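
As a rough illustration of that balancing, assuming the machine can spare a
housekeeping core (CPU 0 here, purely an assumption), the tool's
reader/writer thread could be pinned away from the cores running the
workload, e.g.:

/*
 * Sketch only: confine the profiler's own thread to a "housekeeping" CPU
 * so it does not compete with the workload threads being profiled.
 * Build with -pthread; not taken from perf itself.
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

static int pin_self_to_cpu(int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

int main(void)
{
	int err = pin_self_to_cpu(0);

	if (err)
		fprintf(stderr, "pthread_setaffinity_np: %s\n", strerror(err));
	else
		printf("tool thread confined to CPU 0\n");
	/* ... event reading / trace writing would run here ... */
	return 0;
}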

Thanks,
Alexey

> 
> jirka
> 
>> mmap() or splice() based zero-copy.
>>
>> User-space post-processing can then read the files and put them into global order - or use the 
>> per CPU nature of them, which would be pretty useful too.
>>
>> Also note how well this works on NUMA, as the backing pages would be 
>> allocated in a NUMA-local fashion.
>>
>> I.e. the whole per-CPU threading would enable such a separation of the tracing/event streams 
>> and would allow true scalability.
>>
>> Thanks,
>>
>> 	Ingo
> 
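
On the per-CPU files idea above, a minimal sketch of the user-space
post-processing merge is below. It assumes each per-CPU file is a stream of
fixed-size records starting with a u64 timestamp and that the files are
named perf.data.dir/cpuN; both are assumptions for illustration, not the
real perf.data format:

/*
 * Sketch: merge N per-CPU event streams into global timestamp order.
 * Each input file is assumed to hold fixed-size records that begin with
 * a 64-bit timestamp; real perf.data is more involved.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NR_CPUS	4
#define PAYLOAD	24

struct rec {
	uint64_t ts;
	char payload[PAYLOAD];
};

int main(void)
{
	FILE *in[NR_CPUS];
	struct rec cur[NR_CPUS];
	bool have[NR_CPUS];
	char name[64];
	int i;

	for (i = 0; i < NR_CPUS; i++) {
		snprintf(name, sizeof(name), "perf.data.dir/cpu%d", i);
		in[i] = fopen(name, "rb");
		have[i] = in[i] && fread(&cur[i], sizeof(cur[i]), 1, in[i]) == 1;
	}

	for (;;) {
		int min = -1;

		/* pick the stream whose pending record is the oldest */
		for (i = 0; i < NR_CPUS; i++)
			if (have[i] && (min < 0 || cur[i].ts < cur[min].ts))
				min = i;
		if (min < 0)
			break;			/* all streams drained */

		printf("cpu%-2d ts=%llu\n", min, (unsigned long long)cur[min].ts);
		have[min] = fread(&cur[min], sizeof(cur[min]), 1, in[min]) == 1;
	}

	for (i = 0; i < NR_CPUS; i++)
		if (in[i])
			fclose(in[i]);
	return 0;
}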
