Message-Id: <4351119f-b212-5039-9a3d-f568f6893b36@us.ibm.com>
Date:   Thu, 22 Sep 2016 13:23:04 -0500
From:   Paul Clarke <pc@...ibm.com>
To:     Vineet Gupta <Vineet.Gupta1@...opsys.com>,
        Peter Zijlstra <peterz@...radead.org>
Cc:     Arnaldo Carvalho de Melo <acme@...hat.com>,
        Alexey Brodkin <Alexey.Brodkin@...opsys.com>,
        Will Deacon <Will.Deacon@....com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-perf-users@...r.kernel.org" <linux-perf-users@...r.kernel.org>,
        "linux-snps-arc@...ts.infradead.org" 
        <linux-snps-arc@...ts.infradead.org>, Jiri Olsa <jolsa@...hat.com>
Subject: Re: perf event grouping for dummies (was Re: [PATCH] arc: perf:
 Enable generic "cache-references" and "cache-misses" events)

On 09/22/2016 12:50 PM, Vineet Gupta wrote:
> On 09/22/2016 12:56 AM, Peter Zijlstra wrote:
>> On Wed, Sep 21, 2016 at 07:43:28PM -0500, Paul Clarke wrote:
>>> On 09/20/2016 03:56 PM, Vineet Gupta wrote:
>>>> On 09/01/2016 01:33 AM, Peter Zijlstra wrote:
>>>>>> - is that what perf event grouping is ?
>>>>>
>>>>> Again, nope. Perf event groups are single counter (so no implicit
>>>>> addition) that are co-scheduled on the PMU.
>>>>
>>>> I'm not sure I understand - does this require specific PMU/arch support - as in
>>>> multiple conditions feeding to same counter.
>>>
>>> My read is that is that what Peter meant was that each event in the
>>> perf event group is a single counter, so all the events in the group
>>> are counted simultaneously.  (No multiplexing.)
>>
>> Right, sorry for the poor wording.
>>
>>>> Again when you say co-scheduled what do you mean - why would anyone use the event
>>>> grouping - is it when they only have 1 counter and they want to count 2
>>>> conditions/events at the same time - isn't this same as event multiplexing ?
>>>
>>> I'd say it's the converse of multiplexing.  Instead of mapping
>>> multiple events to a single counter, perf event groups map a set of
>>> events each to their own counter, and they are active simultaneously.
>>> I suppose it's possible for the _groups_ to be multiplexed with other
>>> events or groups, but the group as a whole will be scheduled together,
>>> as a group.
>>
>> Correct.
>>
>> Each events get their own hardware counter. Grouped events are
>> co-scheduled on the hardware.
>
> And if we don't group them, then they _may_ not be co-scheduled (active/counting
> at the same time)? But how can that be possible?
> Say we have 2 counters, both the cmds below
>
>      perf stat -e cycles,instructions hackbench
>      perf stat -e '{cycles,instructions}' hackbench
>
> would assign 2 counters to the 2 conditions which keep counting until perf asks
> them to stop (because the profiled application ended)
>
> I don't understand the "scheduling" of counters - once we set them to count, there
> is no real intervention/scheduling from software in terms of disabling/enabling
> (assuming no multiplexing etc)

If you assume no multiplexing, then this discussion on grouping is moot.

It depends on how many events you specify, how many counters there are, and which counters can count which events.  If you specify a set of events for which every event can be counted simultaneously, they will be scheduled simultaneously and continuously.  If you specify more events than counters, there's multiplexing.  AND, if you specify a set of events, some of which cannot be counted simultaneously due to hardware limitations, they'll be multiplexed as well.

PC
