Date:   Mon, 2 Mar 2020 11:44:51 +0530
From:   kajoljain <kjain@...ux.ibm.com>
To:     Joakim Zhang <qiangqing.zhang@....com>,
        "acme@...nel.org" <acme@...nel.org>, Jiri Olsa <jolsa@...nel.org>,
        Andi Kleen <ak@...ux.intel.com>
Cc:     "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-perf-users@...r.kernel.org" <linux-perf-users@...r.kernel.org>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Kan Liang <kan.liang@...ux.intel.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Jin Yao <yao.jin@...ux.intel.com>,
        Madhavan Srinivasan <maddy@...ux.vnet.ibm.com>,
        Anju T Sudhakar <anju@...ux.vnet.ibm.com>,
        Ravi Bangoria <ravi.bangoria@...ux.ibm.com>
Subject: Re: [PATCH v4] tools/perf/metricgroup: Fix printing event names of
 metric group with multiple events in case of overlapping events



On 2/20/20 4:06 PM, Joakim Zhang wrote:
> 
>> -----Original Message-----
>> From: kajoljain <kjain@...ux.ibm.com>
>> Sent: February 20, 2020 17:54
>> To: Joakim Zhang <qiangqing.zhang@....com>; acme@...nel.org
>> Cc: linux-kernel@...r.kernel.org; linux-perf-users@...r.kernel.org; Jiri Olsa
>> <jolsa@...nel.org>; Alexander Shishkin <alexander.shishkin@...ux.intel.com>;
>> Andi Kleen <ak@...ux.intel.com>; Kan Liang <kan.liang@...ux.intel.com>; Peter
>> Zijlstra <peterz@...radead.org>; Jin Yao <yao.jin@...ux.intel.com>; Madhavan
>> Srinivasan <maddy@...ux.vnet.ibm.com>; Anju T Sudhakar
>> <anju@...ux.vnet.ibm.com>; Ravi Bangoria <ravi.bangoria@...ux.ibm.com>
>> Subject: Re: [PATCH v4] tools/perf/metricgroup: Fix printing event names of
>> metric group with multiple events in case of overlapping events
>>
>>
>>
>> On 2/17/20 8:41 AM, Joakim Zhang wrote:
>>>
>>>> -----Original Message-----
>>>> From: linux-perf-users-owner@...r.kernel.org
>>>> <linux-perf-users-owner@...r.kernel.org> On Behalf Of Kajol Jain
>>>> Sent: February 12, 2020 13:41
>>>> To: acme@...nel.org
>>>> Cc: linux-kernel@...r.kernel.org; linux-perf-users@...r.kernel.org;
>>>> kjain@...ux.ibm.com; Jiri Olsa <jolsa@...nel.org>; Alexander Shishkin
>>>> <alexander.shishkin@...ux.intel.com>; Andi Kleen
>>>> <ak@...ux.intel.com>; Kan Liang <kan.liang@...ux.intel.com>; Peter
>>>> Zijlstra <peterz@...radead.org>; Jin Yao <yao.jin@...ux.intel.com>;
>>>> Madhavan Srinivasan <maddy@...ux.vnet.ibm.com>; Anju T Sudhakar
>>>> <anju@...ux.vnet.ibm.com>; Ravi Bangoria
>>>> <ravi.bangoria@...ux.ibm.com>
>>>> Subject: [PATCH v4] tools/perf/metricgroup: Fix printing event names
>>>> of metric group with multiple events in case of overlapping events
>>>>
>>>> Commit f01642e4912b ("perf metricgroup: Support multiple events for
>>>> metricgroup") introduced support for multiple events in a metric
>>>> group. But with the current upstream, metric event names are not
>>>> printed properly in case we try to run multiple metric groups with
>>>> overlapping events.
>>>>
>>>> With the current upstream version, in case of overlapping metric events
>>>> the issue is that we always start our comparison logic from the beginning.
>>>> So, the events which already matched with some metric group also take
>>>> part in the comparison logic. Because of that, when we have overlapping
>>>> events, we end up matching the current metric group event with an already
>>>> matched one.
>>>>
>>>> For example, on a Skylake machine we have the metric events CoreIPC and
>>>> Instructions. Both of them need the 'inst_retired.any' event value.
>>>> As the events in Instructions are a subset of the events in CoreIPC, they end up
>>>> pointing to the same 'inst_retired.any' value.
>>>>
>>>> On the Skylake platform:
>>>>
>>>> command:# ./perf stat -M CoreIPC,Instructions  -C 0 sleep 1
>>>>
>>>>  Performance counter stats for 'CPU(s) 0':
>>>>
>>>>      1,254,992,790      inst_retired.any          # 1254992790.0 Instructions
>>>>                                                   #      1.3 CoreIPC
>>>>        977,172,805      cycles
>>>>      1,254,992,756      inst_retired.any
>>>>
>>>>        1.000802596 seconds time elapsed
>>>>
>>>> command:# sudo ./perf stat -M UPI,IPC sleep 1
>>>>
>>>>    Performance counter stats for 'sleep 1':
>>>>
>>>>            948,650      uops_retired.retire_slots
>>>>            866,182      inst_retired.any          #      0.7 IPC
>>>>            866,182      inst_retired.any
>>>>          1,175,671      cpu_clk_unhalted.thread
>>>>
>>>> The patch fixes the issue by adding a new bool pointer 'evlist_used' to
>>>> keep track of events which already matched with some group, by setting it
>>>> true.
>>>> So, we skip all used events in the list when we start the comparison logic.
>>>> The patch also makes some changes in the comparison logic: in case we get a
>>>> match miss, we discard the whole match and start again with the first
>>>> event id in the metric event.
>>>>
>>>> With this patch:
>>>> On the Skylake platform:
>>>>
>>>> command:# ./perf stat -M CoreIPC,Instructions  -C 0 sleep 1
>>>>
>>>>  Performance counter stats for 'CPU(s) 0':
>>>>
>>>>          3,348,415      inst_retired.any          #      0.3 CoreIPC
>>>>         11,779,026      cycles
>>>>          3,348,381      inst_retired.any          # 3348381.0 Instructions
>>>>
>>>>        1.001649056 seconds time elapsed
>>>>
>>>> command:# ./perf stat -M UPI,IPC sleep 1
>>>>
>>>>  Performance counter stats for 'sleep 1':
>>>>
>>>>          1,023,148      uops_retired.retire_slots #      1.1 UPI
>>>>            924,976      inst_retired.any
>>>>            924,976      inst_retired.any          #      0.6 IPC
>>>>          1,489,414      cpu_clk_unhalted.thread
>>>>
>>>>        1.003064672 seconds time elapsed
>>>>
>>>> Signed-off-by: Kajol Jain <kjain@...ux.ibm.com>
>>>> Cc: Jiri Olsa <jolsa@...nel.org>
>>>> Cc: Alexander Shishkin <alexander.shishkin@...ux.intel.com>
>>>> Cc: Andi Kleen <ak@...ux.intel.com>
>>>> Cc: Kan Liang <kan.liang@...ux.intel.com>
>>>> Cc: Peter Zijlstra <peterz@...radead.org>
>>>> Cc: Jin Yao <yao.jin@...ux.intel.com>
>>>> Cc: Madhavan Srinivasan <maddy@...ux.vnet.ibm.com>
>>>> Cc: Anju T Sudhakar <anju@...ux.vnet.ibm.com>
>>>> Cc: Ravi Bangoria <ravi.bangoria@...ux.ibm.com>
>>>> ---
>>>>  tools/perf/util/metricgroup.c | 50
>>>> ++++++++++++++++++++++-------------
>>>>  1 file changed, 31 insertions(+), 19 deletions(-)
>>>
>>> Hi Kajol,
>>>
>>> I am not sure if it is good to ask a question here :-)
>>>
>>> I encountered a perf metricgroup issue: the result is incorrect when the metric
>>> includes more than one event.
>>>
>>> git log --oneline tools/perf/util/metricgroup.c
>>> 3635b27cc058 perf metricgroup: Fix printing event names of metric group with multiple events
>>> f01642e4912b perf metricgroup: Support multiple events for metricgroup
>>> 287f2649f791 perf metricgroup: Scale the metric result
>>>
>>> I did a simple test; below are the JSON file and the result.
>>> [
>>>         {
>>>              "PublicDescription": "Calculate DDR0 bus actual utilization which vary from DDR0 controller clock frequency",
>>>              "BriefDescription": "imx8qm: ddr0 bus actual utilization",
>>>              "MetricName": "imx8qm-ddr0-bus-util",
>>>              "MetricExpr": "( imx8_ddr0\\/read\\-cycles\\/ + imx8_ddr0\\/write\\-cycles\\/ )",
>>>              "MetricGroup": "i.MX8QM_DDR0_BUS_UTIL"
>>>         }
>>> ]
>>> ./perf stat -I 1000 -M imx8qm-ddr0-bus-util
>>> #           time             counts unit events
>>>      1.000104250              16720      imx8_ddr0/read-cycles/    #  22921.0 imx8qm-ddr0-bus-util
>>>      1.000104250               6201      imx8_ddr0/write-cycles/
>>>      2.000525625               8316      imx8_ddr0/read-cycles/    #  12785.5 imx8qm-ddr0-bus-util
>>>      2.000525625               2738      imx8_ddr0/write-cycles/
>>>      3.000819125               1056      imx8_ddr0/read-cycles/    #   4136.7 imx8qm-ddr0-bus-util
>>>      3.000819125                303      imx8_ddr0/write-cycles/
>>>      4.001103750               6260      imx8_ddr0/read-cycles/    #   9149.8 imx8qm-ddr0-bus-util
>>>      4.001103750               2317      imx8_ddr0/write-cycles/
>>>      5.001392750               2084      imx8_ddr0/read-cycles/    #   4516.0 imx8qm-ddr0-bus-util
>>>      5.001392750                601      imx8_ddr0/write-cycles/
>>>
>>> You can see that only the first result is correct. Could this be reproduced at your side?
>>
>> Hi Joakim,
>>         Will try to look into it from my side.
> 

> Thanks Kajol for your help. I looked into this issue, but I don't know how to fix it.
> 
> The results are always correct if a single event is used in "MetricExpr" with the "-I" parameter, but the results are incorrect when more than one event is used in "MetricExpr".
> 

Hi Joakim,
    So, I tried to look into this issue and understand the flow. From my understanding, whenever we
    calculate a metric expression we don't use the exact counts we are getting.
    Basically, we use the mean value of each event in the metric expression calculation.

So, I am taking the same example you referred to.

Metric Event: imx8qm-ddr0-bus-util
MetricExpr": "( imx8_ddr0\\/read\\-cycles\\/ + imx8_ddr0\\/write\\-cycles\\/ )"

command#: ./perf stat -I 1000 -M imx8qm-ddr0-bus-util

#           time             counts unit events
     1.000104250              16720      imx8_ddr0/read-cycles/    #  22921.0 imx8qm-ddr0-bus-util
     1.000104250               6201      imx8_ddr0/write-cycles/
     2.000525625               8316      imx8_ddr0/read-cycles/    #  12785.5 imx8qm-ddr0-bus-util
     2.000525625               2738      imx8_ddr0/write-cycles/
     3.000819125               1056      imx8_ddr0/read-cycles/    #   4136.7 imx8qm-ddr0-bus-util
     3.000819125                303      imx8_ddr0/write-cycles/
     4.001103750               6260      imx8_ddr0/read-cycles/    #   9149.8 imx8qm-ddr0-bus-util
     4.001103750               2317      imx8_ddr0/write-cycles/
     5.001392750               2084      imx8_ddr0/read-cycles/    #   4516.0 imx8qm-ddr0-bus-util
     5.001392750                601      imx8_ddr0/write-cycles/

If you look, we have a function called 'update_stats' in util/stat.c where we do this calculation
and update the stats->mean value. This mean value is what we actually use in our
metric expression calculation.

We call this function in each iteration, where we update stats->mean and stats->n for each event.
But one odd issue is that, for the very first event, stats->n is always 1, which is why its
mean is the same as its count.
This is why, for a single event, you get the exact aggregate of the metric expression.
So no matter how many events you have in your metric expression, you always take the
exact count for the first one and the normalized (mean) value for the rest, which is inconsistent.

According to the update_stats function, we update the mean as:

stats->mean += delta / stats->n, where delta = val - stats->mean.
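
For reference, the relevant part of update_stats() in util/stat.c looks roughly like the sketch
below (simplified from memory: min/max tracking and the callers are omitted, and the u64 typedef
is just a stand-in so the snippet is self-contained). It maintains a Welford-style running mean
over all counts seen so far for one event:

typedef unsigned long long u64;   /* stand-in for the tools/perf u64 type */

struct stats {
	double n, mean, M2;       /* subset of perf's struct stats */
};

/* Running-mean update: after the k-th interval, stats->mean is the
 * average of the first k counts seen for this event.
 */
static void update_stats(struct stats *stats, u64 val)
{
	double delta;

	stats->n++;
	delta = val - stats->mean;
	stats->mean += delta / stats->n;
	stats->M2 += delta * (val - stats->mean);
}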

If we take write-cycles here: initially mean = 0 and n = 1.

1st iteration: n=1, write-cycles mean = 6201                                   (final aggregate: 16720 + 6201 = 22921)
2nd iteration: n=2, write-cycles mean = 6201 + (2738 - 6201)/2 = 4469.5        (final aggregate: 8316 + 4469.5 = 12785.5)
3rd iteration: n=3, write-cycles mean = 4469.5 + (303 - 4469.5)/3 = 3080.6667  (final aggregate: 1056 + 3080.6667 = 4136.7)
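
To double-check the arithmetic, here is a small standalone C snippet (not perf code, just a
demonstration under the assumption described above: the raw read-cycles count is added to the
running mean of write-cycles). It reproduces the metric values printed above, up to rounding:

#include <stdio.h>

int main(void)
{
	/* per-interval counts taken from the perf -I output above */
	const double reads[]  = { 16720, 8316, 1056, 6260, 2084 };
	const double writes[] = {  6201, 2738,  303, 2317,  601 };
	double mean = 0.0;
	int n = 0;

	for (int i = 0; i < 5; i++) {
		n++;
		mean += (writes[i] - mean) / n;   /* running mean of write-cycles */
		/* metric as currently computed: raw read-cycles + mean(write-cycles) */
		printf("interval %d: %.1f imx8qm-ddr0-bus-util\n", i + 1, reads[i] + mean);
	}
	return 0;
}

This prints 22921.0, 12785.5, 4136.7, 9149.8 and 4516.0, matching the -I output above, whereas
the expected per-interval values would simply be reads[i] + writes[i].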

Andi and Jiri, I am not sure if this is expected behavior. Shouldn't we either take the mean value of every event,
or take n as 1 for every event? And one more question: should we add an option so the user can choose between the
exact aggregate and this normalized aggregate, to remove the confusion? I tried to find out whether we already have
such an option, but couldn't. Please let me know if my understanding is correct.

Thanks,
Kajol


> Hope you can find the root cause :-)
> 
> Best Regards,
> Joakim Zhang
>> Thanks,
>> Kajol
>>>
>>> Thanks a lot!
>>>
>>> Best Regards,
>>> Joakim Zhang
>>>
